[Plugin] unbalanced



I used unBALANCE successfully in the past, but that was a few years ago.

When I tried to activate it today, nothing happened; the server did not start. In the logs I see the command used to start unBALANCE, but no subsequent errors.

 

Any ideas what could cause this and how to fix it?
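
A couple of quick sanity checks from the Unraid console can confirm whether the unBALANCE daemon is actually running and listening (a minimal sketch; 6237 is the default web UI port shown on the plugin's settings page, so adjust if yours is configured differently):

```bash
# Is the unbalance process still running after the plugin was started?
ps aux | grep -i '[u]nbalance'

# Is anything listening on the web UI port?
netstat -tlnp 2>/dev/null | grep 6237 || ss -tlnp | grep 6237
```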

 

Link to comment
On 7/29/2021 at 7:32 AM, jbrodriguez said:

Thanks, that would be really representative!

You should be able to find disks.ini at /var/local/emhttp/disks.ini.

The tool I mentioned before scrubs the serial id of each disk (id, idSb lines), you may want to remove additional data.

 

The reply notification got lost, which is why this took so long. I used the tool and grabbed the sanitized disks.ini out of it. If you need the other files, I can check them before attaching.
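
For anyone who wants to strip additional fields by hand before attaching, a rough sed pass over a copy of the file works (the id/idSb field names come from the post above; add any other keys you consider sensitive):

```bash
# Work on a copy, not the live /var/local/emhttp/disks.ini
cp /var/local/emhttp/disks.ini /tmp/disks.ini

# Blank out the serial-number fields (and anything else you don't want to share)
sed -i -e 's/^id=.*/id="REDACTED"/' \
       -e 's/^idSb=.*/idSb="REDACTED"/' /tmp/disks.ini
```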

 

Btw, my streamlabs share and TV share use the main array. First I filled the two cache pools as much as I could and then pointed the shares at the main array. That might make things weird for unBALANCE, but I assume I'm not a typical case here. Also, the monthly parity check is still running for an estimated 15 more hours. I assume that wouldn't produce different data, but I'm pointing it out in case Unraid hides something during a check.

disks.ini

Link to comment
23 minutes ago, jbrodriguez said:

First thoughts are that there's a "type" property (Cache), but so far I can't find a prop to group a pool, except for the name.

 

Sort by type="Cache" and then by ["poolname*"] to group ["tv_pool6"] in with ["tv_pool"] and ["cache2"] with ["cache"]? Unless you can name two pools the same, in which case :/

 

Edit: oh, I see the name="cache3" now as well.
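
For reference, the pool layout under discussion can be pulled straight from that file; this just prints each section header with its name/type lines, so the ["cache"]/["cache2"]/["cache3"] and ["tv_pool"]/["tv_pool6"] grouping is visible at a glance:

```bash
# Show each disk/pool section with its name and type fields
grep -E '^\[|^name=|^type=' /var/local/emhttp/disks.ini
```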

Edited by Cull2ArcaHeresy
Link to comment

Hi,

I've been using unBALANCE for quite some time now; it's really a great tool for reorganizing data among disks.
I recently upgraded some of my disks to higher capacities and am therefore reallocating their usage.
Now I have an issue: I have some big files of ~100 GiB each to transfer, in this example from disk2 to disk3.
disk2 has 509 GB free, and disk3 has 5.2 TB free.
Here's the output of the "Plan" phase when I try to transfer more than one file:

PLANNING: Found /mnt/disk2/<sharename>/file1 (108.83 GB)
PLANNING: Found /mnt/disk2/<sharename>/file2 (108.84 GB)
PLANNING: Trying to allocate items to disk3 ...
PLANNING: Ended: Sep 9, 2021 22:41:06
PLANNING: Elapsed: 0s
PLANNING: The following items will not be transferred, because there's not enough space in the target disks:
PLANNING: <sharename>/file2
PLANNING: Planning Finished

 

I can transfer the files one at a time, but if I try to select two or more, unBALANCE refuses to move anything after the first, "because there's not enough space in the target disks:". No disk is indicated.

 

Below is a snapshot of the array:
[attached screenshot: array.jpg, showing free space per disk]

 

The share is set to include disk2 through disk7. As can be seen, disk4 through disk7 don't have 100 GB free, but of course I only target disk3 in the move operation, as the log shows. Also, I've never used unBALANCE on such big files before; maybe that's another possible cause.

 

 

Any help or advice would be appreciated.

 

Edited by Gnomuz
typos
Link to comment
On 9/9/2021 at 4:02 PM, Gnomuz said:

I can transfer the files one at a time, but if I try to select two or more, unBALANCE refuses to move anything after the first, "because there's not enough space in the target disks:". No disk is indicated.

Hi, I've been busy with work.

Maybe you've already solved it; if not, I'd take a look at the logs. There's a section where it shows free space per disk and the allocations; perhaps that offers some clue.

Edited by jbrodriguez
Link to comment

Thanks for answering, but unfortunately I didn't solve it in the meantime. I tried setting the reserved space to the minimum, with no effect.
Here are the logs when I plan a transfer of two ~109 GB files to a drive (disk3) with 5.2 TB free:
 

I: 2021/09/16 22:15:26 planner.go:70: Running scatter planner ...
I: 2021/09/16 22:15:26 planner.go:84: scatterPlan:source:(/mnt/disk2)
I: 2021/09/16 22:15:26 planner.go:86: scatterPlan:dest:(/mnt/disk3)
I: 2021/09/16 22:15:26 planner.go:520: planner:array(8 disks):blockSize(4096)
I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/disk1):fs(xfs):size(3998833471488):free(2304975196160):blocksTotal(976277703):blocksFree(562738085)
I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/disk2):fs(xfs):size(3998833471488):free(509188435968):blocksTotal(976277703):blocksFree(124313583)
I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/disk3):fs(xfs):size(13998382592000):free(5195027685376):blocksTotal(3417573875):blocksFree(1268317306)
I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/disk4):fs(xfs):size(13998382592000):free(30423252992):blocksTotal(3417573875):blocksFree(7427552)
I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/disk5):fs(xfs):size(13998382592000):free(30410973184):blocksTotal(3417573875):blocksFree(7424554)
I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/disk6):fs(xfs):size(13998382592000):free(30273011712):blocksTotal(3417573875):blocksFree(7390872)
I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/disk7):fs(xfs):size(13998382592000):free(29930565632):blocksTotal(3417573875):blocksFree(7307267)
I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/cache):fs(xfs):size(511859089408):free(246631960576):blocksTotal(124965598):blocksFree(60212881)
I: 2021/09/16 22:15:26 planner.go:351: scanning:disk(/mnt/disk2):folder(Chia/portables/1f7n0em0/plot-k32-2021-07-26-14-12-92fa0e550e27c898b01b7d7b839da2a60d7b2ad6f55a9ecf86f1a78627a62b4e.plot)
W: 2021/09/16 22:15:26 planner.go:362: issues:not-available:(exit status 1)
I: 2021/09/16 22:15:26 planner.go:380: items:count(1):size(108.83 GB)
I: 2021/09/16 22:15:26 planner.go:351: scanning:disk(/mnt/disk2):folder(Chia/portables/1f7n0em0/plot-k32-2021-07-26-15-08-b9330f6ab9e903b464cc97adfecb410f678c90d51b702dbde33c9b58d36de453.plot)
W: 2021/09/16 22:15:26 planner.go:362: issues:not-available:(exit status 1)
I: 2021/09/16 22:15:26 planner.go:380: items:count(1):size(108.84 GB)
I: 2021/09/16 22:15:26 planner.go:110: scatterPlan:items(2)
I: 2021/09/16 22:15:26 planner.go:113: scatterPlan:found(/mnt/disk2/Chia/portables/1f7n0em0/plot-k32-2021-07-26-14-12-92fa0e550e27c898b01b7d7b839da2a60d7b2ad6f55a9ecf86f1a78627a62b4e.plot):size(108828032623)
I: 2021/09/16 22:15:26 planner.go:113: scatterPlan:found(/mnt/disk2/Chia/portables/1f7n0em0/plot-k32-2021-07-26-15-08-b9330f6ab9e903b464cc97adfecb410f678c90d51b702dbde33c9b58d36de453.plot):size(108835523381)
I: 2021/09/16 22:15:26 planner.go:120: scatterPlan:issues:owner(0),group(0),folder(0),file(0)
I: 2021/09/16 22:15:26 planner.go:129: scatterPlan:Trying to allocate items to disk3 ...
I: 2021/09/16 22:15:26 planner.go:134: scatterPlan:ItemsLeft(2):ReservedSpace(536870912)
I: 2021/09/16 22:15:26 planner.go:463: scatterPlan:1 items will be transferred.
I: 2021/09/16 22:15:26 planner.go:465: scatterPlan:willBeTransferred(Chia/portables/1f7n0em0/plot-k32-2021-07-26-14-12-92fa0e550e27c898b01b7d7b839da2a60d7b2ad6f55a9ecf86f1a78627a62b4e.plot)
I: 2021/09/16 22:15:26 planner.go:473: scatterPlan:1 items will NOT be transferred.
I: 2021/09/16 22:15:26 planner.go:479: scatterPlan:notTransferred(Chia/portables/1f7n0em0/plot-k32-2021-07-26-15-08-b9330f6ab9e903b464cc97adfecb410f678c90d51b702dbde33c9b58d36de453.plot)
I: 2021/09/16 22:15:26 planner.go:488: scatterPlan:ItemsLeft(1)
I: 2021/09/16 22:15:26 planner.go:489: scatterPlan:Listing (8) disks ...
I: 2021/09/16 22:15:26 planner.go:503: =========================================================
I: 2021/09/16 22:15:26 planner.go:504: disk(/mnt/disk1):no-items:currentFree(2.30 TB)
I: 2021/09/16 22:15:26 planner.go:505: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:506: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:507:
I: 2021/09/16 22:15:26 planner.go:503: =========================================================
I: 2021/09/16 22:15:26 planner.go:504: disk(/mnt/disk2):no-items:currentFree(509.19 GB)
I: 2021/09/16 22:15:26 planner.go:505: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:506: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:507:
I: 2021/09/16 22:15:26 planner.go:492: =========================================================
I: 2021/09/16 22:15:26 planner.go:493: disk(/mnt/disk3):items(1)-(108.83 GB):currentFree(5.20 TB)-plannedFree(5.09 TB)
I: 2021/09/16 22:15:26 planner.go:494: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:497: [108.83 GB] Chia/portables/1f7n0em0/plot-k32-2021-07-26-14-12-92fa0e550e27c898b01b7d7b839da2a60d7b2ad6f55a9ecf86f1a78627a62b4e.plot
I: 2021/09/16 22:15:26 planner.go:500: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:501:
I: 2021/09/16 22:15:26 planner.go:503: =========================================================
I: 2021/09/16 22:15:26 planner.go:504: disk(/mnt/disk4):no-items:currentFree(30.42 GB)
I: 2021/09/16 22:15:26 planner.go:505: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:506: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:507:
I: 2021/09/16 22:15:26 planner.go:503: =========================================================
I: 2021/09/16 22:15:26 planner.go:504: disk(/mnt/disk5):no-items:currentFree(30.41 GB)
I: 2021/09/16 22:15:26 planner.go:505: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:506: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:507:
I: 2021/09/16 22:15:26 planner.go:503: =========================================================
I: 2021/09/16 22:15:26 planner.go:504: disk(/mnt/disk6):no-items:currentFree(30.27 GB)
I: 2021/09/16 22:15:26 planner.go:505: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:506: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:507:
I: 2021/09/16 22:15:26 planner.go:503: =========================================================
I: 2021/09/16 22:15:26 planner.go:504: disk(/mnt/disk7):no-items:currentFree(29.93 GB)
I: 2021/09/16 22:15:26 planner.go:505: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:506: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:507:
I: 2021/09/16 22:15:26 planner.go:503: =========================================================
I: 2021/09/16 22:15:26 planner.go:504: disk(/mnt/cache):no-items:currentFree(246.63 GB)
I: 2021/09/16 22:15:26 planner.go:505: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:506: ---------------------------------------------------------
I: 2021/09/16 22:15:26 planner.go:507:
I: 2021/09/16 22:15:26 planner.go:511: =========================================================
I: 2021/09/16 22:15:26 planner.go:512: Bytes To Transfer: 108.83 GB
I: 2021/09/16 22:15:26 planner.go:513: ---------------------------------------------------------

It clearly states that the second file will NOT be transferred, without any further obvious explanation.
I suspect the problem is due to the huge size of the files, as I managed to transfer multiple smaller files from disk2 to disk3 without any issue. Maybe some kind of overflow in an intermediate variable?
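
For what it's worth, plugging the raw byte counts from the log above into a quick shell calculation shows that raw free space can't be the blocker, which fits the overflow suspicion:

```bash
# Values copied from the planner log above (planner.go:522, :113 and :134)
free_disk3=5195027685376     # free bytes on /mnt/disk3
file1=108828032623           # first plot
file2=108835523381           # second plot
reserved=536870912           # ReservedSpace reported by the planner

echo $(( free_disk3 - reserved - file1 - file2 ))
# => 4976827258460 bytes (~4.98 TB) would still be free after both transfers
```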

Thanks in advance for your support.

Link to comment
On 9/16/2021 at 3:29 PM, Gnomuz said:

Thanks in advance for your support.

Sure thing, thanks for posting the log.

 

OK, actually there's an issue with one of your files:

 

I: 2021/09/16 22:15:26 planner.go:351: scanning:disk(/mnt/disk2):folder(xxxx/portables/1f7n0em0/plot-k32-2021-07-26-14-12-92fa0e550e27c898b01b7d7b839da2a60d7b2ad6f55a9ecf86f1a78627a62b4e.plot)
W: 2021/09/16 22:15:26 planner.go:362: issues:not-available:(exit status 1)

 

That exit status 1 means there's something odd about that file; I'm not sure what it is.

 

Permissions? Timestamp? A corrupt file?

 

Just from a quick look, that seems to be the issue.
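
The log doesn't show which command the scanner actually runs, but some generic checks on the offending file might surface whatever is producing the exit status 1 (path shortened here; use the full plot path from the log):

```bash
# Substitute the full plot path from the log
f=/mnt/disk2/Chia/portables/1f7n0em0/plot-k32-2021-07-26-14-12-*.plot

ls -l $f                    # owner, permissions, size
stat $f                     # timestamps and inode details
du -s --block-size=1 $f     # can du size it without erroring?
```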

 

Link to comment

Thanks for the feedback. I had noticed the "not-available" exit code but hadn't paid attention to the fact that it was a "Warning" log entry.

 

So I understand the problem would be in the scan phase. If I may, the warning line you point out relates to the first file, the one that unBALANCE happily moves:
 

I: 2021/09/16 22:15:26 planner.go:351: scanning:disk(/mnt/disk2):folder(XXXX/portables/1f7n0em0/plot-k32-2021-07-26-14-12-92fa0e550e27c898b01b7d7b839da2a60d7b2ad6f55a9ecf86f1a78627a62b4e.plot)
W: 2021/09/16 22:15:26 planner.go:362: issues:not-available:(exit status 1)
I: 2021/09/16 22:15:26 planner.go:380: items:count(1):size(108.83 GB)
I: 2021/09/16 22:15:26 planner.go:351: scanning:disk(/mnt/disk2):folder(XXXX/portables/1f7n0em0/plot-k32-2021-07-26-15-08-b9330f6ab9e903b464cc97adfecb410f678c90d51b702dbde33c9b58d36de453.plot)
W: 2021/09/16 22:15:26 planner.go:362: issues:not-available:(exit status 1)
I: 2021/09/16 22:15:26 planner.go:380: items:count(1):size(108.84 GB)
I: 2021/09/16 22:15:26 planner.go:110: scatterPlan:items(2)

The warnings are the same for both files, but in the end the result of the planning phase is that the first file is transferred and the second is not. It's the same if I select three or more files: only the first one is selected for transfer, the next ones "will NOT be transferred", and the same warning appears for every file, including the first one, in the scan phase.

 

Permissions and timestamps seem fine 
 

-rw-rw-rw- 1 nobody users 102G Jul 26 15:36 plot-k32-2021-07-26-14-12-92fa0e550e27c898b01b7d7b839da2a60d7b2ad6f55a9ecf86f1a78627a62b4e.plot
-rw-rw-rw- 1 nobody users 102G Jul 26 16:33 plot-k32-2021-07-26-15-08-b9330f6ab9e903b464cc97adfecb410f678c90d51b702dbde33c9b58d36de453.plot

 

As for file corruption, these are Chia plots, as you've seen. There are Chia-specific tools that check the integrity of plots, and I'm positive these are not corrupted at the application level, let alone at the block level.


Other similar transfers with smaller files worked just fine in unBALANCE. I also manually transferred a few Chia plots from disk2 to disk3 from the CLI, using the exact same rsync command unBALANCE generates, and it worked like a charm. So I really suspect the individual size of the files somehow raises an exception in the code behind the "scan" phase, which prevents the transfer of more than one file at a time.

 

Should you need any further information from me, don't hesitate to ask.

Edited by Gnomuz
Link to comment
  • 3 weeks later...

I can't access the web UI for unBALANCE.

When I click the link to access the web GUI, it redirects to 10.0.0.20:6238 and I get an error from Chrome:

 

10.0.0.20 normally uses encryption to protect your information. When Chrome tried to connect to 10.0.0.20 this time, the website sent back unusual and incorrect credentials. This may happen when an attacker is trying to pretend to be 10.0.0.20, or a Wi-Fi sign-in screen has interrupted the connection. Your information is still secure because Chrome stopped the connection before any data was exchanged.

You cannot visit 10.0.0.20 right now because the website sent scrambled credentials that Chrome cannot process. Network errors and attacks are usually temporary, so this page will probably work later.


Normally you can just proceed anyway, but now it won't let me do anything. I tried Safari too; it asked for a login, I tried my Unraid login, and it didn't work.

What am I doing wrong? Thank you!

Link to comment
  • 4 weeks later...
On 7/25/2021 at 2:21 PM, jbrodriguez said:

It doesn't support multiple cache pools, as far as I can tell.

 

If you have multiple cache pools,  would you mind following the instructions in https://github.com/jbrodriguez/controlr-support ?

This would allow me to check out how the multiple pool drives are named/defined

This explains why I can't see my two cache drives.
I wanted to use unBALANCE to swap out one of the cache drives.

Should I wait for an update?

Link to comment

I ran a few gather jobs.

 

I am getting a yellow check mark next to some folders. What does that mean?

Does it mean the file or folder was not copied?

Does it mean that my data is corrupt?

 

I checked the logs, but the job was a couple of days ago, so the logs no longer show the data for it.

I tried opening some of the data, and the files open fine from a Windows PC.
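
If the source copy of a gathered folder still exists, a checksum-only rsync dry run is one way to confirm the destination matches it without transferring anything (paths below are placeholders):

```bash
# -r recurse, -c compare by checksum, -n dry run, -i itemize differences
rsync -rcni /mnt/diskX/Share/Folder/ /mnt/diskY/Share/Folder/
# No itemized output means the two copies match.
```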

 

 

 

Link to comment
On 10/30/2021 at 6:28 AM, okkies said:

This explains why I can't see my two cache drives.
I wanted to use unBALANCE to swap out one of the cache drives.

Should I wait for an update?


I am fairly new to Unraid, but in my research I stumbled across something that Spaceinvader One posted on his YouTube channel a few years back. There may be a better way by now, but this may get you going sooner rather than later.

The video walks through adding a cache pool, but later on it covers how to upgrade or replace a cache drive using Krusader.

Here's where they start discussing upgrading or replacing the cache drive (at 8:40):
 

 

Link to comment
  • 5 weeks later...

I've got an issue where this plugin unnecessarily wakes up spun-down disks almost immediately after they've been spun down. I finally traced the issue to unBALANCE by scrolling through htop. I'm unsure why this happens, but I'm definitely leaving the plugin disabled for now. Disabling it has fixed the issue.

 

Dec 1 14:24:22 Galactica emhttpd: spinning down /dev/sdd
Dec 1 14:25:29 Galactica emhttpd: read SMART /dev/sdd
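
For anyone trying to confirm what is touching a freshly spun-down disk, these generic checks may help, although they can miss a transient poll (disk path is an example):

```bash
# List processes that currently have files open under a given disk mount
lsof +D /mnt/disk4 2>/dev/null

# Or, more lightweight: which processes are using that filesystem at all
fuser -vm /mnt/disk4
```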

Link to comment

Having some hardlink troubles. I added '-H' to unBALANCE's flags (so the field now reads '-XH'; the X is a default). Then I moved a folder containing hardlinked files to another drive. I hoped the system would recognize that these files were hardlinked, find the other copies, and copy them over as well, all while keeping the directory structures that hold the files intact.

 

That unfortunately didn't happen. The initial files were moved just fine, but the other hardlinked files are still on the original disk. So the end result is the same files on two drives, and the space savings of hardlinking are temporarily undone.

 

What am I doing wrong here? Any help greatly appreciated, thanks.

Link to comment
On 12/8/2021 at 2:09 PM, thatsthefrickenlightning said:

Having some hardlink troubles. I added '-H' to unBALANCE's flags (so the field now reads '-XH'; the X is a default). Then I moved a folder containing hardlinked files to another drive. I hoped the system would recognize that these files were hardlinked, find the other copies, and copy them over as well, all while keeping the directory structures that hold the files intact.

 

That unfortunately didn't happen. The initial files were moved just fine, but the other hardlinked files are still on the original disk. So the end result is the same files on two drives, and the space savings of hardlinking are temporarily undone.

 

What am I doing wrong here? Any help greatly appreciated, thanks.

 

If you're moving a folder named "/mnt/user/media" that has hardlinks into "/mnt/user/downloads", then unBALANCE (which uses rsync) will not touch the "/mnt/user/downloads" files, even though there are hardlinks to files in there. Hardlinks are only preserved among files within "/mnt/user/media", i.e. within the same transfer. Also, hardlinks in general act pretty weird on Unraid and they're not recommended, iirc.
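
A minimal illustration of that behaviour (paths and flags are examples, not the exact command unBALANCE generates): rsync -H only re-creates a hard link on the destination when both linked paths are part of the same transfer.

```bash
# Both sides of the hard link are in one transfer, so -H can re-link them on the destination:
rsync -aXH /mnt/disk1/media /mnt/disk1/downloads /mnt/disk2/

# Only one side is transferred: the file is copied as a normal file and the
# space saving of the hard link is lost:
rsync -aXH /mnt/disk1/media /mnt/disk2/
```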

Link to comment
  • 2 weeks later...

Anybody else seeing issues launching the webUI in Chrome?
 

192.168.20.230 normally uses encryption to protect your information. When Chrome tried to connect to 192.168.20.230 this time, the website sent back unusual and incorrect credentials. This may happen when an attacker is trying to pretend to be 192.168.20.230, or a Wi-Fi sign-in screen has interrupted the connection. Your information is still secure because Chrome stopped the connection before any data was exchanged.

You cannot visit 192.168.20.230 right now because the website sent scrambled credentials that Chrome cannot process. Network errors and attacks are usually temporary, so this page will probably work later.

 

I also noticed that when you try to open the UI, it forwards to port 6238, not the 6237 listed on the settings page.

 

The UI opens with no issues in Firefox after adding an SSL exception. It still forwards to 6238, though. Not sure if this matters or not, but it's something I noticed.
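
To see what is actually being served on that port, curl/openssl can dump the handshake and certificate (IP and port taken from the post above; whether 6238 really is the plugin's TLS endpoint is exactly what this would reveal):

```bash
# Does the port answer TLS at all, and with what certificate?
curl -vk https://192.168.20.230:6238/ -o /dev/null

openssl s_client -connect 192.168.20.230:6238 -servername 192.168.20.230 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```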

 

 

Link to comment

First, a huge heartfelt thank you for this awesome utility.

 

I'm trying to clean up my share data so that each share is no longer scattered across numerous drives (so that "movies", for example, is limited to 3 drives and Unraid only has to spin those drives up when movies are being accessed).

 

I'm running into a hurdle with the Gather function, however: when I get to the screen for selecting the target drives to gather the share data onto, only a few drives show up (and they don't happen to be the drives I want to target). I'm guessing I'm probably missing something simple. I've disabled all of my Dockers and VMs, so nothing should be accessing the drives, and I've tried spinning up all the drives beforehand as well.

 

I have been able to work around this by piecemealing the transfer, using Scatter to move share data disk by disk, but that's obviously a much more manual process.

 

Any ideas?

 

Thanks in advance!

Link to comment
On 12/21/2021 at 1:16 PM, zombie said:

Anybody else seeing issues launching the webUI in Chrome?
 

192.168.20.230 normally uses encryption to protect your information. When Chrome tried to connect to 192.168.20.230 this time, the website sent back unusual and incorrect credentials. This may happen when an attacker is trying to pretend to be 192.168.20.230, or a Wi-Fi sign-in screen has interrupted the connection. Your information is still secure because Chrome stopped the connection before any data was exchanged.

You cannot visit 192.168.20.230 right now because the website sent scrambled credentials that Chrome cannot process. Network errors and attacks are usually temporary, so this page will probably work later.

 

I also noticed that when you try to open the UI, it forwards to port 6238, not the 6237 listed on the settings page.

 

The UI opens with no issues in Firefox after adding an SSL exception. It still forwards to 6238, though. Not sure if this matters or not, but it's something I noticed.

 

Yes, same issue here.

 

Link to comment
