[DOCKER] unBALANCE



I have had this issue for a while and wanted to figure out why I am getting this error. I have set up all of my directories, and a few of the drives will move correctly. However, some give me the error message "undefined is not an object (evaluating 'data.condition')", followed by "XHR failed for calculateBestFit". Any ideas what is wrong?

error.jpg

config.txt

unbalance.txt


...

 

Hi derbtv,

 

From the log I see that it crashes trying to read some folder located at /mnt/disk8/ISOImages.

 

If you're ok with dropping to the command line, would you try these two commands:

 

ls -al /mnt/disk8/ISOImages

to check ownership/permissions. Ownership should be nobody:users, and permissions should allow the file/folder to be read.

 

du -bs /mnt/disk8/ISOImages/*

That's the command the app is crashing on, so it would be nice to see what it's actually doing (-b reports apparent sizes in bytes, -s gives one summary line per argument).


...

ls -al /mnt/disk8/ISOImages

drwxrwxrwx  3 nobody users  80 Feb 11  2014 ./

drwxrwxrwx 13 nobody users 312 May 10 09:13 ../

drwxrwxrwx  3 nobody users 112 May 10 19:40 .TemporaryItems/

 

du -bs /mnt/disk8/ISOImages/*

du: cannot access ‘/mnt/disk8/ISOImages/*’: No such file or directory

 

Thanks!


Welp, we have it figured out. It is the freaking temp files created by my Mac. I have deleted all of them and it is working as expected. Thanks! I do have another question: for some drives it tells me that it is going to completely free them up, while for others it only clears up a portion. Any idea how to tell what it is not going to move, and why?


...

 

It's good to know that the temp files were causing this issue. I'll review the code and check how I can make it work better in this scenario.
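For what it's worth, a minimal sketch of a more tolerant size scan, assuming the app keeps shelling out to du -bs <dir>/* today (that glob matches nothing when a folder holds only hidden entries like .TemporaryItems): walking the tree directly and summing file sizes avoids the glob entirely. folderSize is an illustrative name, not unBALANCE's actual code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// folderSize walks root and sums the sizes of all regular files,
// returning bytes much like `du -bs` would, without relying on a
// shell glob that can fail on folders with only hidden entries.
func folderSize(root string) (int64, error) {
	var total int64
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err // surface unreadable entries instead of crashing later
		}
		if !info.IsDir() {
			total += info.Size()
		}
		return nil
	})
	return total, err
}

func main() {
	size, err := folderSize("/mnt/disk8/ISOImages")
	if err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
		os.Exit(1)
	}
	fmt.Println(size, "bytes")
}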

 

About the other question: internally, the app works more or less like this (a rough sketch in code follows the list):

- When you select a disk as source, it builds a list of all the files and folders it contains.

- It then sorts the other disks by remaining space and allocates as many files/folders to each disk as space allows.

- Sometimes there isn't enough space on the other disks to allocate more folders, so those are not moved.
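A rough sketch of that allocation, with made-up types (Item, Disk and bestFit are illustrative names, not unBALANCE's actual code):

package main

import (
	"fmt"
	"sort"
)

type Item struct {
	Path string
	Size int64
}

type Disk struct {
	Path string
	Free int64
	Bin  []Item // items this disk will receive
}

// bestFit visits target disks from most to least free space and packs
// whatever still fits onto each one; anything left over is not moved.
func bestFit(items []Item, targets []*Disk) (notMoved []Item) {
	sort.Slice(targets, func(i, j int) bool { return targets[i].Free > targets[j].Free })
	pending := items
	for _, d := range targets {
		var rest []Item
		for _, it := range pending {
			if it.Size <= d.Free {
				d.Bin = append(d.Bin, it)
				d.Free -= it.Size
			} else {
				rest = append(rest, it)
			}
		}
		pending = rest
	}
	return pending
}

func main() {
	items := []Item{{"films/A", 40}, {"films/B", 70}, {"films/C", 25}}
	targets := []*Disk{
		{Path: "/mnt/disk2", Free: 60},
		{Path: "/mnt/disk3", Free: 30},
	}
	for _, it := range bestFit(items, targets) {
		fmt.Println("will not move:", it.Path)
	}
}

Running this prints "will not move: films/B": the 70-unit folder fits nowhere, which is exactly the report being discussed below.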

 

But you have a point. It would be good to know which folders it will not move.

 

It's not difficult to add; I'll get it into the next update.


Could I suggest the ability to browse for eligible-to-move folders, in the same fashion that you can browse paths when creating a new docker and choosing mount paths?

 

Hi glave, I have to think about that a bit.

 

Like a dropdown on the settings page, so that you choose folders rather than having to type them in, right?

 

Let me figure it out; I'll get to it during the week.


I'm in the process of converting to XFS from RFS.

 

Unbalance (through docker) is working perfectly... mostly :)

 

Here's what I did:

- Unbalance data off the drive

- Stop Array

- Change Filesystem to XFS on empty drive

- Restart Array

- Format drive

- Open Unbalance to do next drive

 

The problem is that when I opened unBALANCE for the second time, it showed the empty drive as 0 free and 0 used. From telnet everything looks fine, as does the Main page in unRAID.

 

I was able to rectify the issue by shutting down the array and restarting it again.

 

Obviously not a big issue even if it's reproducible. I have more drives to do, so I'll be more careful with my steps and document it better.

 

Thanks for the great product.


...

 

Hi Frayedknot, thanks for the kind comments!

 

I'm not exactly sure why this would happen.

 

After the folder move is completed, it re-reads the free space of each disk, so the recently emptied drive should show up as such.
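As an aside, that free-space read boils down to a statfs call. A minimal Go sketch (unBALANCE may just as well shell out to df, so treat this as illustrative, not the app's actual code):

package main

import (
	"fmt"
	"syscall"
)

// diskUsage reports free and total bytes for the filesystem at path,
// roughly the Available and 1B-blocks columns of `df --block-size=1`.
func diskUsage(path string) (free, total uint64, err error) {
	var stat syscall.Statfs_t
	if err = syscall.Statfs(path, &stat); err != nil {
		return 0, 0, err
	}
	bsize := uint64(stat.Bsize)
	return stat.Bavail * bsize, stat.Blocks * bsize, nil
}

func main() {
	free, total, err := diskUsage("/mnt/disk4")
	if err != nil {
		panic(err)
	}
	fmt.Printf("free=%d total=%d\n", free, total)
}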

 

On the other hand, I remember now that the Linux df command can take a bit to show the very latest info (depending on which processes are holding on to file references).

This is normal behaviour and happens to me on another server I have.

 

So, a possible scenario is:

- unBALANCE moved data off the drive

- Immediately after, the information is refreshed, but it still shows the disk as used

- Stop array, etc.

 

Did you leave the unBALANCE docker running? Or did you restart it? Or just refresh the browser page?


...

 

Well... it happened again, and my steps were accurate. I didn't have the docker running before, and just in case, I stopped the unBALANCE docker and restarted it.

unBALANCE seems to think it's 0 bytes (on a newly formatted drive, I assume).

Here are the screenshots of my unRAID Main page, unBALANCE before (after the format, with the issue happening), and after the array was restarted, when it shows properly. (Wow, a 192k limit, eh? It was hard to add 3 screenshots; hopefully a zip file is ok.)

Oh, and apparently I'm running the current version of unBALANCE and docker (unBALANCE v0.7.3-157.f2ebeef).

Unbalance_Screenshots.zip


...

 

Thanks Frayedknot, yes, it really seems to be something related to the freshly formatted disk.

 

If you still have one more of these cycles left, could you please run

 

df --block-size=1 /mnt/disk*

 

from the command line, right around the time unBALANCE is showing 0/0?

 

I'll try to replicate it in my test setup.


...

 

No change. By the way, I also just upgraded to version 6.0 before this format, in case that might have some impact on this. (So the current release version also has the same issue.)

 

BEFORE (after data moved off)

root@Tower:~# df --block-size=1 /mnt/disk4

Filesystem        1B-blocks    Used    Available Use% Mounted on

/dev/md4      2000337846272 33628160 2000304218112  1% /mnt/disk4

 

AFTER FORMAT TO XFS

root@Tower:/mnt# df --block-size=1 /mnt/disk4

Filesystem        1B-blocks    Used    Available Use% Mounted on

/dev/md4      1999422144512 33751040 1999388393472  1% /mnt/disk4

 

I should also mention that my unRAID web GUI stopped working after the last unBALANCE run, but I'm sure that wasn't unBALANCE's fault. (Although I was monitoring unBALANCE from two separate computers, and it stopped working right at the end of the process: I got the "finished" message and then the unRAID web menu stopped responding.) I don't expect anything from this, but I figured I'd let you know in case I'm not the only one that experienced it.

 

I'm not going to restart my array right now in case you want to try something else, and I'll also try restarting the unBALANCE docker in an hour or three to see if anything changes that way too.


...

 

I was able to replicate it. I'm getting 0/0 in free/used.

 

The quick fix that solves it for me is to restart the docker container, from the unRAID Docker GUI.
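From the command line, the equivalent would be the following (assuming your container is actually named unbalance):

docker restart unbalance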

 

The bottom line is that formatting the drive makes it "disappear" from within the docker container.

 

It shows up but with a different filesystem (rootfs) and mounted on the root (/mnt), so unBALANCE doesn't see it.

 

I'll check how to make sure that the /mnt volume is refreshed before collecting data.
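One possible guard, sketched in Go under the assumption that the container can inspect its own /proc/mounts: if a disk path resolves to rootfs (or to no mount entry at all) rather than a real filesystem, its stats shouldn't be trusted. This is illustrative, not unBALANCE's actual code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// mountedFilesystem returns the filesystem type /proc/mounts reports
// for path, or "" if no entry matches. The last matching entry wins,
// mirroring the kernel's view of stacked mounts.
func mountedFilesystem(path string) (string, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return "", err
	}
	defer f.Close()

	fstype := ""
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 3 && fields[1] == path {
			fstype = fields[2]
		}
	}
	return fstype, scanner.Err()
}

func main() {
	fs, err := mountedFilesystem("/mnt/disk4")
	if err != nil {
		panic(err)
	}
	if fs == "" || fs == "rootfs" {
		fmt.Println("disk not (yet) visible inside the container; restart the container")
	} else {
		fmt.Println("filesystem:", fs)
	}
}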

 

Let me know if restarting the container itself works for you.

 


I'm also hoping you have autostart enabled for the unBALANCE docker.

 

This would completely validate what's happening.

 

- Stop the array

- Change the filesystem on the drive

- Start the array

It starts the unBALANCE container automatically, but the disk is still not formatted.

The container reads it as owned by root, as mentioned before (filesystem: rootfs, mounted on: /mnt).

- Format the drive

- Open unBALANCE

At this point, unBALANCE is doomed :)

 

But if you restart the container here, it will read the right data.

 

Restarting the array also works because it stops/starts the containers, and this time the drive is formatted so it can be read properly.

 

I'm looking at some alternatives, but none is very promising, to be honest.

 

In any case, let me know if restarting the container does it for you.


...

 

Ya, that works for me.

 

I feel bad for finding an issue with this awesome product. It isn't a big deal, and the workaround is reasonable.

 

I'm glad you replicated the issue, cause I don't have any more drives to do now.

 

Thanks again for making unbalance.

 

With the use of unBALANCE I was able to convert all drives from RFS to XFS without rebooting, without putting the integrity of the data at risk, and without any downtime.

 

It took a long time, and I probably stressed my parity drive during the process, but it worked beautifully.

 


...

 

Thank you Frayedknot, I'm glad it was useful!

 

I'll keep looking into alternatives for this issue; if I can't find anything, I'll add a note.


I seem to be doing something wrong, but I don't know exactly what:

 

I: 2015/06/19 11:02:14 core.go:220: =========================================================
I: 2015/06/19 11:02:14 core.go:221: Results for /mnt/disk7
I: 2015/06/19 11:02:14 core.go:222: Original Free Space: 45.8G
I: 2015/06/19 11:02:14 core.go:223: Final Free Space: 45.8G
I: 2015/06/19 11:02:14 core.go:224: Gained Space: 0
I: 2015/06/19 11:02:14 core.go:225: Bytes To Move: 0
I: 2015/06/19 11:02:14 core.go:226: ---------------------------------------------------------
I: 2015/06/19 11:02:14 unraid.go:224: Unraid Box Condition: &{NumDisks:19 NumProtected:19 Synced:2015-05-24 03:58:45 +0100 BST SyncErrs:0 Resync:0 ResyncPrcnt:0 ResyncPos:0 State:STARTED Size:42002025897984 Free:3412631412736 NewFree:3412631412736}
I: 2015/06/19 11:02:14 unraid.go:225: Unraid Box SourceDiskName: /mnt/disk7
I: 2015/06/19 11:02:14 unraid.go:226: Unraid Box BytesToMove: 0
I: 2015/06/19 11:02:14 unraid.go:237: Id(1); Name(md1); Path(/mnt/disk1); Device(sdk), Free(91.4G); NewFree(91.4G); Size(1.8T); Serial(SAMSUNG_HD204UI_S2HGJ1BZ809431); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(2); Name(md2); Path(/mnt/disk2); Device(sdh), Free(174.1G); NewFree(174.1G); Size(1.8T); Serial(SAMSUNG_HD204UI_S2H7J9BB314796); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(3); Name(md3); Path(/mnt/disk3); Device(sdm), Free(108.6G); NewFree(108.6G); Size(1.8T); Serial(SAMSUNG_HD204UI_S2HGJ1AZ801549); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(4); Name(md4); Path(/mnt/disk4); Device(sde), Free(168.7G); NewFree(168.7G); Size(1.8T); Serial(SAMSUNG_HD204UI_S2HGJ1BZ809429); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(5); Name(md5); Path(/mnt/disk5); Device(sdp), Free(173.2G); NewFree(173.2G); Size(1.8T); Serial(SAMSUNG_HD204UI_S2H7J9BB314797); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(6); Name(md6); Path(/mnt/disk6); Device(sdg), Free(173.6G); NewFree(173.6G); Size(1.8T); Serial(SAMSUNG_HD204UI_S2HGJ1BZ809518); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(7); Name(md7); Path(/mnt/disk7); Device(sdn), Free(45.8G); NewFree(45.8G); Size(1.8T); Serial(WDC_WD20EARS-00MVWB0_WD-WCAZA3122490); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(8); Name(md8); Path(/mnt/disk8); Device(sds), Free(275.4G); NewFree(275.4G); Size(1.8T); Serial(WDC_WD20EARS-00J99B0_WD-WCAWZ0732007); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(9); Name(md9); Path(/mnt/disk9); Device(sdu), Free(114.4G); NewFree(114.4G); Size(1.8T); Serial(WDC_WD20EARX-00PASB0_WD-WMAZA6830041); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(10); Name(md10); Path(/mnt/disk10); Device(sdq), Free(87.1G); NewFree(87.1G); Size(1.8T); Serial(Hitachi_HDS5C3020ALA632_ML0230FA0LKK5D); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(11); Name(md11); Path(/mnt/disk11); Device(sdo), Free(144.3G); NewFree(144.3G); Size(1.8T); Serial(SAMSUNG_HD204UI_S2HGJ1AZ801624); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(12); Name(md12); Path(/mnt/disk12); Device(sdf), Free(190G); NewFree(190G); Size(2.7T); Serial(WDC_WD30EFRX-68EUZN0_WD-WCC4N0964192); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(13); Name(md13); Path(/mnt/disk13); Device(sdc), Free(250G); NewFree(250G); Size(2.7T); Serial(WDC_WD30EFRX-68AX9N0_WD-WMC1T0907311); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(14); Name(md14); Path(/mnt/disk14); Device(sdb), Free(347.9G); NewFree(347.9G); Size(2.7T); Serial(WDC_WD30EFRX-68EUZN0_WD-WMC4N1345473); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(15); Name(md15); Path(/mnt/disk15); Device(sdj), Free(261.9G); NewFree(261.9G); Size(2.7T); Serial(WDC_WD30EFRX-68AX9N0_WD-WCC1T0826380); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(16); Name(md16); Path(/mnt/disk16); Device(sdi), Free(174.8G); NewFree(174.8G); Size(2.7T); Serial(WDC_WD30EFRX-68AX9N0_WD-WMC1T1079164); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(17); Name(md17); Path(/mnt/disk17); Device(sdr), Free(183.3G); NewFree(183.3G); Size(1.8T); Serial(Hitachi_HDS5C3020ALA632_ML0220F3112GAD); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 unraid.go:237: Id(18); Name(md18); Path(/mnt/disk18); Device(sdl), Free(213.6G); NewFree(213.6G); Size(2.7T); Serial(WDC_WD30EZRX-00MMMB0_WD-WCAWZ1895639); Status(DISK_OK); Bin()
I: 2015/06/19 11:02:14 core.go:232: calculateBestFit:End:srcDisk(/mnt/disk7)

 

Folders Selected

# Folder

/mnt/user/movies

/mnt/user/series

 

and from SSH I see this on the drive:

 

root@Tower:/mnt/disk7# ls -ltr

total 1

drwxrwxrwx 11 nobody users 288 Jul 18  2011 movies/

drwxrwxrwx 19 nobody users 624 May  1 23:13 series/

 

but it says there is nothing to do.

 


...

Folders Selected

# Folder

/mnt/user/movies

/mnt/user/series

 

...

 

Do you mean you entered movies and series as folders, or actually /mnt/user/movies and /mnt/user/series?

 

The former (just movies and series) would be the way to enter them (I'm working on enabling a dropdown as suggested by glave, so this would become a non-issue).

 

If that's not the case, do you see in the logs any lines similar to this:

 

calculateBestFit:total(xxx):toBeMoved:Path(yyy); Size(zzz)

