[Plug-In] unBALANCE


If you stop unBalance midway through its process, you end up with duplicates of the files it's already moved, correct?

I queued up 4TB of data to move and that was a mistake. It's been running now for 24hrs and I want to stop it, but it's already copied nearly 2TB of data. If I kill it now, that data is going to be duplicated on 2 drives, right?

 

2 minutes ago, jbrodriguez said:

yes

well then, I guess we'll let this one ride out. 24hrs to go! :)

 


Why does unBalance, when using "scatter", fill up disks to 99.9%?

 

This seems pretty unwise because of lower speeds and annoying warnings from unRAID... any way to prevent that?

Posted (edited)
On 8/3/2019 at 11:03 AM, nuhll said:

Why does unBalance, when using "scatter", fill up disks to 99.9%?

 

This seems pretty unwise because of lower speeds and annoying warnings from unRAID... any way to prevent that?

I have not had this issue yet. In the options I set 50GB as the minimum space available. I'm in the process of moving content off four 2TB drives so I can remove them from the array in my unRAID2 setup. The lowest amount of free space I've seen the 3TB and 2TB drives reach in the array is 53GB.

 

I'll see what happens later. I'm halfway through my third drive. When I finish I should hopefully have 19 array drives with around 50GB of free space for each drive.

 

My unRAID2 is my oldest unRAID setup. I set it up back in 2011, using mostly 2TB drives and later a few 3TB drives. But the 2TB drives are ten years old now, and I have not added content to it in many years. It's filled with BD ISOs, most of which were created when I used a Windows Home Server from 2009 through 2011. Then I switched to unRAID and transferred my BD ISOs to it. I recently dumped the power-hogging 2009 home-built PC I used and installed an old HP N40L MicroServer in its place. But I'm using four 5-bay external enclosures for the array drives in addition to the drives in the HP MicroServer. So I want to get the external enclosures down to 4 drives each to speed up parity checks.

 

53TB unRAID1a--53TB unRAID2--76TB unRAID3

Edited by aaronwt

Posted (edited)

LOL, thanks. I never touched that setting; it was set to 512MB. MANY THANKS.


I've set it to 10% now... xD

 

That's a pretty low default value nowadays.

Edited by nuhll

5 hours ago, nuhll said:

LOL, thanks. I never touched that setting; it was set to 512MB. MANY THANKS.


I've set it to 10% now... xD

 

That's a pretty low default value nowadays.

LOL yeah. I never saw that option either! Updated, and it seems to work perfectly!

Should have guessed it was already there :P
 


Now, if unBalance could add a "find duplicate files" option that scours each physical HDD for duplicate files, that would be the icing on the cake.

 


Thanks for your comments, jpotrz!

 

There's a Docker container (diskover) that, among other things, finds duplicates.

 

 

20 minutes ago, jbrodriguez said:

Thanks for your comments, jpotrz!

There's a Docker container (diskover) that, among other things, finds duplicates.

 

Ohhhh I hadn't heard of that one. 

I installed it, but it seems I can't access the web GUI. Guess I gotta figure it out.

Thanks!

 

Posted (edited)

Yeah, I also have dozens of duplicates created by unBalance (it doesn't matter whose fault it is :P). I tried dupeGuru, but it's just... bad to use, let's say it that way.

 

I'll try diskover though.

 

edit:

Just pulled it and it doesn't work.

 

edit2: Yeah, way too complicated for what I want to do, lol. You need to install Elasticsearch and Redis (whatever that is).

Edited by nuhll

2 hours ago, nuhll said:

Yeah, I also have dozens of duplicates created by unBalance (it doesn't matter whose fault it is :P). [...]

edit2: Yeah, way too complicated for what I want to do, lol. You need to install Elasticsearch and Redis (whatever that is).

Yeah it got too complicated for me too. Seems like there should be an easier solution. I found a script to run, but I'm too stupid to even do that.

 

Posted (edited)

Yeah, the solution seems pretty easy: just remove every file with the same filename/size except one under /mnt/disk*/.

But I have no coding experience.

 

 

What script did you find? Maybe I can edit it.
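For what it's worth, the "same filename/size except one" idea can be sketched in plain shell. This is only a report-only sketch (the helper name is mine, not part of any plugin); it assumes GNU find (which unRAID ships) and deletes nothing, so you can review the output before touching any files:

```shell
# Sketch only: list files that share a name and size across the given
# directories (e.g. per-disk mounts). Reports only; deletes nothing.
# Assumes GNU find for -printf.
find_name_size_dupes() {
    find "$@" -type f -printf '%f\t%s\t%p\n' 2>/dev/null \
        | sort \
        | awk -F'\t' '{ key = $1 FS $2 }
            key == prev { if (!shown) print prevline; print; shown = 1; next }
            { prev = key; prevline = $0; shown = 0 }'
}
# Example: find_name_size_dupes /mnt/disk1 /mnt/disk2 /mnt/disk3
```

Each output line is `name<TAB>size<TAB>full path`, printed only when at least two files share the same name and size.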

Edited by nuhll

Posted (edited)

I found something easy and cool

fdupes!

https://www.tecmint.com/fdupes-find-and-delete-duplicate-files-in-linux/

 

You can install it via the NerdPack plugin. Currently trying it on /mnt/disk*/; let's see what it turns up:

root@Unraid-Server:~# fdupes -r /mnt/disk*/
Progress [667/509752] 0% 

 

It seems to hang; I'm now trying with each directory specified:

root@Unraid-Server:~# fdupes -r /mnt/disk1/ /mnt/disk2/ and so on.

 

edit3: It seems my system is just very slow (only around 40 MB/s), but I'm currently doing a parity rebuild, so... There doesn't seem to be a difference between using disk* and listing each disk.

 

If that works, it would be the best option in the world: fast, easy, and not too complicated.

Edited by nuhll

1 hour ago, nuhll said:

I found something easy and cool: fdupes! [...]

Interesting. Installed and running as well

 

Posted (edited)
1 hour ago, nuhll said:

I found something easy and cool: fdupes! [...]

I'm not sure if this is what we're looking for... it's not going to find duplicate physical files across different physical HDDs. This is going to find duplicate files within the logical file structure. (I think)

 

Edited by jpotrz

Posted (edited)
23 hours ago, jpotrz said:

I'm not sure if this is what we're looking for... it's not going to find duplicate physical files across different physical HDDs. This is going to find duplicate files within the logical file structure. (I think)

 

It does compare files.

It compares whatever you give it.

If you point it at /mnt/disk1 /mnt/disk2 and so on, it compares across the disks; if you run it on /mnt/user/ it won't find duplicates that sit on different drives (because at the user-share level they don't exist).

 

-r recurses through all subfolders.

 

It worked, and didn't take that long actually. I'm not sure how delete works, but it does what it should. (I've read that it asks you for each set which file you want to keep, but there's also an automatic mode that keeps the most recently changed file and deletes the rest of the dupes.) If I find time to test it, I'll post here,

 

because I don't want to run a program like that on my whole array in delete mode without testing first :)
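For reference, the delete modes as I understand them from the fdupes man page, shown here as comments only since you really should try them on a small test directory before pointing them at an array:

```shell
# fdupes delete modes (as described in its man page; test on a copy first):
#
#   fdupes -r -d /mnt/disk1/ /mnt/disk2/
#       interactive: lists each duplicate set and asks which copy to keep
#
#   fdupes -r -d -N /mnt/disk1/ /mnt/disk2/
#       non-interactive: -N answers "keep the first file" for every set
#       and deletes the rest without asking
#
# Always dry-run with plain "fdupes -r ..." first and review the sets.
```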

Edited by nuhll

7 minutes ago, nuhll said:

It does compare files. It compares whatever you give it.

If you point it at /mnt/disk1 /mnt/disk2 and so on, it compares across the disks; if you run it on /mnt/user/ it won't find duplicates that sit on different drives (because at the user-share level they don't exist). [...]

Hmmmm if I use "/mnt/disk1 /mnt/disk2" it will scan/compare those disks? I can enter multiple paths like that?

 


Yes, the wildcard also works: /mnt/disk*/ gives the same file count as listing /mnt/disk1/ /mnt/disk2/ and so on...

27 minutes ago, nuhll said:

Yes, the wildcard also works: /mnt/disk*/ gives the same file count as listing /mnt/disk1/ /mnt/disk2/ and so on...

Running now. At 46% of my 10 disks, 33TB.

 

On 8/8/2019 at 9:03 AM, nuhll said:

Yes, the wildcard also works: /mnt/disk*/ gives the same file count as listing /mnt/disk1/ /mnt/disk2/ and so on...

Seemed to work for me. It was able to identify some files present on 2 disks; I'm using unBalance and the "Gather" function to clean them up.

It also found some files where I had duplicates under different names. I just deleted those directly.

Is there a switch to only search for certain document types/extensions?
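As far as I know fdupes itself has no extension filter, but you can get a similar result by hashing only the matching files yourself. This helper is my own sketch (not an fdupes feature) and assumes md5sum and GNU uniq:

```shell
# Hypothetical helper: list content-identical files matching a name pattern.
# md5sum prints a 32-char hash first on each line; uniq -w32 compares only
# that hash, and -D prints every member of each duplicate group.
dupes_by_ext() {    # usage: dupes_by_ext '*.iso' DIR...
    pattern=$1; shift
    find "$@" -type f -name "$pattern" -exec md5sum {} + | sort | uniq -w32 -D
}
# Example: dupes_by_ext '*.iso' /mnt/disk1 /mnt/disk2
```

Note this compares file contents, so two ISOs with different names but identical bytes would also show up.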

 

Posted (edited)

Hey, I have an idea for a new function for your plugin!

 

What about a "balance" mode?

- You select all the drives you want to balance, and unBalance tries to fill them up to the same percentage?

 

Use case:

If I have a failing drive, I always use Gather to transfer all the data from that drive to the other drives, but then some drives end up with much more data than the others (which is relatively risky when you do a parity rebuild).

Edited by nuhll


I have a little script that moves directories to the right disks every night. Despite years of trying, I can't get what I want automated: a high-level \videos\ share with multiple directories under it, split by type.

 

This plugin looks perfect for what I want, but I'm wondering if there is a scheduler/cron function. Ideally I want something that will look for a directory on all disks and then gather it onto one disk that I specify for that particular directory. Is that possible with unBalance?


It sounds as if the User Scripts plugin might be more appropriate to your needs? It has built-in scheduler/cron capability for any script it runs.
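A nightly gather along those lines could be a small script run from User Scripts on a cron schedule. This is only a sketch: the function name and every path in the example are my own assumptions, and you should dry-run the rsync (add -n) before trusting it with real data:

```shell
# Sketch of a nightly "gather" for the User Scripts plugin: move one
# relative directory from several source disks onto one destination disk.
gather_dir() {    # usage: gather_dir RELATIVE_DIR DEST_DISK SRC_DISK...
    rel=$1; dest=$2; shift 2
    for disk in "$@"; do
        [ "$disk" = "$dest" ] && continue
        [ -d "$disk/$rel" ] || continue
        mkdir -p "$dest/$rel"
        # copy, then remove the successfully transferred source files
        rsync -a --remove-source-files "$disk/$rel/" "$dest/$rel/"
        # rsync leaves the now-empty directory tree behind; clean it up
        find "$disk/$rel" -type d -empty -delete
    done
}
# Example (hypothetical paths):
# gather_dir videos/tvshows /mnt/disk3 /mnt/disk1 /mnt/disk2
```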

On 8/14/2019 at 6:42 AM, itimpi said:

It sounds as if the User Scripts plugin might be more appropriate to your needs? It has built-in scheduler/cron capability for any script it runs.

What about my suggestion? :)

