BobPhoenix Posted August 19, 2015

Quote: Is this worth the effort? What are the benefits? Also, when I add a new drive to the array, will unRAID let me choose which file system to format the drive with, even a different format than the other array drives? My 5+ year old unRAID server has been plugging along with ReiserFS array drives. I have not added a new drive to the array since the beginning; I have been up-sizing my array drives (and rebuilding them) to increase capacity. I am on the latest 6.1-rc release and my cache is btrfs. All dockers, only basic plugins like swapfile, Community Apps, NerdTools, shutdown, pre-clear, etc. For me to go to XFS, I would have to transfer close to 20TB of data. Thanks.

The reason I am switching to XFS in unRAID v6 is that during the beta of v5 (I think) a bug got into the kernel/driver that caused data loss on the ReiserFS file system. It was caught and corrected quickly (by a LimeTech patch, I believe), but ReiserFS is on its way out, and I'm not sure all Linux distros still support it either. That was enough for me to want to change. I migrate one disk at a time, mostly, so I'm just using my cold spare to do this switch. It takes a long time, but once done I should be good. I've got considerably more than 20TB to convert. Edit: forgot to quote it.
dertbv Posted August 26, 2015

Just upgraded and now I get "the array is not operational. please start the array first." I did not add a new volume ("/usr/local/sbin"). What would it be mapped to?
jbrodriguez (Author) Posted August 26, 2015

Quote: Just upgraded and now I get "the array is not operational. please start the array first." I did not add a new volume ("/usr/local/sbin"). What would it be mapped to?

Hi dertbv, it should be mapped to "/usr/local/sbin".
dertbv Posted August 26, 2015

Quote: Hi dertbv, it should be mapped to "/usr/local/sbin".

I got it, thanks.
jasonhutch Posted September 4, 2015

I'm trying to move everything from disk4 to disk3 and it keeps locking up. When I restart the docker, it runs for a couple of minutes, but then freezes again. Any idea?
jbrodriguez (Author) Posted September 4, 2015

Hello jasonhutch, I've seen that happening when the source disk is failing. unBALANCE runs rsync underneath, so if it were me, I'd run a SMART test on the source disk, then try an rsync/mc operation manually, just to rule out external factors. Please let me know how it looks from your side.
MrLondon Posted September 5, 2015

I seem to be stupid, but what do you mean by needing to mount a new volume to "/usr/local/sbin"? Where in unRAID are you meant to do this? Or where in unBALANCE?
jbrodriguez (Author) Posted September 5, 2015

Quote: I seem to be stupid, but what do you mean by needing to mount a new volume to "/usr/local/sbin"? Where in unRAID are you meant to do this? Or where in unBALANCE?

Hi MrLondon, it has to be done in the Docker settings. There are a couple of folders you need to specify (Volume Mappings); one of these mappings should be "/usr/local/sbin" (container) to "/usr/local/sbin" (host). Similar to the picture in this post: http://lime-technology.com/forum/index.php?topic=39707.msg376803#msg376803
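For reference, the same volume mappings expressed as a docker run command might look something like the sketch below; the container name, port, image name, and appdata path are assumptions, so check the template in the linked picture for the actual values:

```shell
# Hypothetical command-line equivalent of the WebUI volume mappings above.
# Name, port, image, and appdata path are examples only; adjust to your setup.
docker run -d --name unBALANCE \
  -v /mnt:/mnt \
  -v /usr/local/sbin:/usr/local/sbin \
  -v /mnt/cache/appdata/unbalance:/config \
  -p 6237:6237 \
  jbrodriguez/unbalance
```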
dnoyeb Posted September 18, 2015

Any thoughts on why unBALANCE says the two drives I added most recently have a file system size of 0 and free space of 0, when unRAID sees them as 3TB with around 2.8TB of free space? I'm trying to move some files over to them with this docker, but it won't, since it shows those drives with no free space. Thanks in advance.
jbrodriguez (Author) Posted September 18, 2015

Quote: Any thoughts on why unBALANCE says the two drives I added most recently have a file system size of 0 and free space of 0, when unRAID sees them as 3TB with around 2.8TB of free space?

Hi dnoyeb, is it possible for you to restart the container? This has happened in the past when a new drive is formatted while unBALANCE is running. I haven't found a fix for this, as it's related to how mounting drives inside a docker container works. If this doesn't solve your issue, please give me more details (what the log says, unRAID and unBALANCE versions, etc.).
dnoyeb Posted September 18, 2015

I've restarted the container and even completely removed it. Latest version of unBALANCE and version 6.1.2 of unRAID. This shows up in the log:

I: 2015/09/18 10:33:07 unraid.go:224: Unraid Box Condition: &{NumDisks:7 NumProtected:7 Synced:2015-09-13 11:46:18 -0500 CDT SyncErrs:0 Resync:0 ResyncPrcnt:0 ResyncPos:0 State:STARTED Size:9001514868736 Free:981883203584 NewFree:981883203584}
I: 2015/09/18 10:33:07 unraid.go:225: Unraid Box SourceDiskName:
I: 2015/09/18 10:33:07 unraid.go:226: Unraid Box BytesToMove: 0
I: 2015/09/18 10:33:07 unraid.go:237: Id(1); Name(md1); Path(/mnt/disk1); Device(sde), Free(70.2G); NewFree(70.2G); Size(2.3T); Serial(WDC_WD25EZRX-00AZ6B0_WD-WCC070048629); Status(DISK_OK); Bin(<nil>)
I: 2015/09/18 10:33:07 unraid.go:237: Id(2); Name(md2); Path(/mnt/disk2); Device(sdd), Free(116.8G); NewFree(116.8G); Size(1.8T); Serial(WDC_WD20EARS-00MVWB0_WD-WMAZA0897315); Status(DISK_OK); Bin(<nil>)
I: 2015/09/18 10:33:07 unraid.go:237: Id(3); Name(md3); Path(/mnt/disk3); Device(sdc), Free(291G); NewFree(291G); Size(1.8T); Serial(WDC_WD20EARX-00PASB0_WD-WMAZA5099252); Status(DISK_OK); Bin(<nil>)
I: 2015/09/18 10:33:07 unraid.go:237: Id(4); Name(md4); Path(/mnt/disk4); Device(sdb), Free(436.5G); NewFree(436.5G); Size(2.3T); Serial(WDC_WD25EZRX-00AZ6B0_WD-WCC070063815); Status(DISK_OK); Bin(<nil>)
I: 2015/09/18 10:33:07 unraid.go:237: Id(5); Name(md5); Path(/mnt/disk5); Device(sdf), Free(0); NewFree(0); Size(0); Serial(WDC_WD30EFRX-68AX9N0_WD-WCC1T0371541); Status(DISK_OK); Bin(<nil>)
I: 2015/09/18 10:33:07 unraid.go:237: Id(6); Name(md6); Path(/mnt/disk6); Device(sdh), Free(0); NewFree(0); Size(0); Serial(WDC_WD30EFRX-68AX9N0_WD-WCC1T0376504); Status(DISK_OK); Bin(<nil>)
jbrodriguez (Author) Posted September 18, 2015

Quote: I've restarted the container and even completely removed it. Latest version of unBALANCE and version 6.1.2 of unRAID. This shows up in the log: [log snipped]

Can you go to the command line and show me the output of:

$ df --block-size=1 /mnt/disk*
dnoyeb Posted September 18, 2015

Sure thing, here you go (thanks in advance):

root@unRAID:/mnt/cache/appdata/unbalance# df --block-size=1 /mnt/disk*
Filesystem    1B-blocks          Used          Available     Use% Mounted on
/dev/md1      2500419588096 2425063636992   75355951104  97% /mnt/disk1
/dev/md2      2000337846272 1874974928896  125362917376  94% /mnt/disk2
/dev/md3      2000337846272 1687861276672  312476569600  85% /mnt/disk3
/dev/md4      2500419588096 2031731822592  468687765504  82% /mnt/disk4
/dev/md5      2999127797760   18181910528 2980945887232   1% /mnt/disk5
/dev/md6      2999127797760      33751040 2999094046720   1% /mnt/disk6
-               16776654848     369299456   16407355392   3% /
root@unRAID:/mnt/cache/appdata/unbalance#
dertbv Posted September 18, 2015

Quote: Can you go to the command line and show me the output of: $ df --block-size=1 /mnt/disk* [earlier log snipped]

I added a few drives over the last couple of weeks. Whenever I added them, I had to restart the unRAID server before unBALANCE would see them as having disk space available.
jbrodriguez (Author) Posted September 18, 2015

Thanks for the heads-up, dertbv! I need to research why a volume inside a container doesn't reflect physical changes on the host, such as adding or formatting drives.

dnoyeb, if you haven't restarted yet, can you repeat the command, but from inside the container? That is:

$ docker exec -it unbalance bash

then

$ df --block-size=1 /mnt/disk*

(you can just type exit when you're done)
dnoyeb Posted September 18, 2015

Am I doing something wrong here?

root@unRAID:/# docker exec -it unbalance bash
Error response from daemon: no such id: unbalance
jbrodriguez (Author) Posted September 18, 2015

No problem! You just need to SSH to the unRAID server, then execute

$ docker exec -it unbalance bash

This will drop you inside the container, and you will see your prompt change to something like root@23423dafceaeb, or similar. Then you can repeat the df command.
dnoyeb Posted September 18, 2015

Heh, so I guess it's case sensitive; I had to capitalize the BALANCE part of the name and it worked.

root@unRAID:/# docker exec -it unBALANCE bash
root@7329b09743c1:/# df --block-size=1 /mnt/disk*
Filesystem    1B-blocks          Used          Available     Use% Mounted on
/dev/md1      2500419588096 2425063636992   75355951104  97% /mnt/disk1
/dev/md2      2000337846272 1874974928896  125362917376  94% /mnt/disk2
/dev/md3      2000337846272 1687861276672  312476569600  85% /mnt/disk3
/dev/md4      2500419588096 2031731822592  468687765504  82% /mnt/disk4
rootfs          16776654848     370003968   16406650880   3% /mnt
rootfs          16776654848     370003968   16406650880   3% /mnt
rootfs          16776654848     370003968   16406650880   3% /mnt
root@7329b09743c1:/#
dnoyeb Posted September 18, 2015

OK, so I stopped all the dockers and then restarted the docker service:

/etc/rc.d/rc.docker restart

Now the sizes are showing up correctly, so at least I didn't have to restart the whole server/array. Thanks for your help.
jbrodriguez (Author) Posted September 18, 2015

Quote: OK, so I stopped all the dockers and then restarted the docker service, and now the sizes are showing up correctly.

That's good to know! I'll see if I can find out anything else about this. Nice solution though, just restarting the docker service; I'll add a note on the front page.
dnoyeb Posted September 18, 2015

One thing that would be really slick: support for wildcards. For example, pull all files from /Movies/T*, so it would grab all folders that begin with T. I'm just trying to level out my drives a little, and I find that the Movies folder has 2TB of files in it; I don't really want to move all of them off, just about 1TB.
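In the meantime, since unBALANCE runs rsync underneath (as mentioned earlier in the thread), a manual wildcard move can be sketched by hand. The helper name and paths below are made up for illustration, and the exact flags unBALANCE uses may differ:

```shell
# move_subset: hypothetical helper that mimics a per-folder unBALANCE move
# (rsync underneath), restricted to a glob pattern. Paths are examples only.
move_subset() {
  local src="$1" dst="$2" pattern="$3"
  # The /./ marks where rsync -R starts rebuilding the path on the target,
  # so Movies/Txxx on the source lands as Movies/Txxx on the destination.
  rsync -aR --remove-source-files "$src"/./Movies/$pattern "$dst"/
}

# Example: move every Movies folder starting with "T" from disk4 to disk3
# (check free space on the target first; empty source dirs are left behind):
# move_subset /mnt/disk4 /mnt/disk3 'T*'
```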
jbrodriguez (Author) Posted September 18, 2015

I'll take a look at it. It doesn't seem difficult, but it's not trivial either.
dnoyeb Posted September 18, 2015

Right on. I just see something that lets people get a little more granular being extremely helpful for forcing a balancing job on the drives when adding new ones. For example: select a subfolder, then display all its folders with the ability to checkbox them. That way one could select 60 movies out of 120 and move them to a new drive to balance things out. Anyway, just an idea; I love the app and am using it right now to level stuff out as best I can.
jbrodriguez (Author) Posted September 18, 2015

Right, it was first suggested by glave in http://lime-technology.com/forum/index.php?topic=39707.msg380924#msg380924. I have to be honest, I've thought about some of the implications and it does seem a bit complex, mostly due to the UI software components you'd need to make something like this work. Anyway, unBALANCE is due for a rewrite on the UI (client) side, now that I've gone full React rather than Angular. An additional rewrite is to phase out my own notification system in favor of unRAID's own: starting with 6.1 we have a better API to access unRAID's email notifications, and I intend to profit from that. I'll try to find some components that help me fulfill the UI requests. Don't hold your breath; it might take a while.
scottc Posted September 19, 2015

Currently unBALANCE does not work with version 6.1.2: it does not recognize that the array is running.