Everything posted by JustinAiken

  1. Okay, was successfully able to migrate from the other container without adoption!
     ```
     cd /mnt/cache/appdata
     cp -r CrashPlan/ crashplan-pro
     chown -R nobody:users /mnt/user/appdata/crashplan-pro/
     cd crashplan-pro/
     mv id/ var/
     rm -r cache
     rm -r bin
     rm -r log
     ```
     All docker template values are default, except for:
     - config -> /mnt/user/appdata/crashplan-pro
     - data -> /mnt/user (to match the old container)
  2. https://gist.github.com/JustinAiken/586dc6f5f8844420efdab6b6805b0810 That's the old one.. when I cloned it over, I renamed `id` to `var`. I also tried clearing out the cache/log dirs. Also, with the 99/100 user/group mask in your Docker, what perms should the appdata be? nobody/users + 777 like the unraid default `newperms`, or something else?
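     For reference, a minimal sketch of what the "nobody/users + 777" reset mentioned above amounts to (this is an assumption about a sensible default, not something the container author has confirmed; the path is the one used elsewhere in these posts):
     ```
     # Sketch only: reset ownership and open up permissions on the reused appdata.
     APPDATA=/mnt/user/appdata/crashplan-pro   # path from the posts above
     chown -R nobody:users "$APPDATA"
     chmod -R 777 "$APPDATA"                   # wide open; tighten once the container is confirmed working
     ```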
  3. > Since you are using the gfjardim's container, re-using the appdata is more complicated, since its content and the way stuff is saved is not exactly the same.
     Okay, Home->Pro is working perfectly with gfjardim's. Now trying to switch to this alpiney-pro one, not having much luck.
     - Switched `/Storage` for `/data` to match my existing path to `/mnt/user`
     - Copied the old `/appdata/crashplan` to `appdata/crashplan-pro`, renamed the `id` dir there to `var`
     - No luck - it wants me to sign in (and sign-in fails)
     Any other files/settings that need to be remapped to reuse the old appdata?
  4. Upgraded Home to Pro, this container picked up the change and carried on working perfectly. If I stop and start it, it'll be the old green version before it upgrades itself to the blue version, but that doesn't take long, and is near-transparent.
  5. If I'm currently on CrashPlan Home using gfjardim's container, and I want to upgrade to Pro...
     - Should I do the Home->Pro upgrade first, or switch containers first?
     - For switching containers, is it possible to reuse the appdata config without going through the "adopt backup"?
     - (Scared of doing something wrong with the "adoption", and having to start over uploading TBs...)
  6. Small server goes down a lot... but the sickrage devs set up their own mirror at https://donna.devices.wvvw.me/sickrage/sickrage.git - try that instead.
  7. If anyone else is stuck, I found a mirror...
     ```
     $ docker exec -it sickrage /bin/bash
     $ git clone https://distortion.io/git/mirror/sickrage.git /app/sickrage
     $ chown -R abc:abc /app /config
     ```
     Wait a few minutes, and the webUI will be up okay.
  8. Wow, this echol0n is a prick. Is there a mirror of sickragetv/sickrage anywhere that the docker could point to in the meantime? Note to self: start mirroring repos I care about...
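     On the "start mirroring repos I care about" note, a minimal sketch of keeping a local bare mirror (the upstream URL and destination path here are placeholders, not anything from this thread):
     ```
     # Sketch only: keep a bare mirror of an upstream repo and refresh it periodically.
     mkdir -p /mnt/user/backups/git-mirrors
     cd /mnt/user/backups/git-mirrors
     git clone --mirror https://github.com/SOME_USER/SOME_REPO.git   # placeholder upstream
     # later, e.g. from a cron job:
     cd /mnt/user/backups/git-mirrors/SOME_REPO.git && git remote update --prune
     ```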
  9. Finally! Weeks later, all converted. Copying files to my new XFS cache drive off the old reiserfs cache drive (mounted with Unassigned Devices), then I'll fire up all the dockers and see how long I go without getting a shfs hang..
  10. > Removing a disk will require rebuilding parity.
      After I finish moving all the files off, I was planning to try this "shrink without rebuild" method: https://wiki.lime-technology.com/Shrink_array#The_.22Clear_Drive_Then_Remove_Drive.22_Method Will that not work?
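      For context, the heart of that linked "Clear Drive Then Remove Drive" method is zeroing the emptied disk through its parity-protected md device, so parity stays in sync and the disk can then be dropped from the array without a rebuild. A rough sketch of the idea only - the wiki's actual clear script adds safety checks, and the device number below is hypothetical:
      ```
      # Sketch only: zero the (already emptied) data disk via its md device so
      # parity is updated as the zeros are written. /dev/md3 is a hypothetical
      # example - use the md device matching the disk being removed.
      dd if=/dev/zero of=/dev/md3 bs=1M status=progress
      ```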
  11. > It would be faster to do it largest to smallest
      Ah crap - already started the smallest... Ah well, after I'm done cleaning off the 1.5TB drive, I'll take it out of the array, and use that physical slot to bring in the other blank 8TB so I can start moving the big reiserFS drive over to that.
      > I would copy, not move, the entire contents of the 8TB ReiserFS disk to your new empty XFS disk
      How do you do that in such a way that parity isn't invalidated? Doesn't unRaid get confused when it has the same file on two different disks?
      > then when the copy is complete and verified, format the ReiserFS disk.
      How to verify? (Rough idea sketched below.)
      > and completely REMOVE all the 2TB and under drives.
      Planning on removing the 1.5TB and at least some of the 2TBs...
      > I would only use BTRFS on a perfectly stable server. It's too brittle for my liking.
      I'll get that one before the rest of the reisers - I'm not attached to BTRFS, I just randomly picked that one back before the strong community preference for XFS had formed.
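      A minimal sketch of one way to do that copy-and-verify, assuming hypothetical disk numbers (disk3 = old ReiserFS disk, disk7 = new empty XFS disk). Copying between /mnt/diskX paths still goes through unRaid's parity-protected md devices, so parity stays valid; the user share view just shows the duplicated paths once until the old disk is reformatted.
      ```
      # Copy disk-to-disk (not through /mnt/user), preserving attributes:
      rsync -avP /mnt/disk3/ /mnt/disk7/

      # Verify: a checksum-only dry run; any path it prints differs between the two copies.
      rsync -rcn --out-format="%n" /mnt/disk3/ /mnt/disk7/
      ```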
  12. > Post a screenshot of your Main page of the GUI if you want a gameplan to start the process.
      Attached - you can see the last drive is an empty 8TB - that was my precleared hot spare. I just added it in, formatted it as XFS, and started moving data from my smallest drive over. I figure I'll move a few 1.5TB/2TB drives onto there, then xfs them up. I also have another 8TB spare (precleared, but no room to plug it in - was going to keep it for the next time a drive dies).
      > Also, do you keep strict control over which files / shares reside on which slot number?
      Nah... occasionally I use `diskmv` to load balance them so related folders are grouped, but I don't have any per-drive shares.
  13. I've been hitting the "100% CPU on shfs, everything unresponsive" issue about once a week, and having to hard reboot. It's been happening on all of the 6.3.x releases (including the current 6.3.5), and is the only thing I don't love about unRaid. Almost all of my drives are reiserfs - guess it's time to start the slow migration off...
  14. So I'm working on templating a dockerhub container to make it unRaid-friendly: https://github.com/JustinAiken/unraid-docker-templates/blob/master/bitcoinunlimited/bitcoinunlimited.xml
      - How's that looking so far?
      - How do I get it added to CA so it shows up when someone searches "bitcoinunlimited", without having to go through the dockerhub option?
  15. Uneventful upgrade from 6.3.4, thanks for the quick security update!
  16. I would absolutely love that! I've spent a good chunk of the day making something nowhere near as pretty.
  17. Great plugin! Just discovered it - I can finally retire my old Google sheet! So far I haven't seen any of the problems described above. A few feature requests:
      - The main page is very slow to load - it feels like it's fetching all the available SMART data before it renders anything. Could it be AJAXed up, so the page and static data display instantly, then values that change (like load cycle count) fill in afterwards?
      - Maybe an option for a newline after some elements? The data on the label is a bit hard to read; it would be nice to pick which items get their own line.
      - Huge +1 to the colored heatmap idea!
  18. Rebuilt a 2TB drive onto a new 4TB drive (ancient 2TB started showing bad sectors) using 6.3.4 - completed without issue, quickly
  19. Been running 6.3.4 without incident for a day since a smooth update from 6.3.3 :+1:
  20. I have a massive number of "The following files exist within the same folder on more than one disk." warnings... What's the best way to fix that? Do I need to manually `stat` the files on both disks and delete the older/smaller copy by hand, or is there an automated way to fix it?
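      A minimal sketch of one way to list the duplicates by hand, assuming the standard /mnt/disk* mounts (this only finds them - deciding which copy to keep and delete is still manual):
      ```
      # Print relative paths that exist on more than one data disk.
      for d in /mnt/disk*/; do
        (cd "$d" && find . -type f)
      done | sort | uniq -d
      ```
      From there, `ls -l /mnt/disk*/<path>` on each reported path shows which copy is newer/larger before removing the other.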
  21. ```
      PID    USER  PR  NI  VIRT     RES    SHR  S  %CPU   %MEM  TIME+      COMMAND
      14102  root  20  0   1259740  48048  876  S  200.0  0.3   605:19.65  shfs
      ```
      - I do have cache_dirs, but it's not enabled for user shares
      - Mostly reiser drives, but a few XFS
      - Too frozen to get diagnostics or anything
  22. On 6.3.0/6.3.1/6.3.2, I've noticed that files I create on the Mac through a samba share sometimes end up on the unRaid server with odd permissions:
      ```
      $ touch /Volumes/cache/hi
      $ ssh tower
      $ ls -lart /mnt/cache/hi
      -rw-r--r-- 1 jaiken users 0 Feb 13 18:00 hi
      $ exit
      logout
      Connection to tower closed.
      $ chmod -r /Volumes/cache/hi
      $ ssh tower
      $ ls -lart /mnt/cache/hi
      --w------- 1 jaiken users 0 Feb 13 18:00 /mnt/cache/hi
      ```
      Isn't it supposed to set `nobody:users` and the permissions for you automatically?
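      One possible workaround (an assumption on my part, not something confirmed in this thread) is forcing ownership and creation masks on the Samba side via unRaid's SMB extras file. Whether this conflicts with unRaid's auto-generated share definitions is an open question:
      ```
      # Sketch only: force ownership/permissions for files created over SMB.
      # Settings -> SMB -> Extra Configuration edits this same file; "[cache]"
      # is just the share from the example above and may need adjusting.
      cat >> /boot/config/smb-extra.conf <<'EOF'
      [cache]
        force user = nobody
        force group = users
        create mask = 0666
        directory mask = 0777
      EOF
      /etc/rc.d/rc.samba restart   # reload samba so the change takes effect
      ```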
  23. Actually, since updating I've noticed that my Mac keeps disconnecting all the SMB shares, and I have to manually reconnect. It seems to happen mostly when I open a large directory spread across many drives...