BLKMGK

Everything posted by BLKMGK

  1. These are a little more expensive now but still a good deal. With the 15% coupon and keeping the return policy, I paid $265 for the pair, or $132.88 apiece. Free shipping at that price level; they should be here sometime next week.
  2. 1TB from SanDisk - $180 https://www.amazon.com/dp/B01LY5ZZ4P/
  3. So, without stopping the mount script, just run the unmount script? Or stop the first, then run unmount? I have gotten errors terminating one and then running the other, so I try not to do it. It's a new feature in rclone with cautions around it, so I'm not too worried about it. I've mounted the ACD from another machine to monitor things rather than risking the NAS. I've hit about a TB, another 30+ to go lol.
  4. I've been using htop to find the process and kill it after the first time. Stopping the server kills it as well without a full reboot. User Scripts doesn't properly kill the script, but it's not hard to find. Honestly, how are they expecting you to properly stop a mount with rclone? The FUSE unmount command won't stop it while it's running, will it? You have to break or kill the process and then use FUSE? Seems clunky if that's right, but I know it's still under development (rough sketch of what I mean below).

     A comment on the rclone plugin: if I edit a script and hit apply to save it, I get taken back to the top config script (from memory). To have the script moved I have to drop back down to the one I just saved and move it. It might be better if apply didn't jump to the top so I could just move it? Or have I missed something?

     P.S. If you're moving large multi-gig files, use the --acd-upload-wait-per-gb switch. It sets a wait state that kicks in after each gig of data, because Amazon can take up to 30 seconds for the data to be posted. If rclone doesn't see the file and nothing makes it wait, rclone will start uploading it again and spiral itself into a circle. Need to add this to mine lol.

     Edit: Looking at the notes for 1.34, released recently, it *looks* like the wait per gig already has a default and can scale by file size, so this might not need to be set unless you see errors.
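     A minimal sketch of the stop/unmount helper I have in mind, assuming a hypothetical mount point of /mnt/disks/acd (adjust to whatever your own mount script uses):

         #!/bin/bash
         # Try a clean FUSE unmount first; if that fails, kill the rclone mount and retry lazily.
         MOUNTPOINT="/mnt/disks/acd"   # hypothetical mount point
         if ! fusermount -u "$MOUNTPOINT"; then
             pkill -f "rclone mount"        # kill the background rclone mount process
             sleep 2
             fusermount -u -z "$MOUNTPOINT" # lazy unmount as a last resort
         fi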
  5. I'm not mounting to it. Actually, my mount is in a bad state right now; both rclone and fusermount are telling me the device or resource is busy, and if I attempt to ls the mounted directory I'm informed that the transport endpoint isn't connected. I'll cycle the box and clear that when I don't have anyone else trying to access it. This happened after breaking a mount command with rclone, then unmounting, then trying a mount script via User Scripts. Not sure what occurred, but I see an update to User Scripts that just came down and fixes some PHP errors, so I'll blame it on that.

     Truly appreciate the clarification on user and user0, whew! I'd have gotten around to looking it up eventually. I like the idea of adding a read-only Amazon share; I've gone ahead and used your code in the Extras section. I assume this will come into play upon a reboot? I won't mess with it for now, but doing it read-only seems a smart way to go. When I fix the issue with my current mount I'll move to using this one - thanks! (A rough idea of a read-only mount is sketched below.)

     For anyone who cares, here's the rclone command I'll be using to back things up for now; examples always seem to be helpful for me. My target name is crypt and the configuration will create a subdirectory on the target for me. I have a 100 megabit connection and I'm reserving 6 megaBYTES per second for upload. I'll be generating a log file that I'll try to tail in a shell. I'm limiting file size to 50 gigabytes, as that's the ACD max file size and my backups exceed that (stupidly):

         rclone --log-file /mnt/user/work/rclone.log --max-size=50G --bwlimit=6M sync /mnt/user/ crypt:

     Edit: Bah, the state of the mount prevents me from running it. Server shutdown is going poorly too; the endpoint being in a user share likely isn't helping. May have a lengthy parity check in my future. Okay, finally fusermount -u worked and sure enough the server stopped fine, whew.
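     A hedged sketch of what that read-only mount could look like - the crypt remote name matches my config, but the /mnt/disks/acd mount point is just an example (not the actual code from the Extras section), and it assumes rclone mount's --read-only flag:

         # Mount the crypt remote read-only so nothing on the server can accidentally write to ACD.
         mkdir -p /mnt/disks/acd
         rclone mount --read-only crypt: /mnt/disks/acd &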
  6. The "disks" folder is normally created by the Unassigned Devices plugin. Setting the mount point to be within /mnt/disks is really only required if you need the mounts created by this plugin to be utilized by a docker container. If you don't require that, then it doesn't matter where the mount is located. Perhaps it would be a good idea for this plugin to automatically create the disks folder upon installation to eliminate any confusion. I went ahead and installed the Unassigned Devices plugin, I now have a Disks folder. However it's completely empty and I see no way to get to it from SMB etc. so I'm not sure what good it would do for me. Assigning mount points under an existing share works well for being able to see the cleartext progress of what's being loaded to ACD. One interesting issue I'm puzzled about though that I somehow hoped that the Disks share might solve is that I appear to have two main subdirectories that I need to backup: user and user0. At first glance the data in them appeared to be the same but then I began noticing that there were files in each of them that didn't exist in the other - or the sorting was whacky I've tried specifying two different targets for rclone but this appears to be a no-go. I'm not certain what to do - two different jobs run back to back maybe? Anyone else gotten around this? I could jigger mount symlinks or something but i don't want to accidently recurse something or end up moving data 2x. I've got enough data that doing just user will take a month or so, hopefully a solution presents itself or I'm stupid and missed something lol
  7. Yup, found it! Honestly? I need to find a damn GUI to build the command line for this sucker. I posted a few useful flags above!
  8. What I'm trying to tell you is there's no such directory as "disks" on my system, and apparently not on the systems of others either. Do I need to have Docker installed in order for that to appear? I'm not using it and I don't know, but please be aware that not everyone has that directory, and it can be confusing when others mention it. I have an ESX server that runs many of my external sorts of programs in VMs, but I'll get around to Docker and KVM eventually.

     I look forward to seeing rclone as a daemon. Still puzzling it out, but I've run a successful test of both it and Syncovery, and ACD is rocking for me! Close to a half TB went up, no problem. How long does rclone need to scan a large datastore? I'm wondering if I should do this all in one pass or not - it'll take "awhile" lol. No problems starting where it left off. I wish it could tweak files a la rsync. Lots to learn, but I really appreciate the plugin to access the scripts more easily!

     P.S. There looks to be nothing native in rclone to throttle its bandwidth? I can do this at my firewall, but it's a shame it's not native as it certainly has the ability to be a hog!

     Edit: I was wrong, there is a bandwidth limit: --bwlimit. Looks like there's also a 50 gig per-file limit on ACD; using --max-size=50G should solve that, since I've got a ton of system backups with 500 gig files. There's also a dry-run flag, -n, but it can still take ages to return. I am sending output to a log file with --log-file, but I cannot access it via Windows as its permissions are whacked since I run as root, so I need to sort that (example pulling these flags together below).
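     Pulling those flags together, something like this is what I have in mind - just a sketch using my own paths, and the ownership/permission fix at the end is only a guess at making the root-owned log readable over SMB:

         # Dry run first to see what would transfer, with bandwidth and per-file size caps.
         rclone -n --log-file /mnt/user/work/rclone.log --max-size=50G --bwlimit=6M sync /mnt/user/ crypt:
         # Loosen up the root-owned log so it can be read from a Windows share.
         chown nobody:users /mnt/user/work/rclone.log
         chmod 644 /mnt/user/work/rclone.log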
  9. Good thread! I too ran into the issues with mount points being abandoned and getting stuck; the fusermount command to unmount cleared it right up. I found that the User Scripts plugin had issues with the comments in the script for some reason; removing them and leaving the bash line cleared it. I do notice folks talking about a /mnt/disks directory. Under /mnt I too have my disks listed and two user directories, user and user0. Do others have something named disks? I've been creating a mount point under one of the directories in /mnt/user/ and it seems to work fine. Still figuring out how best to run this in the background and schedule it, but this seems a viable alternative to Syncovery, which I'd been looking at.

     Have folks figured out exactly what needs to be stored offsite to make recovery easiest? Is it simply the config file from rclone? Is anyone encrypting that file? Storing it on ACD in a zip with a password might be a good way (rough sketch below); any reason to include the rclone binary with it, maybe? I'd hope they won't break compatibility along the line.
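     A sketch of the zip-with-a-password idea, assuming the config lives at the old default of ~/.rclone.conf (newer rclone versions use ~/.config/rclone/rclone.conf) and that zip is available on the box; zip -e prompts for the password interactively:

         # Password-protect a copy of the rclone config and push it offsite alongside the backup.
         zip -e /tmp/rclone-conf-backup.zip ~/.rclone.conf
         rclone copy /tmp/rclone-conf-backup.zip crypt:config-backup
         rm /tmp/rclone-conf-backup.zip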
  10. No, I hadn't - thank you! I've been tinkering with Syncovery's beta and a manual install of rclone on a Linux box I've got; hopefully this will make life easier. I'll try it now.
  11. I was on the cusp of purchasing a Syncovery license to sync with ACD, but rclone on my server directly might be more straightforward. Crypto is a 100% must for both content and filenames, and rclone appears to support this - is anyone using crypto on their unRAID backups? How are folks throttling bandwidth usage? Lastly, exactly which rclone files need to be backed up in order to restore access in the event you need to do a restore on fresh hardware? My goal in doing this is to have a secure backup in the event of a fire, flood, or theft, so an offsite backup of the critical files needed to decrypt my backup is kind of important.

      Some of the best information I've found on rclone has been on datahoarders, and this thread below has been helpful. Folks might also be interested in acd_cli for mounting ACD drives remotely, although it looks like rclone may be able to do that now. http://rclone.org/crypt/ is the rclone documentation for rclone crypto (a rough idea of what a crypt remote ends up looking like is sketched below).
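      For anyone curious, a crypt remote per those docs ends up as a couple of sections in rclone.conf along these lines - the remote names, path, and obscured password values here are just placeholders, and rclone config generates the real thing interactively (filename_encryption = standard is what encrypts the file names):

          [acd]
          type = amazon cloud drive
          client_id =
          client_secret =

          [crypt]
          type = crypt
          remote = acd:backup
          filename_encryption = standard
          password = OBSCURED_PASSWORD_PLACEHOLDER
          password2 = OBSCURED_SALT_PLACEHOLDER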
  12. Grabbed two via the regular Newegg site - thank you! I had just missed out on a 4TB the other day, so this works out well to upgrade parity and a data drive.
  13. Running into this exact issue. I was rebuilding a drive after a disk swap and Mover kicked off after a few hours. Got up this morning expecting to find my disk rebuilt and instead found an estimate of 60 days with only 27% completed. The parity speed was in the hundreds of K. I've stopped the parity rebuild and Mover is taking ages to move data - 11 gig of movie files. I have plenty of CPU, plenty of RAM; Mover uses rsync, yes? I guess I'm just antsy because I know my array isn't protected by parity right now, but this seems to be taking quite a long time. The read/write stats seem to be moving awfully slowly, and the count of data used on cache is also dropping slowly, although I realize it won't change until files are erased from it by Mover. Another file just moved (I've got logging for Mover turned on) and it took 7 minutes to move a file that was under 1.5 gig in size. That seems glacial, but I've got no way to figure out where the bottleneck might be.

      Naturally, one of the disks that Mover is pushing files to is the one I've just replaced with a larger disk (the previous one was completely full), so this likely plays into the problem. It would be helpful if there were some way to halt the Mover process. I suspect a reboot wouldn't harm things and would halt Mover until its next scheduled run, but I cannot currently stop the array because Mover is running, and I refuse to do a hard boot with the array in the state it's in. It might also be nice if Mover and parity weren't allowed to conflict - parity should take precedence IMO. It makes perfect sense that the two processes thrash the disk subsystem, and had I realized the issue I'd have wanted to avoid it for sure!
  14. Just harvested and attempted to use a 5TB external portable like this one. I ran into some seriously bizarre issues clearing the drive. Performance started out at 150MB/s but within an hour was down to 30MB/s, and it never rose. I did some research and found a few postings about this same thing: https://lime-technology.com/forum/index.php?topic=34052.0 It seems the 5TB may be shingled drives; I feel pretty certain the 8TB drives are. When I tried to abort the clearing process my system hung badly; it's currently undergoing a parity check and I've packed up the drive to send back. I'm not comfortable using one of these as a parity drive, and I wish I could've tested it more, but I'm not willing to risk data over it. Supposedly the portable drives have aggressive head parking and sleep cycling in the firmware and may not be too great for our usage patterns. Food for thought if you're considering one, at least...
  15. What are you using for parity? Reviews on Newegg are mixed but overall seem to say the drive works fine. I'm worried that for parity it might be slow, but I'm intrigued for sure! I'm clearing a normal 5TB drive now for parity, but for the size these seem a pretty good deal if they work well. I'm using a cache drive, so write speed might not be too big a deal; however, could parity checks really drag out?
  16. http://www.amazon.com/Seagate-Desktop-External-Storage-STDT5000100/dp/B00J0O5R2I/ Easy to shuck - I've got one preclearing now.
  17. Are these shingled drives? They're "archive" drives, so I think they are, which means they're probably pretty slow. Amazon has them for $233 http://www.amazon.com/dp/B00XS423SC/ and NextWarehouse - which I've never heard of - has them even cheaper! http://www.nextwarehouse.com/item/?1657269_g10e
  18. Price was bumped to over $180 - d'oh but thanks anyway!
  19. Why map drives? Aren't they all in shares? Something like \\server\sharename? That way you can scan multiple disks together... Sent from my iPad using Tapatalk
  20. For ESXi sharing I have a FreeNAS VM; it works great and is fast! It has its own passthru controller card and keeps me from worrying that I'll screw up my media storage somehow. Cheap media players? They will ALL work fine with Samba; it's what I've used on my unRAID since day one for XBMC and I have zero issues. Time Machine, however, is a biggie; it's one of the big reasons I set up an unRAID server for a friend as a Christmas gift - not yet being used for TM, but it will be. Keeping them on old software won't hurt, but there's demand for TM whether AFP is desired or not. NFSv4 sounds fine to me, but don't be surprised if right after v3 is dropped we discover things that break (shrug). Worth a shot I think! Heck, I might even try using NFS for a change :-P Sent from my iPad using Tapatalk
  21. I use ClearOS on a small Atom appliance and have for years. I need to upgrade to the newest code but have enjoyed their free version for a while. I would like some better reporting and whatnot, though, for sure. I'd love to find something that logged DNS queries too! Darkstat on unRAID has been handy as well. Sent from my iPad using Tapatalk
  22. I wouldn't simply reboot - use the powerdown script. Sent from my iPad using Tapatalk
  23. I'll throw my hat in on the Plex VM issue. I run this in a Win7 VM under ESXi at the moment. That VM would be lucky if I've given it more than 4 gig of memory, although if anyone really cares I can look when I get home. I'm not sure what constitutes a large library, but over 1500 movies and probably that many TV shows, if not more, ought to qualify I'd think? Plex has no issues in this VM for me; memory and performance appear fine. I just tested both a movie and a TV show streamed over cellular, and they worked fine, with the movie having to be downsampled a great deal I'm sure ;-) I may yet set this up in a VM under Xen, but the last time I tried to set this up under Linux I ran into issues mounting my shares - I use SMB vs NFS, which I suppose I should try ;-)

      Bottom line: Plex performance in a VM should be fine if you can run it decently bare metal. Try it and see!

      P.S. I'd point out that of course you can crash the server if you run poorly behaving code on the host. This is true for ESX and any other virtualization technology. The host should be protected as much as possible! Sent from my iPad using Tapatalk
  24. There are some actual GUIDs that look like that! My problem is solved, btw; Tom was prompt but Hotmail ate his message :-( Sent from my iPad using Tapatalk
  25. These days I'd be seriously considering Haswell-based hardware. It's faster and much more energy efficient. The ability to virtualize is a big plus, HOWEVER don't buy anything K based! I built my first machine to run ESX and used a Sandy 2600K - I promptly found that the VT-d option was grayed out. Turns out that Intel unlocks these CPUs and then promptly kills the VT-d option, so no passthru allowed :-( It ran fine otherwise without VT-d. I also have an AMD system that, when booting FreeNAS, flashes a message telling me it's a no-go for KVM. I never did figure out what was up with that and moved to an older i7 920, which seems to work fine with XenServer. I'll be trying 6.x as soon as I get my license :-) Sent from my iPad using Tapatalk