Froberg

Everything posted by Froberg

  1. If you want to access it via file explorer, you have to map the sync folder as a user share, share it over the network, and give your user permissions to the share. Or just add stuff to your client and let it sync.
  2. Usually you mean to sync from somewhere to somewhere else, not necessarily two-way. If you want to sync from a device to your NAS, for example, you select the stuff you want to sync on your device and get a read-only key, then enter that key in your NAS GUI and it will start syncing everything in that folder on the device. The same goes the other way around. This is a file-sharing or sync application, not a Dropbox-type thing. If you require that, go for something like Nextcloud or ownCloud.
  3. That's interesting - and good to know, too, as I might need to use that myself eventually. Thanks for sharing. I was going to get back to you, but I've been swamped at work.
  4. Just add however many volume mappings you want and point them at the specific folders? The default set-up is intended, I believe, for a simple 1:1 sync between two folders on two devices; using it to sync multiple locations is outside of normal usage, but it should work fine regardless. Adding the specific volume mappings should make them visible in the UI just fine, though.
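     For illustration, something like this - a rough sketch with made-up host paths and the linuxserver image as an example; on Unraid you'd add these as extra Path entries in the container template rather than running docker by hand:

         docker run -d --name=resilio-sync \
           -v /mnt/user/appdata/resilio-sync:/config \
           -v /mnt/user/Photos:/sync/Photos \
           -v /mnt/user/Documents:/sync/Documents \
           -v /mnt/user/Music:/sync/Music \
           linuxserver/resilio-sync

     Each mapped path is then available to add as a folder inside the Sync UI.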
  5. Technically possible. You can map any share to any docker container via docker volume mappings. You'd still have to assign the path to a user somehow, but that's more on the Nextcloud side I suspect - or via symlinks maybe. I managed it long ago on OwnCloud, so it should be do-able, just haven't bothered with it on my own install.
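     A rough sketch of the volume-mapping half, with made-up paths and a generic image name - exposing the path to a Nextcloud user is then done via the External Storage app or a symlink plus a rescan, which is the part I haven't re-tested on Nextcloud:

         # map an existing share into the container alongside the normal data volume
         docker run -d --name=nextcloud \
           -v /mnt/user/appdata/nextcloud:/config \
           -v /mnt/user/nextcloud-data:/data \
           -v /mnt/user/Media:/mnt/media \
           linuxserver/nextcloud
         # then add /mnt/media through the External Storage app, or symlink it into a
         # user's files directory and re-index (exact occ invocation depends on the image):
         docker exec -it nextcloud occ files:scan --all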
  6. Dude I have 16 GB memory for 30TB of data and 9 docker containers - and I'm barely using any at all. Do not worry about it. Sample pic;
  7. Keep in mind, if you're unaware, that with Unraid you're not exactly running RAID. You need to understand this, as you lose all the performance benefits that come with traditional RAID levels like RAID5 - i.e. in most cases you're bound by single-disk performance limits. Inherent in that, though, is a much smaller risk of complete data loss, since each disk is a self-contained entity and can be read by anything that can read XFS. This is also why cache is crucial - you want lots of speed when you need to quickly transfer a file. You will see a slowdown when transferring files larger than your available cache, as soon as the cache fills up (rough numbers at the end of this post). I'm assuming you've already read up on this, but just to be safe I thought I would mention it. FreeNAS is all about data integrity, bitrot and such, and has a large overhead requiring a ton of memory. I did not find it as easy to use as Unraid. I was blown away by the simplicity of Unraid and I cannot recommend it enough.
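     Those rough numbers - sizes and speeds below are made up, purely illustrative:

         # 400GB transfer with a 250GB cache: the first chunk lands at SSD speed,
         # the rest drops to single-disk speed once the cache is full.
         FILE_GB=400; CACHE_GB=250; SSD_MBS=450; DISK_MBS=150
         fast=$(( CACHE_GB * 1024 / SSD_MBS ))               # seconds at cache speed
         slow=$(( (FILE_GB - CACHE_GB) * 1024 / DISK_MBS ))  # seconds at array speed
         echo "roughly $(( (fast + slow) / 60 )) minutes"    # ~26 min vs ~15 if it all fit in cache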
  8. Yeah, it's all done via the GUI. In my opinion, having tried all the open NAS solutions on the market and some closed ones, it's the best one out there. The KISS principle rules here. 🙂
  9. No worries. Best of luck and keep us posted. Be sure to read up on Community Applications setup and in general have a look at the CA apps; there are some tools that might be of use. Dynamix, too. I use System Temp from Dynamix, Community Applications, power control buttons (because they're easy and I'm sometimes lazy), mover tuning, appdata backup, appdata cleanup, etc. Worth taking a look at when starting out to get a feel for things.
  10. @testdasi Being rated for 24/7 operation is nice to have, in my opinion. But it is just that, an opinion. I like the added warranty too, on IronWolf for example. But yes, you don't get the benefit of the RAID-specific features. I am speaking purely from reliability and experience with these drives over regular consumer drives under similar loads. As for removing the hard drives from external enclosures: although they're cheaper, I believe you void the warranty doing it? If not, that might be something I need to be doing too, for my secondary box. Agree with you on avoiding batches of drives from the same lot/vendor at one time.
  11. If you like WD, I can recommend Red drives for Unraid; I use them myself (10 and 6TB versions at the moment). Doing a quick calculation, it looks like your raw data exceeds 8TB used based on your screenshots, so if you want to get started cost-effectively I would recommend:
      * Getting a second SSD for the cache/VM drive.
      * Getting an 8TB drive for the parity drive. Two, for dual parity, if you can afford it.
      * Keeping your remaining drives in the system as is. You have the SATA ports for it. Only some controllers can cause issues - I'm using a Marvell/Intel one myself and haven't had any - although your mileage may vary.
      * Replacing any drive with another 8TB drive as soon as it fails, starts to cause issues, or you run out of capacity.
      * Replacing any other drive that fails or starts to cause issues with yet another 8TB drive.
      Based on your growth estimates and usage, that should see you with 4x8TB drives inside some reasonable time-frame, at which point you can retire all the smaller-capacity drives by removing them from the array entirely, leaving you with only NAS-capable high-capacity drives. It also spreads your investment over a longer period, letting you use your hardware for longer - which is better for the wallet, if nothing else - and leaves you with fewer spinning drives when finished, with a clear upgrade path for the future. If 12/16TB drives become more cost-beneficial down the line, you can always replace a parity drive with one and re-add the old parity disk as a data drive. This is pretty much how I started my home-server journey before actually starting to spec servers out from scratch. Oh, and in answer to your question: both the main server and the backup box are running Unraid, using Resilio Sync to keep a working copy. Cost per TB for the low-cost server is much lower, and for my amount of data (near 30TB) it's more cost-effective than online backup solutions. Resilio can also sync over the internet, so the backup box can be physically located anywhere without issue; someone just has to switch it on when it needs to be backed up. In my case a 1:1 copy with no versioning is totally fine. If you want versioning and more granular backup levels there are other docker containers for that, like Duplicati, which can even handle multiple backup destinations. I believe Backblaze might be an option too; if not, there's always Amazon and the like. With Duplicati your backups can also be encrypted before transit, ensuring your privacy. @testdasi appreciate the heads-up on NVMe - I was ill-informed on that one.
  12. NVMe works just fine. You should make sure to install Dynamix SSD TRIM from Community Applications. Reliability is only a factor for NVMe without adequate cooling, as far as I know. Parking: well, there are pros and cons. I let the drives spin down on my backup box, since that keeps drive temps down in the constrained case I'm using for it. On my production server they're set to spin down after something like five hours of inactivity, but given the apps I'm running I have yet to see it happen. There is some debate as to whether it's actually more harmful to let them spin down (head parking stresses the drives). My thinking on the subject is that my main server has NAS drives designed for 24/7 operation, so I prefer to keep them spun up, while my backup box uses regular consumer/non-NAS drives (basically whatever is cheap) and those are allowed to spin down for thermals and power consumption. I have 6TB NAS drives that have been spun up for 4 years so far, 365 days a year, 24/7, without issue. I have nine docker containers running, including Plex, and I'm only using something like 40-50GB of my cache drive for docker data. Just remember to keep your mappings in order, so that no bulk storage requirement lands on the cache. I keep the Plex DB on the cache for performance reasons, but for my Nextcloud installation all of the actual data resides on hard drives, while the service itself is on SSD... if that makes sense.
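      As a concrete, made-up example of what I mean by keeping the mappings in order - appdata on the cache, bulk data on the array (on Unraid these are just the Path fields in the container template):

          # Plex database/appdata on the cache SSD for speed, the media itself on the array
          docker run -d --name=plex \
            -v /mnt/cache/appdata/plex:/config \
            -v /mnt/user/Media:/media \
            linuxserver/plex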
  13. The backup server is quite functional, thanks. For my production server I go for NAS drives that are rated for it, but the backup box is only running a few days a month, so it matters less in my mind. Since your capacity requirements aren't exactly huge, I think you could benefit from going with larger drives - look at cost per terabyte (quick example of the maths at the end of this post); I believe 8TB is the most cost-effective at the moment, at least in my country. Then again, if you don't see your data growing you could do with 4TB drives instead. It's just important to remember that your parity drive MUST be as large as, or larger than, every other drive in the array. If you go with only one SSD it won't be protected by parity, so you should consider backup routines if you care about your Plex database or the VM data. You should do that anyway, but doubly so if you only have the one drive. I have 2x256GB SSDs in my production box and 2x128GB in my backup box. Think about what you actually need to run on SSDs - you don't need to keep any data on them. If a VM needs a data drive, you can easily assign it a share on the spinning-drive array for data storage and keep that data protected by parity, using the SSD for the OS only - reducing the space requirements. Let me know if you have any specific questions regarding that, and enjoy Unraid!
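      The cost-per-terabyte comparison I mean, with placeholder prices - plug in whatever your local shops charge:

          # compare a 4TB and an 8TB drive at made-up prices
          echo "4TB: $(( 120 / 4 )) per TB   8TB: $(( 200 / 8 )) per TB"
          # -> 30 vs 25 per TB, so the 8TB wins at these example prices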
  14. That did it. Documentation for those that want it: Permissions are now persistent. That's awesome. Kind of want to suggest this become a part of the main image, but hey, at least there's an option to get it workin'. Now to see if the watch directory will function. Thanks man, made my life a whole lot easier.
  15. Yes, I figured as much; it doesn't change how the docker behaves when starting up, though. It still sets restrictive permissions on the entire transmission directory. I haven't been able to figure out what is responsible for that.
  16. Okay, I did some cleanup and the appdata folder has now moved to the cache drive instead. I've set TRANSMISSION_UMASK to 000 and that does get replicated in settings.json (anything entered manually gets removed on restart). It still sets permissions like this, looking at the log during boot-up:
      Generating transmission settings.json from env variables
      sed'ing True to true
      Enforcing ownership on transmission config directories
      Applying permissions to transmission config directories
      Setting owner for transmission paths to 99:100
      Setting permission for files (644) and directories (755)
      The umask value is 0 in settings.json - changed from "2". No change. So what's setting the permissions? 🙂
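      For reference, this is roughly how the settings are going in - the CLI equivalent of what's in my Unraid template, assuming the haugene transmission-openvpn image; paths are examples:

          docker run -d --name=transmission-vpn --cap-add=NET_ADMIN \
            -e OPENVPN_PROVIDER=PIA \
            -e TRANSMISSION_UMASK=000 \
            -e PUID=99 -e PGID=100 \
            -v /mnt/user/downloads:/data \
            haugene/transmission-openvpn
          # (plus the OPENVPN_USERNAME/OPENVPN_PASSWORD variables for PIA)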
  17. Hi man, thanks for responding. Looking at the directory structures, I just realized that the data folder specified is the actual Transmission application data. I'll move that now and check around for further settings. I was unaware that changes to settings.json would be persistent. But it seems like the docker is setting the file permissions even before Transmission is started, effectively making me unable to use the share from another workstation over SMB. If you look at the log while starting it up, you'll see the same behaviour, I'm sure. Is it a byproduct of privileged access and thus not something that can be avoided? I mean, it'll be a bit more to manage, but I figure I can work around it... but it's been very insistent about not loading anything I manually put into the watch directory, even after setting permissions manually. It's a bit odd. Anyway, I'll test some more on my end and see if I can learn something. Thanks again.
  18. Hi all. Finally decided it was time for VPN and set up this wonderful docker. Everything is working flawlessly using PIA, except for some permissions weirdness. I'm not used to Transmission taking control of all files and folders so forcefully - as in, I can't even use my watch directory because of it. Can anything be done to facilitate this? Medusa hasn't run yet, but I'm betting that stuff like unpacking will become problematic as well. Edit: I've fiddled about some more with the UMASK mentioned on the previous page, with no luck. Here's what it's setting in the log for the docker:
      Enforcing ownership on transmission config directories
      Applying permissions to transmission config directories
      Setting owner for transmission paths to 99:100
      Setting permission for files (644) and directories (755)
      Basically I want to be able to use my watch folder via an SMB share and for stuff like Medusa to be able to move things around. Nothing too fancy. I have it set up very neatly with a regular Transmission container. Is there any reason this particular instance has to play naughty with the permissions? Is it to do with privileged access mode? If I disable that, it won't run at all.
  19. I've used Resilio way back to sync my data from my primary box to a secondary NAS. I've recently purchased a SoC motherboard and set up another home NAS to replicate this. Unfortunately, Resilio seems content to sit at around 120mb/s. If I disable the tracker options and set it to use LAN only, it does nothing. I've tried fiddling with the power-user settings, with encryption to see if that was the overhead, and whatever else I can think of. Any ideas? Both boxes have dual gigabit NICs and should be able to at least hit gigabit for the transfers. It will get there... eventually... of course, but the idea was to turn on the backup box for a few days a month to get synced and run a parity check, and otherwise keep it spun down. Any suggestions as to where I might be wise to look? They're both set to bridge in the respective docker containers; would host maybe have an impact? Solved it myself: setting the share on the receiving box to forced LAN and changing the docker containers on both sides to host mode drastically increased the speed.
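      For anyone hitting the same wall, the container-side change was just the network mode (plus setting the share to LAN-only in the Sync UI) - image name and paths are examples; on Unraid it's the "Network Type" setting in the template:

          # host networking instead of bridge, on both boxes, so LAN peers are reached directly
          docker run -d --name=resilio-sync --network=host \
            -v /mnt/user/appdata/resilio-sync:/config \
            -v /mnt/user/backup:/sync \
            linuxserver/resilio-sync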
  20. NVME without active cooling would be a concern and could be the cause of instability. Those bastards can get toasty really quickly.
  21. Are there any differences in bridge mode/host network configuration for the dockers? I.e.:
  22. Not sure why you'd mix a spinning disk and a 500GB SSD and expect speed... but what's the file system on there? Sounds like you didn't set it up for RAID1 maybe, if all data is inaccessible?
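      If it's a btrfs pool, you can check from the console which profile the pool is actually using - the path assumes the default Unraid cache mount:

          btrfs filesystem df /mnt/cache
          # "Data, RAID1: ..."  is what a mirrored pool should report;
          # "Data, single: ..." means no redundancy, so a failed device can take data with it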
  23. I have not messed with this, so take it with a grain of salt, but one option would be to split up your shares so that the normal file shares only use the spinning disks and your SSDs only hold the VM shares. It's a bit more manual, but it should let you control where the VM data is located. Again, I say "should", because I don't think this would be considered best practice.
  24. Sounds like my origin story, too, almost all of it. My computer was far shittier though and came with Windows ME. I remember one time fiddling with screen refresh rates, not knowing how they worked, and being unable to use the computer. My mom was NOT happy, lmao. Figured it out, because that's how you learn, but man was I worried because it was so expensive. My second one was an Athlon XP 1800 I believe. Massive upgrade!
  25. Your age is showing! My first system was a 443MHz! (To be fair, I did start with Windows 3.1 in school, but yeah.) In Denmark we call heavy laptops "luggables". This seems to fall into that category.