Energen

Members
  • Content Count: 380
  • Days Won: 1
  • Community Reputation: 68 (Good)
  • Rank: Advanced Member
  • Profile views: 4144

Energen last won the day on July 19, 2020 and had the most liked content!
  1. I'd just plan for the future and go with the largest-capacity UPS you can afford without the price getting unreasonable. You can never have too much power, only too little. If you ever expand your hardware or want to provide backup power to more devices, you may find yourself needing to buy another UPS when you could have just bought a larger one in the first place. Some people here have used Bluewalker with no complaints. I like CyberPower, but they do not seem to be globally available. APC is usually overpriced but is a fine brand. All will work with Unraid.
  2. I could be entirely wrong here, but I suspect the share-to-share transfer is slower because Unraid also has to update parity at the same time. Cache-to-cache involves no parity, so it's speedier, and NVMe-to-NVMe isn't bottlenecked the way HDD-to-HDD is (a theory, anyway). A toy sketch of the parity cost is below.
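For what it's worth, the parity overhead is easy to see on paper. A toy sketch of single-parity's read-modify-write (values are made up, just to show the mechanics):

```bash
#!/bin/bash
# Unraid-style single parity is an XOR across the data disks, so overwriting
# one block costs extra I/O: read old data, read old parity, XOR both with
# the new data, write the new parity back. Cache pools skip all of this.
old_data=0xA5; old_parity=0x3C; new_data=0xF0

new_parity=$(( old_parity ^ old_data ^ new_data ))
printf 'new parity block: 0x%02X\n' "$new_parity"
```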
  3. The cache drive and array drive are in the same computer, so network speed doesn't have any effect on it. More likely you are overrunning your SSD/NVMe drive's onboard cache, which then reduces write speed to the drive's actual sustained throughput. Unless I'm misunderstanding where you're moving data to the cache drive from, or vice versa. A quick way to test this is sketched below.
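If you want to confirm the onboard-cache theory, a long sustained write will usually show it: the rate starts high and drops once the drive's cache fills. A quick test along these lines (path and size are just examples):

```bash
# Write ~20 GB straight to the cache drive, bypassing the RAM page cache;
# watch the reported rate -- a drop partway through usually means the SSD's
# onboard/SLC cache filled up. Clean up the test file afterwards.
dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=20000 oflag=direct status=progress
rm -f /mnt/cache/ddtest.bin
```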
  4. Your "best" solution would be to minimize the number of pieces of hardware involved and use the largest-capacity drives possible in one case. Your second-best solution would be to examine whether you actually need that much capacity to begin with. If both of those options fail, then you could use a secondary case such as either of those and, to my knowledge (I have no direct experience with this), run a SAS cable to a SAS expander card in your Unraid server. The exact details of what cabling is required and how to go about that option are beyond my knowledge, but there are people here who have done it.
  5. According to TorGuard's website, their anonymous proxy service includes VPN access... so if that's the case, then you should just be using the VPN with the rtorrentvpn docker, roughly along the lines of the sketch below.
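I haven't run TorGuard myself, so treat this as a sketch: with the binhex rtorrentvpn container you'd point it at a custom provider, roughly like the run command below (variable names per the container's docs; values and paths are placeholders, and TorGuard's .ovpn file goes in the container's openvpn config folder):

```bash
docker run -d --name=rtorrentvpn \
  --cap-add=NET_ADMIN \
  -p 9080:9080 \
  -v /mnt/user/appdata/rtorrentvpn:/config \
  -v /mnt/user/downloads:/data \
  -e VPN_ENABLED=yes \
  -e VPN_PROV=custom \
  -e VPN_USER=your_torguard_user \
  -e VPN_PASS=your_torguard_pass \
  -e LAN_NETWORK=192.168.1.0/24 \
  binhex/arch-rtorrentvpn
```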
  6. Very neat. Just to clarify, which one did you use? PiKVM?
  7. I would use the SSD as a cache drive and have the VMs run off that. Is it possible to back them up? Yes. You can have the mover do it or you could do it yourself (see the sketch below), but if your VM becomes corrupted in some aspect, that corruption is backed up as well... so there may not be any point in backing up your VMs. You don't say what your current setup is to begin with, so this would also give you the ability to use the SSD for your normal appdata cache too. You can use a cache pool to have mirrored cache disks in the event that the SSD physically fails, if you wanted to.
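If you did want to roll your own instead of relying on the mover, a user script along these lines would do it. A minimal sketch, assuming the vdisks live in /mnt/cache/domains (both paths are just examples):

```bash
#!/bin/bash
# Copy VM vdisks from the SSD cache to the array. Shut the VM down first or
# the copy can be inconsistent -- and remember, corruption inside the VM
# gets copied right along with it.
SRC="/mnt/cache/domains"
DEST="/mnt/user/backups/domains"

mkdir -p "$DEST"
rsync -a --progress "$SRC/" "$DEST/"
```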
  8. All these random encryption problems... I'm starting to regret going through the process of moving data and encrypting each drive one by one. I can't help, as I have no intention of changing my passkey, but wow, the encryption on Unraid is really problematic in so many ways.
  9. So in a roundabout way I do what you want to do, although it's a bit of setup and not quite 'out of the box' ready. I've got a Dropbox docker container running, and I've got a user script that runs nightly (sketched below). Essentially this just copies the boot drive to my Dropbox folder; it's not compressed (easily done), and it's just kind of a just-in-case... I use CA Appdata Backup also, but in the event that "something" unforeseen happened and my array was offline and the boot drive was dead, having the CA Backup on the array does me no good.
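The script itself is nothing fancy; a minimal sketch of the idea, assuming the Dropbox container syncs a folder like the one below (both paths are examples, adjust to your layout):

```bash
#!/bin/bash
# Nightly copy of the Unraid boot (flash) drive into the folder the Dropbox
# container syncs, so an offsite copy exists even if the array is offline.
FLASH="/boot"
DROPBOX="/mnt/user/appdata/dropbox/Dropbox/unraid-flash"

mkdir -p "$DROPBOX"
rsync -a --delete "$FLASH/" "$DROPBOX/"

# Optional: keep dated compressed copies instead
# tar -czf "$DROPBOX/flash-$(date +%F).tar.gz" -C / boot
```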
  10. Don't take my word for it 100% since I never used this card... I didn't feel the need to spend $200+ just to get my UPS on the network when USB / apcupsd was enough for me... but I think you will just need to use the IP address of the RMCARD in Unraid's UPS settings to read the UPS status, something like the settings sketched below.
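Again, I've only ever used USB, but Unraid's UPS settings are a front end for apcupsd, and apcupsd can talk to a network management card over SNMP. In apcupsd.conf terms it would look something like this (IP, vendor, and community string are placeholders that depend on how the card is set up):

```
UPSCABLE ether
UPSTYPE snmp
DEVICE 192.168.1.50:161:RFC:public
```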
  11. Seems that the Linux VM route is the only way... unless anything has changed since 2016.
  12. It's not so much the features that make it expensive; it's the fact that it's server hardware. Server hardware is just more expensive, even when it has fewer features than consumer hardware. One argument, depending on whether you believe it or not, is that server hardware is "more stable" than consumer hardware in a server environment. And some of that may go back to the ability to use ECC RAM when consumer boards didn't have an option for ECC... a lot of consumer boards do these days, especially for AMD chips. Look at the comparison between the specs for these two boards.
  13. So I'm probably doing something wrong with my implementation of VMs, because I never really feel that a VM is good for any sort of actual productive work... it's just too slow and clunky, and less responsive than an actual PC. And video editing? Forget about it... in my view. But a lot of that may be due to the limited specs of my hardware, for sure. I'd probably say that by the sounds of it you do not need that ASRock board... it's a server board with server features that you probably don't need or want. I would save money on the board and maximize the CPU and RAM instead.
  14. I won't speak for housewrecker, but I can probably take a stab in the dark and say that scanning a folder will never happen with this tool. It retrieves information (movie IDs) from Plex in order to build the related information -- Plex already does the hard work -- GAPS would have no way of knowing any movie ID or where to scrape info from without Plex's already-existing ID. Make sense? You can see those IDs yourself with a query like the one below.
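If you're curious where those IDs live, you can pull them straight from Plex's API; a sketch, with the IP, token, and library section number as placeholders for your own setup:

```bash
# List the guid attributes Plex has already resolved for a movie library.
curl -s "http://192.168.1.10:32400/library/sections/1/all?X-Plex-Token=YOUR_TOKEN" \
  | grep -o 'guid="[^"]*"' | head
```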
  15. Haven't come across anyone making a script like this before, so I did it for myself... finally got around to making a basic script to back up Plex's databases (only). I've always had Plex's appdata folder excluded from CA Backup because I didn't want 100000000000000000's of artwork/info files to be backed up, but I realized that it would be prudent to at least have the databases backed up (library contents, play history, etc.). Artwork and info can be recreated when needed. So this is what I quickly did, and it suits my personal needs; a sketch along those lines is below.
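A minimal sketch of that kind of script, assuming a typical appdata layout (paths and the container name are guesses; adjust to your setup, and stop Plex first so the SQLite databases aren't copied mid-write):

```bash
#!/bin/bash
# Back up only Plex's databases (library contents, play history, etc.),
# skipping the mountains of artwork/metadata files.
PLEXDB="/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
BACKUP="/mnt/user/backups/plex-db/$(date +%F)"

mkdir -p "$BACKUP"

docker stop plex                # assumes the container is named "plex"
cp -a "$PLEXDB"/com.plexapp.plugins.library.db* "$BACKUP"/
docker start plex
```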