Everything posted by JonathanM

  1. Taking your thread title at face value, I recommend keeping a full, up-to-date backup of your 1TB data drive on your two 500GB drives, preferably disconnecting them and storing them in a safe spot between backups.
  2. What speed is your RAM running? The maximum speed your processor and board can handle may be well below what your RAM is specced to run.
  3. What is probably happening is that the vdisk file is being created on the cache drive, and when it tries to grow during the install it fills the cache drive completely and fails. vdisks are created sparse, which means they only take up the space that's actually in use inside, but appear to the VM as having the full defined capacity available. BTW, filling drives to 100% is a good way to corrupt the filesystem and lose all the data on the disk. Don't do that. When defining a VM, best practice is to size the virtual disk to only partially use the free space available. While it's perfectly valid and sometimes useful to "overprovision", it can lead to nasty consequences if you don't manage the free space carefully. For example, I routinely provision more than five 100GB VMs on my cache drive, which only has 500GB total capacity and generally stays about half full. Each VM thinks it has access to 100GB, when in reality it's only using 30GB or so of space. If any of the VMs filled near capacity and the cache ran out of space, they would all crash, so I monitor the free space and deal with it before it gets anywhere close to filling the cache. (There's a quick sketch after this list showing how to compare a vdisk's apparent size to the space it actually occupies.)
  4. "should be", but it usually ends up much slower.
  5. New config only erases data on drives assigned to parity slots, unless you change the desired format type. As long as you don't format any drives, all the data will remain on the drives you assign to data slots.
  6. @GyroDragona, hint, hint: development is much easier if the dev can actually use the software in question. Maybe pass the hat among your game group and see if you can get ich777 set up with a license or whatever he needs to use this? BTW, I don't know ich777 apart from seeing the work he does here. My opinions are my own, and it's my opinion that it's hard to code for something you can't personally test.
  7. Your maths may be correct, but I think you need to redo it with a different set of numbers. Subtract the base cost of running from both power equations to see how much it actually costs to transcode that file. Allowing for a base load of roughly 50W, you are using an extra 135W for 3.5 minutes, or an extra 28W for 25 minutes; the system pulls the same base load regardless (the arithmetic is worked through in the sketch after this list). I don't think the throttling operations are intelligent, they likely cause way more work than if you just let the CPU do its job. Power throttling is ONLY for heat control, not energy efficiency. Energy efficiency is gained by reducing the number of resources powered up constantly, as long as the system can properly bring them back online in a timely fashion. There are ways of turning off unused slots and controllers, you can use fewer high capacity drives for massive energy savings, and Unraid can spin down unused drives. You should use the fastest processor available in a specific die class, the highest density RAM chips, and the largest hard drives. That will allow the most work to be done with the least amount of electrical use. However... the amount you spend up front for the parts will likely never be recouped in energy savings over the life of the build, so you have to strike a balance between how much your time is worth vs. initial parts cost vs. long term energy use vs. overall environmental impact from producing new parts. Typically buying higher end means a better environmental impact, because it takes roughly the same amount of raw materials and labour to produce each physical unit, so the fewer units you consume over your lifetime the better. It's more money out of your pocket, but that's just the way it is right now. Environmentalism plus consumerism = $$$$. Toys, environment, money: pick any two. Bottom line, TDP is a very poor way to measure efficiency, as it was never meant to be used that way. Its only use is packaging concerns for heat load over time, with heat sink design.
  8. Pick an earlier tag from this list. https://hub.docker.com/r/linuxserver/grocy/tags?page=1&ordering=last_updated
  9. At the current time, it's all or none. There have been feature requests made to change that, but nothing so far.
  10. If you choose to encrypt your disks, be very sure that you have a backup of any important files. Encryption makes recovery from a corrupt filesystem practically impossible. Note, I did NOT say you are more likely to suffer corruption.
  11. Parsing the syslog is what I had in mind. Start a non-correcting check and watch the syslog for parity errors. Restart the check at the first error sector; if the error is identical, force the drives to flush any on-drive cache and restart again at the error. If the same sector errors again, run a correcting check for just that segment, then start another non-correcting check at the same spot. If the errors don't repeat on rechecks, don't correct. (A rough syslog-parsing sketch follows this list.)
  12. Why no sector number? My use case would be to repeat a failed sector and see if it still fails; if so, flush the disk buffers and try again. Perhaps we could figure out a way to better differentiate between a flaky data path (controller, RAM, drive) and a genuine bit error that should be corrected. If a correcting parity check could be initiated ONLY IF and ONLY ON sectors that repeatedly show the same bit error a specified number of times, it would make more sense to me than blindly writing possibly random parity. Yes, that would make correcting checks take much longer, but only on the incorrect sectors. You could also put in logic that errors out the correcting check if 100% of a configurable number of sectors were consistently wrong, and prompts to do a parity build instead. As long as parity is not shown as fully checked when there are possible out-of-band modifications, I think you are fine. IOW, I wouldn't want to think parity had been fully verified if the server was powered down between partial passes. If the array is stopped, the parity check percentage should reset. If you have it set to incrementally check 25% each day, but stop the array and restart it, I don't think that should count as having checked 100% of parity until you have 4 consecutive checks that weren't interrupted by an array stop. This is more of a reporting and array confidence thing than a data loss scenario, though. I can see someone thinking it would be ok to only check 10% of parity each month and expect a flawless rebuild, when in reality that could mean they haven't fully checked parity in almost a year.
  13. Probably. The way Unraid uses parity protection, your data is only as safe as your weakest drive. All the sectors of all the drives are needed when reconstructing a failed drive, so having a known bad drive included in the array, even though it's still "working" at the moment, is a bad idea. Imagine this scenario... you purchase a pair of brand new drives, one for parity, one for data1. You decide that your gaggle of old drives is good enough, they haven't completely died yet, and what are the chances of two dying at the same time, right? Parity will allow you to rebuild to a new drive if one of the old ones fails, so you feel safe. Until one of your brand new drives decides to fail, and that one weak old drive decides it can't handle the stress of rebuilding, so you just lost both drives' worth of data. The parity check is a good tool to keep up with the health of your drives; if something feels off, like it did with that failing drive in place, you need to figure out why. If a parity check won't complete error free in a timely fashion, a rebuild of a failed drive won't either. Also... not all drive issues are really disk failures, MANY times it's connections or power.
  14. Honestly, the best answer is for lsio to start rolling a custom build with a consistent tag that has base OS security updates as necessary but keeps the 5.14.23 version of the Unifi software. I know this is extra work, but since unifi can't be bothered to do proper releases with testing and such, we need a container image that stays as current as possible while holding at whatever unifi app version is deemed worthy by a majority of actual users. Continuing to use a container with an outdated base OS layer could become an issue, especially since some people can't seem to figure out how to properly run this in bridge mode, which keeps port exposure to the minimum functionality required. Currently we have LTS and Latest as rolling upgraded base OS images, where LTS is currently holding at 5.6.42 and Latest tracks new unifi releases. I propose adding another, "Community_Stable", build which tracks the consensus unifi version and keeps the base OS layers updated. When a newer version is deemed worthy and can be migrated to without breaking things, the app version can roll from 5.14.23 to the new community blessed version. When that change is about to happen, you could then specify the last 5.14.23 build tag and hold there if you wanted. I recently went through the process of moving from LTS to 5.14.23, and it was a little stressful, as it wasn't clearly documented which minor versions you needed to step through to get from A to B. Hopefully we can work to eliminate that sort of thing going forward.
  15. That's the issue. Use a newer tag. https://hub.docker.com/r/linuxserver/sonarr/tags?page=1&ordering=last_updated Look to the far right of the build you want to use to find the correct tag after the docker pull.
  16. Depends on your situation. I personally run OpenVPN hosted on a pfSense firewall VM. If you don't have a firewall / router with decent CPU power, you would probably get better performance with wireguard hosted on Unraid. That question doesn't have a clear-cut answer; there are so many variables, including the range of clients that you need to use.
  17. 95% of the UAPs under my management are outside this specific LAN, so I choose easier access over setting up a large number of router-based VPNs. I guess technically I could keep the management interface portion closed, but like you said, it's a choice. I just passed through all the ports needed for the UAPs to talk to the controller and also the ports needed to manage the controller and guest portal. I figure if UniFi publishes a list of ports to open, they are fairly confident it's ok to open them given proper password protocols. https://help.ui.com/hc/en-us/articles/218506997-UniFi-Ports-Used
  18. Yep. I assume the built in authentication with a strong password to be secure enough. Do you have evidence or hearsay to the contrary?
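
Regarding the sparse vdisk sizing in post 3, here is a minimal sketch of how you might compare a vdisk's apparent size to the space it actually occupies on the cache drive. The path is purely hypothetical; substitute the location of your own image file.

```python
import os

# Hypothetical vdisk location; adjust to your own share layout.
VDISK = "/mnt/cache/domains/example-vm/vdisk1.img"

st = os.stat(VDISK)
apparent_gb = st.st_size / 1e9          # capacity the VM believes it has
actual_gb = st.st_blocks * 512 / 1e9    # blocks really allocated on the cache drive

print(f"Apparent size: {apparent_gb:.1f} GB")
print(f"Actual usage:  {actual_gb:.1f} GB")
print(f"Overprovisioned headroom: {apparent_gb - actual_gb:.1f} GB")
```

Summing the actual usage of every vdisk against the cache drive's free space is the figure to watch when overprovisioning.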
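
To make the arithmetic in post 7 concrete, here is a quick check that subtracts the assumed ~50W base load and computes only the extra energy each transcode consumes. The wattage and duration figures come from that post; everything else is plain arithmetic.

```python
# Extra (above base load) energy used by the two transcode scenarios from post 7.
def extra_energy_wh(extra_watts: float, minutes: float) -> float:
    """Energy above base load, in watt-hours."""
    return extra_watts * minutes / 60

fast = extra_energy_wh(135, 3.5)   # unthrottled run: ~7.9 Wh extra
slow = extra_energy_wh(28, 25)     # throttled run:  ~11.7 Wh extra

print(f"Unthrottled: {fast:.1f} Wh above base load")
print(f"Throttled:   {slow:.1f} Wh above base load")
# Throttling stretches the job out and ends up using more extra energy, not less.
```

This is the same point made in the post: once the base load is subtracted, the throttled transcode costs more energy, not less.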
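
For the targeted recheck idea in posts 11 and 12, here is a rough sketch of the syslog-parsing step only. The regex is an assumption: the exact wording of parity error lines varies between Unraid releases, so verify it against your own syslog before relying on it.

```python
import re
from collections import Counter

# Assumed pattern for parity error lines; confirm the real wording in your syslog.
PARITY_ERR = re.compile(r"parity incorrect.*?sector[= ](\d+)", re.IGNORECASE)

def error_sectors(syslog_path: str = "/var/log/syslog") -> Counter:
    """Return a Counter mapping sector number -> times it was reported bad."""
    counts = Counter()
    with open(syslog_path, errors="replace") as log:
        for line in log:
            match = PARITY_ERR.search(line)
            if match:
                counts[int(match.group(1))] += 1
    return counts

if __name__ == "__main__":
    # Sectors that show up across multiple passes are candidates for a targeted
    # correcting check; one-off hits point more toward a flaky data path.
    for sector, hits in sorted(error_sectors().items()):
        print(f"sector {sector}: reported {hits} time(s)")
```

This only covers the bookkeeping half; restarting a check from a specific sector and flushing drive caches would still be separate steps.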