Everything posted by JonathanM

  1. 30 in the parity protected array. The cache pool can hold 24 more.
  2. AFAIK, the login attempts can continue until the syslog fills up RAM and crashes the server. The webGUI MUST be secured behind some secondary security; it must not be directly exposed to the internet or to a network where the devices can't reasonably be trusted. (A quick log-space check is sketched after this list.)
  3. Only if you write new files to the user share. Files can be written directly to the cache drive by several different methods. What type of files are sitting on the cache drive that you think should have been written to the array?
  4. Because of how unraid implements parity, staggered spinup can't be implemented without major design changes, which are unlikely to be made. When you first power up your system, the controller is in charge of spinup and can manage it. However, once unraid is in charge, the drive controller no longer has any say in when drives are spun up or spun down. If you enable spin down for drives that haven't been accessed, which is one of the major selling points of unraid, then when unraid gets a disk access request it will spin up the drive ASAP. This is all fine and dandy when your power supply is properly sized for the number of drives that may need to spin up at once, which is ALL of them. Picture this: your array is idling, only one disk is spun up and reading, and a read error occurs. The first thing unraid does is recreate the data that should be in the spot that errored out from parity. That means ALL the parity array drives are immediately asked to spin up. If your power supply can't handle that, you will get multiple read errors, and possibly multiple disabled drives. So, here's the TL;DR: you either need to size your power supply to spin up all your drives at once without any stress or issues, OR disable spindown so the only startup surge is at initial power on and can be managed by your controllers. Since most people who use unraid want to spin down unused drives, the normal advice is to size your power supply appropriately. (A rough surge estimate is sketched after this list.)
  5. That includes /mnt/cache to /mnt/user as well. Some people don't realize /mnt/cache is a disk share.
  6. +1. I use the docker CLI commands to automate this; it's super easy to script. My home theater VM is scripted so that when it's in use, this container is shut down (a minimal script sketch is included after this list).
  7. None of the above. All port forwarding has to be done inside the VPN. You must figure out how to get a port open using the torguard settings, then set that port in deluge's settings. Nothing should be forwarded from the container.
  8. It would be a good idea to do some research on that specific server configuration and see if it was set that way for a reason. It's possible you could run into overheating if that configuration doesn't have the airflow or ducting rated to move that much heat.
  9. BTRFS is "different" when it comes to pool sizes and calculating disk space. Try plugging your numbers in here and see what you get: https://carfax.org.uk/btrfs-usage/ (There's also a quick on-box check sketched after this list.)
  10. Is it possible you have redundant or multipath connections to the shelf? If the same device shows up twice, unraid isn't going to be happy. (A quick way to spot duplicates is sketched after this list.)
  11. Given the totality of what you are trying to do, I strongly recommend leaving the old config completely alone and just purchasing another pair of reasonably sized drives, probably 8TB. Keeping your old drives as backups, having much faster and newer drives in the new system, and not dealing with possible partition layout issues makes much more sense than juggling data around on the current drives and risking data loss.
  12. That was what Benson was addressing. All drives damaged at once implies the power leads were reversed, and yes, that would permanently kill the drive circuit boards. If the drives work in another system, then that's not the issue. A couple of different scenarios are fairly common: 1. Modular leads from a different power supply are used; there is no standard, so it's possible that what physically fits is electrically wrong. 2. The user jams the 4 pin power connector in backwards; the keyed connectors aren't rigid enough to prevent forceful wrong insertion. Since you say the drives are working in another system, that's not likely the issue here.
  13. He's talking about a mistake hooking up the power, switching the 12V and 5V leads so the drives are fried. Have you checked to make sure the drives are recognized in another machine? I'm not talking about reading the data, just checking to be sure they show up as hard drives.
  14. Pretty sure @Squid's jealous, as his tagline was "What do I have to do to get a warning point around here?" or something like that for a couple weeks.
  15. After nearly a couple thousand conversions, my history is not working any more. I tried to reset things by renaming the history.json and completed jobs folders, but that just made unmanic upset, and no current history showed up. Is there a way to start things over properly without blowing it away and setting up from scratch? Or should I just wait for the next beta where history is rolled into the database?
  16. I would say put it in your signature, but with last year's forum update sigs are pretty well screwed, so no joy there.
  17. Just to make this clear, unraid has the latest version of memtest that is licensed for free redistribution. The new version on that website isn't available for unraid to package on the boot drive.
  18. When it arrives, I would contact LSI to verify authenticity. I suspect it's a counterfeit card.
  19. I think it's entirely possible that you ran into a corner case, where a failed second parity drive is assumed to be replaced by one the same size or larger, regardless of the largest data drive. Maybe restarting the server with valid single parity would have reset it, and simply starting with no drive assigned wasn't enough. Diagnostics collected immediately after the event would have been valuable. If 6.8.x rc wasn't hot and heavy right now, I'd ping Tom and see if he could reproduce it. Maybe after things calm down and 6.8 is out this should be revisited, to test if it's possible to reproduce. Should be pretty simple: build an example STATE A array as you outlined in your first post, swap parity1 for a larger drive, kill power to the hot swap bay for parity1 to simulate a drive failure, then reinsert the original STATE A parity1 drive and see what happens.
  20. Keep in mind the linux and open source community is playing catch up with the new hardware, so if it doesn't work right now, there is a good chance it will work in a few months. This is an ongoing thing, as hardware manufacturers really don't care if their hardware isn't compatible with linux, the only thing they bother with is windows for their consumer grade stuff.
  21. Problem is, it only affects a subset of hardware, so there are thousands of unraid installs working fine. Every time there is a kernel, hardware, or driver change, it can take some time to work out the issues that come up. The workaround is indeed to use a version of the software that doesn't cause issues. Limetech makes every effort to ensure everything works perfectly for everybody, but that ideal will never be reached, because the goalposts are constantly moving.
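
Re item 2: a quick way to see whether a flood of login attempts is eating your log space. This is a minimal sketch and assumes an unraid-style setup where /var/log is a RAM-backed tmpfs.

```bash
# Show how full the RAM-backed log filesystem is (on unraid /var/log is a tmpfs)
df -h /var/log

# See which log files are taking the space; a rapidly growing syslog is the symptom described above
du -sh /var/log/* | sort -h
```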
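Re item 4: a back-of-the-envelope way to estimate the simultaneous spinup surge. The 2 A per drive figure on the 12 V rail is an illustrative assumption, not a measurement; check your drives' datasheets for the real spinup current.

```bash
# Rough spinup surge estimate (illustrative numbers only)
DRIVES=12        # number of drives that could be asked to spin up at once
SURGE_AMPS=2     # assumed 12 V spinup draw per 3.5" drive -- check your datasheet
VOLTS=12

echo "Peak surge: $(( DRIVES * SURGE_AMPS )) A (~$(( DRIVES * SURGE_AMPS * VOLTS )) W on the 12 V rail)"
```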
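Re item 6: a minimal sketch of the kind of script mentioned there, assuming a libvirt VM and a docker container; the names "HomeTheater" and "my-container" are placeholders, not anything from the original post.

```bash
#!/bin/bash
# Stop a container while a given VM is running, start it again once the VM is off.
# VM and container names are placeholders -- substitute your own.
VM="HomeTheater"
CONTAINER="my-container"

if [ "$(virsh domstate "$VM" 2>/dev/null)" = "running" ]; then
    docker stop "$CONTAINER"
else
    docker start "$CONTAINER"
fi
```

Run it on whatever schedule suits you, e.g. from cron.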
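Re item 9: besides the calculator linked there, the pool's own accounting can be read directly. A minimal check, assuming the pool is mounted at /mnt/cache:

```bash
# btrfs's own view of allocated vs. free space, including the RAID profile overhead
btrfs filesystem usage /mnt/cache

# Shorter per-device summary
btrfs filesystem show /mnt/cache
```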
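Re item 10: one way to spot a disk that is visible over two paths is to compare serial numbers. A minimal sketch:

```bash
# List whole disks with their serial numbers; the same serial appearing under
# two different device names suggests a redundant/multipath link to the shelf
lsblk -d -o NAME,SERIAL,SIZE,MODEL | sort -k2
```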