Everything posted by JonathanM

  1. Probably doesn't matter, except it may interfere with stopping the array. Personally I'd try stopping the array and see what happens. If it won't stop, look at the process list in the console and kill any dd processes.
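     A quick way to find and deal with them from the console, as a minimal sketch (the PID shown is just a placeholder):

     ```
     ps aux | grep '[d]d'    # the [d] bracket trick keeps grep itself out of the list
     kill 12345              # replace 12345 with the actual PID from the line above
     # or in one step, matching the exact process name:
     kill $(pgrep -x dd)
     ```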
  2. If you had disk shares enabled, then yes, the disk share shows up emulated. The contents are emulated regardless of the status of disk shares, though; they show up exactly as if nothing had happened, assuming the drive dropped cleanly and parity was accurate at the time. If parity is out of sync when a disk drops, the emulated contents will be corrupt. Zero-error parity checks are critical.
  3. If the data is the sort that's not very important right now, but you could see a future where you want to spend the time to dig through it, then pull the drive and replace it. Keep the original, copy the usable files off the rebuilt copy in the array to other disks, then format the rebuilt disk as XFS to start the migration process.
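     A sketch of the copy step, assuming the rebuilt drive is disk3 and the destination is disk5 (substitute your actual disk numbers):

     ```
     # copy everything readable from the rebuilt disk to another array disk,
     # preserving timestamps and permissions
     rsync -av /mnt/disk3/ /mnt/disk5/
     ```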
  4. Don't do that. We won't get notified if you update an existing post. Create a new post in this thread with the diagnostics attached.
  5. It would probably be better to simply remove the unneeded drives and rebuild parity than to struggle with this method. Are you worried that one or more of the drives you plan to keep is failing, and that you need to keep parity valid in case it dies?
  6. Because manufacturers can't be bothered to be consistent with how they use and format SMART, reading the reports is a cross between analysis and divination. If you come across something that looks off to you, attach the report and ask for help interpreting it.
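     To pull a full report from the console for posting, something like this (replace sdX with the actual device):

     ```
     smartctl -a /dev/sdX
     ```

     Attributes 5 (Reallocated_Sector_Ct), 197 (Current_Pending_Sector), and 198 (Offline_Uncorrectable) are the usual first places to look, but the interpretation still varies by manufacturer.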
  7. Run a parity check before and after the operation is completed. In either case, anything other than 0 errors is an immediate show stopper. Also check the SMART reports on all drives before and after.
  8. No. If the container has its own IP, then all ports are open and available. You need to change the listening port in the app itself to 80, which may not be possible if it's running non-privileged, since binding ports below 1024 normally requires root.
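     To illustrate the difference, a rough sketch (the image name, network name, and address are placeholders):

     ```
     # Bridge networking: the host remaps ports, so the app can keep listening on 8080
     docker run -d -p 80:8080 myapp

     # Dedicated container IP: no port mapping happens, the app itself must bind port 80
     docker run -d --network br0 --ip 192.168.1.50 myapp
     ```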
  9. I doubt I can help with the issue, but just to clarify: the server stays on and starts the shutdown process immediately, right? Or are you saying the server loses power as well? If the power is staying on, you may be able to get around the issue by changing the battery level and runtime percentages to 0, and putting a rational amount of time in the time-on-battery field. I prefer to shut down based on how long the power has been out anyway. I don't like running down the batteries on the UPS, and if the power is out for more than a minute or two, it's probably down for a couple of hours, so you may as well start the shutdown procedure as soon as practical.
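     Unraid's UPS support is apcupsd under the hood; the matching apcupsd.conf directives would look something like this (120 seconds is just an example value):

     ```
     BATTERYLEVEL 0   # 0 disables "shut down when charge drops below N percent"
     MINUTES 0        # 0 disables "shut down when estimated runtime drops below N minutes"
     TIMEOUT 120      # shut down after 120 seconds on battery
     ```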
  10. Unraid parity has no concept of files; it only reconstructs a missing device in its entirety. https://wiki.unraid.net/Parity#How_parity_works
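      A toy illustration of the principle using single bytes; real parity does this bit-for-bit across every sector of every drive:

      ```
      # bytes at the same position on three data drives
      d1=$(( 0xA5 )); d2=$(( 0x3C )); d3=$(( 0x0F ))

      # parity is the XOR of all the data drives
      parity=$(( d1 ^ d2 ^ d3 ))

      # if drive 2 dies, XORing parity with the survivors recreates its byte
      printf 'recovered d2 = 0x%02X\n' $(( parity ^ d1 ^ d3 ))   # prints 0x3C
      ```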
  11. Replace the drive with a known good drive of equal or larger size, up to the size of your smallest parity drive. Logical drive slot removal is not implemented in unraid; it must be done manually. Only adding or replacing drives is automatic.
  12. Yep. What I would do is run through the scenario of what sequence you would shut things down in if you knew power was going to be cut dead in exactly 30 minutes. Keep in mind the whole network infrastructure must stay powered the entire time until every managed device is successfully shut down, and the server broadcasting the info must stay up until all those devices have committed to shutting down. The cool thing about managing things this way is that it's easy to test your plan: plug the signalling UPS into a power strip so you can cut its power without unplugging it, transfer all the loads on it to another power source, turn off the power to the UPS, and watch your automation in action. Hopefully it all gracefully shuts down while sending you status messages along the way, but if it doesn't, you can troubleshoot. With a single broadcast controlling everything, it's easy to stagger shutdowns so each process can be monitored.
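      A bare-bones sketch of what the staggered controller could look like; the host names, timings, and use of ssh are placeholders for whatever your automation actually uses:

      ```
      #!/bin/bash
      # runs on the machine watching the signalling UPS, triggered at power loss
      sleep 180                            # power loss + 3 minutes
      ssh vmhost 'shutdown -h now'         # non-critical machines first
      sleep 180                            # power loss + 6 minutes
      ssh backupserver 'shutdown -h now'   # secondary servers next
      sleep 240                            # power loss + 10 minutes
      ssh mainserver 'shutdown -h now'     # main server last
      ```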
  13. I would not use time or capacity remaining as my benchmark for shutdown. Unless you have alternate major power available, like a backup generator, it's much better for the equipment to get things shut down in an orderly fashion ASAP after you determine it's not a minor blip. Around here, if the power is out for more than a minute or so, it's going to be out much longer than my battery backups can handle (roughly 1 hour). Here is how I work things: all my hosted VMs and various other non-critical machines start shutdown at power loss + 3 minutes. The servers start a staggered shutdown based on priority about 3 minutes later. The main server starts shutdown at power loss + 10, and hopefully everything is shut down in an orderly fashion by power loss + 20. That leaves roughly 60% capacity in my batteries, which for the typical SLA batteries in my UPS is about optimal for long life. Running those types of batteries below 30% remaining or so is very hard on them. Also, when the power returns, I'm confident I can start the equipment back up pretty much right away; if you drain the batteries below 50%, you may not have enough charge if the power goes out again. It takes most battery backups about 10x the outage time to recharge, so if you run on batteries for 30 minutes, you probably won't be at full charge until 5 hours or more later.
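      The recharge rule of thumb as arithmetic; the 10x figure is the rough estimate from above, not a precise spec:

      ```
      outage_min=30
      recharge_min=$(( outage_min * 10 ))                        # ~10x the outage time
      echo "about $(( recharge_min / 60 )) hours to full charge" # about 5 hours
      ```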
  14. Not really. You can set up all your critical services on a single SSD assigned to the cache slot. You don't mention already having one, and they are relatively cheap right now. I know you said no extra investment, but an SSD cache is rather critical for a smooth experience with unraid. So, to get unraid up and running, all you need is an SSD around 250GB+ and a regular hard drive, size not particularly important. Assign the SSD as cache and the HDD as disk1, and you are off to the races as far as setting up a test environment. All your current drives with important data can stay disconnected while you play.
  15. Have you actually visited that link? It tells you if you are not properly connected through privoxy, and gives instructions on how to fix it. I think that link is the ideal thing to put there.
  16. @binhex, sounds to me like that address is what should be put in the WebUI field.
  17. Not sure why that's confusing. Unraid only directly manages the parity-protected array and the cache pool. You can use any number of additional drives as individual or manually pooled devices; they just won't participate in user shares. You can still manually share them, write to them, whatever.
  18. Don't do that. If you really need to switch to a newer USB stick because you are afraid the old one is failing, I recommend the Kingston SE9 series. There has been a spate of USB failures recently with tiny USB drives; my speculation is that the heat buildup in a tiny drive is too much and prematurely kills them. If you feel you need a tiny drive to reduce its vulnerability to being hit by physical objects, a much better strategy is to get a USB header adapter that allows you to mount the drive inside the case, totally out of the way. Transferring the license is easy now: just boot the new stick with the old key file, and it will walk you through an automatic key transfer process. That can be done once a year without any hassle; sooner than a year and you will need to get Limetech involved to manually reissue a key.
      Your points 2, 3, and 5 are going to be more complicated, maybe deserving-their-own-topic complicated.
      2. https://nextcloud.com/groupware/ I run a Nextcloud container on my unraid, but I haven't personally attempted to set up the contacts / calendar / email.
      3. I googled a little and came up with https://www.turnkeylinux.org/domain-controller which says they provide a ready-to-run virtual disk that can be used with KVM/QEMU, which is what unraid uses for its VM implementation. I have ZERO experience trying to implement something like that; I run all Linux-based daily driver stuff.
      5. I run a pfSense VM, which provides all that and much more. It's complicated though, and as you already know, you lose your network when the server is down unless you make other arrangements for failover.
      The rest of your issues are pretty straightforward; you've probably already got them covered with minimal searching.
  19. Or, do it the non-destructive way and cover the pin(s) with Kapton tape, which is made for this type of application.
  20. If you can't get the existing machines to talk directly to the UPS, you do have other options. Perhaps you could use a raspberry pi to talk directly to the UPS and then relay the information over the network, or you could get a very cheap UPS that connects to the server over USB, use that for your shutdown timing, and not plug anything critical into its power outlets.
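      With apcupsd, the relay setup could look roughly like this; the IP address is an example:

      ```
      # on the raspberry pi plugged into the UPS via USB (apcupsd.conf):
      UPSCABLE usb
      UPSTYPE usb
      NETSERVER on               # publish UPS status on the network
      NISPORT 3551

      # on each machine that should react to the UPS (apcupsd.conf):
      UPSCABLE ether
      UPSTYPE net
      DEVICE 192.168.1.10:3551   # the pi's address and NIS port
      ```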
  21. Best to ask on the memtest forum.
  22. The cache pool is somewhat misnamed, as the name is a holdover from early iterations of unraid. It would be more accurate to call it an application or VM pool; the caching function is seldom used once you've filled out a full complement of drives. It's more of a higher-speed single-volume space, versus the parity-protected, slower, individually accessible array drives.
  23. The cache pool can run any valid BTRFS RAID level, so it is protected as well. Some BTRFS RAID configurations are more stable than others, due to limitations in BTRFS itself, not anything specific to unraid.
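      For example, checking the current profile and converting a two-device pool to RAID1, one of the more stable choices (the path assumes unraid's default cache mount):

      ```
      btrfs filesystem df /mnt/cache     # show current data/metadata profiles
      btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
      ```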