Everything posted by JonathanM

  1. Do you have a strong desire to see the server as a single mount point on your client machines, or would you rather see different categories, with possible permission differences where 1 person may only have access to TV shows and Movies, and another user have access to the documents share?
  2. As long as your unattended shutdown can happen in 2 minutes or less, you should be fine. I'm not sure I found that exact unit, but an 850VA APC I found specs on is rated for 6 minutes at 300W, so if you wait 1 minute before starting shutdown, you need to be done and powered off before the 3 minute mark. It listed typical recharge times of 12 hours, so if you draw down to 50% you need to wait at least 6 hours before waking things back up. That's running a little close for my taste, but everyone's risk tolerance is different.

     I recommend dry runs to see how things act:
     - Keep the server plugged in where it is, so it doesn't lose power during the test. Find a way to switch the input power to your new UPS without unplugging it: switched outlet, power strip, whatever. It's vitally important the ground stays connected the entire time.
     - Plug the USB communication lead into your server, and verify it is detected and monitoring properly.
     - Find roughly 300W of load, like a small space heater on low, or something similar. Old school halogen floodlamps are good too. Plug that into the power protected outlet of the UPS, and verify the server is showing the load in the UPS monitoring.
     - Turn off the supply to the UPS, and observe the server. Don't intervene, just let it think the power is down. Time how long it waits to start shutting down, and how long the process takes. Verify the dummy load on the UPS has stayed stable: no blinking lights or abnormal heater fan speed variations.
     - When the server powers off, turn off the dummy load. Leave the power to the UPS turned off for a period of time, to simulate a prolonged outage.
     - Power up the UPS if it turned off, turn on the dummy load, boot the server, and see how the stats look for charge percentage and such.
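     To sanity-check the timing budget before the dry run, the arithmetic can be sketched like this (the 6-minute/300W runtime and 12-hour recharge figures come from the spec sheet mentioned above, not from measurement; substitute the numbers for your own unit):

     ```python
     # Rough UPS timing budget from the figures above:
     # rated runtime 6 min at 300 W, 1 min wait before shutdown starts,
     # and a typical 12 h full-recharge time.

     rated_runtime_min = 6      # runtime at 300 W from the spec sheet
     wait_before_shutdown = 1   # minutes of outage before shutdown begins
     safety_margin = 0.5        # only plan on half the rated runtime

     shutdown_budget = rated_runtime_min * safety_margin - wait_before_shutdown
     print(f"Shutdown must finish within {shutdown_budget:.0f} minutes")

     full_recharge_h = 12
     drawdown = 0.5             # fraction of capacity used during the outage
     recharge_wait_h = full_recharge_h * drawdown
     print(f"Wait at least {recharge_wait_h:.0f} hours before the next test")
     ```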
  3. Depends on the equipment you have. The most direct is WOL hosted in your router, combined with a VPN hosted in your router.
  4. For the purposes of this type of conversation, there's no need to bring up Unraid at all. If they ask, they are just Linux XFS or BTRFS formatted drives, whichever applies. If you are trying to use the drives to recover other drives with the help of parity, the conversation gets a little more complex, but if you say you need a binary perfect copy of the drive for possible deleted file recovery, that would cover it.
  5. If you are serious, I wouldn't mind seeing if I could get any response out of them. Message me if you want to arrange something.
  6. Since the app controls the listening IP, you should simply change the IP in radarr itself.
  7. Yep. Power regulation for an unattended machine generally needs about double that, for a modest server. The little (insert western currency here)100 units are generally not up to the task of keeping a server healthy. The local big box warehouse club sometimes has had 1500VA units on sale for US$99, but I'm afraid inflation has moved that marker significantly upward.
  8. Hardware RAID is difficult to integrate with Unraid; many of the error reporting and monitoring functions can't be passed through to Unraid. Some RAID cards can be reflashed to IT mode to allow use as a standard HBA, but it's easier and cheaper to just use a model that defaults to HBA mode.
  9. Two different things entirely. The data that was on the physical drive is being actively recreated using the rest of your data drives and your parity drive. You can interact with the emulated disk just like the physical disk, which is no longer responding and is likely completely dead based on what you've posted. At the moment you are at risk of losing all that emulated data, plus the data on any other disk if it fails. You need to rebuild the emulated disk onto a tested good physical disk as soon as possible to get the array back to a healthy state, where a drive failure can again be emulated just as it is right now.
  10. Have you measured your power draw at the wall when you start a parity check? You need to make sure whichever model you choose can comfortably handle your max draw for several minutes, in case there are processes running that take some time to exit cleanly. Find your max VA draw, then compare that to the time / power curve provided for the UPS models you are considering. Be mindful that in order to be kind to the UPS batteries, you need to draw down to less than 50% capacity. Also keep in mind recharge times, typical is 10 to 20X outage duration. So if you run on batteries for 5 minutes, you may need nearly 2 hours to be fully charged for the next event, if you have multiple outages in rapid succession.
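     A rough worked example of that sizing pass (the 600 VA draw and 5-minute shutdown time here are made-up figures; measure your own at the wall):

     ```python
     # Hypothetical UPS sizing pass using the rules of thumb above.
     measured_va = 600          # example max draw during a parity check
     runtime_needed_min = 5     # time needed to shut down cleanly

     # Plan on using no more than half the battery capacity, so the
     # runtime curve must show at least 2x the needed runtime at your draw.
     required_runtime_min = runtime_needed_min / 0.5
     print(f"Look for >= {required_runtime_min:.0f} min runtime at {measured_va} VA")

     # Recharge is typically 10-20x the outage duration.
     recharge_min = (runtime_needed_min * 10, runtime_needed_min * 20)
     print(f"Expect roughly {recharge_min[0]}-{recharge_min[1]} min to fully recharge")
     ```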
  11. If you are talking about containers, then the issue is probably path mapping. Containers only see what they have mapped, like a totally separate computer would. If you map /mnt/user/downloads to /data, then the container sees everything in /mnt/user/downloads in its /data folder. It can't directly see anything in /mnt/user, only the sections that you map, and then only at the paths you designate on the container side. If multiple containers need to see the folders identically, then the mappings need to be identical for those folders. Mapping /mnt/user/downloads to /downloads in one container and /data in another won't work; the files will be there, but one container will see them in /downloads and the other in /data.
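     As a sketch of consistent versus inconsistent mappings (the container names are hypothetical, and the trailing `...` stands in for the rest of each container's options):

     ```shell
     # Consistent: both containers map the same host folder to the SAME
     # container path, so a path recorded by one app resolves in the other.
     docker run -d --name app-a -v /mnt/user/downloads:/data ...
     docker run -d --name app-b -v /mnt/user/downloads:/data ...

     # Inconsistent: same host folder, different container paths.
     # A path like /downloads/file.mkv saved by app-a won't exist in app-b.
     docker run -d --name app-a -v /mnt/user/downloads:/downloads ...
     docker run -d --name app-b -v /mnt/user/downloads:/data ...
     ```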
  12. Did you read the reviews on that page? A large number of them complain about it not being properly flashed.
  13. Did you flash it to 9211, or was it supposed to be flashed already?
  14. Because there is no single best way to do it. There are multiple competing strategies on what works best, so my advice is keep researching and finding out why there are different opinions and find what works for your specific setup.
  15. You may get better advice from someone else, but if it were me, I'd back up everything currently on the pool, remove both devices from the current pool, blkdiscard both /dev/* members, being VERY careful to select the correct disks, then recreate 2 pools how you want them. Given how fiddly BTRFS can be, add encryption issues on top, and I'd be making backups of backups when using an encrypted BTRFS setup even on a normal basis, nevermind when trying to reconfigure things. If something goes wrong, file system recovery becomes almost impossible when encryption is involved.
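     A sketch of that sequence (sdX and sdY are placeholder device names; triple-check them against the serial numbers shown on the Main page before running anything, because blkdiscard has no undo):

     ```shell
     # Identify the pool members first; confirm size and serial number.
     lsblk -o NAME,SIZE,SERIAL,MOUNTPOINT

     # With backups verified and both devices removed from the pool:
     blkdiscard /dev/sdX   # discards the whole device, destroying all data
     blkdiscard /dev/sdY
     ```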
  16. 1. Unraid tracks drives by serial number, not physical location, so you can freely move drives to different ports; Unraid won't change how they are assigned to their logical slots. Put the drives in any physical slot that makes sense to you.

     2. Since Disk4 (E) has no data AND you are building a new parity drive, nothing particularly fancy is needed:
     - Take a screenshot or some other record of which serial numbers are in which logical slots.
     - Power down, physically remove the old parity drive and the 1TB drive, install the new parity drive, rearrange the drives the way you want in the case, and power back up.
     - Tools, New Config, preserve current assignments: ALL, apply. Go to the Main page, drop down the parity slot, VERIFY the serial number of the new parity drive, then CHECK IT AGAIN.
     - B and C can stay where they are. Since D is empty I'd leave it out along with E; you can add D back later if you really need the space, but it's better to leave it out: less power and less risk.
     - Verify you have the correct serial numbers for the new parity, disk1 and disk2. At this point it should be the new drive in parity, B in disk1, and C in disk2.
     - Start the array and build parity. After parity builds, do a non-correcting parity check. If you have zero errors, you can proceed.
     - Stop the array and power down. Replace the 2TB disk with the old 4TB parity disk, power up, select the correct disk for the disk2 slot, and rebuild. Do another non-correcting check.

     At this point, you should have your new 4TB parity drive, the original 4TB disk1, and the old parity drive as disk2 with all your data and 2+TB free space on disk2. If you need the space NOW, then add back the 2TB you designated D, and optionally the 2TB you just replaced with the 4TB as well. Keep in mind that each extra spinning drive is a possible point of failure, since parity requires ALL remaining data drives to be read perfectly end to end to rebuild a single failed drive, even if some of those data drives don't hold any data yet. You would be better off not putting those older drives back in the array if you can help it.
  17. Bad RAM is one cause of garbled display, especially with iGPU. Download the latest version of memtest86 to a different USB and run a few passes and see what it looks like.
  18. Unclear what you are asking, specifically. Did you google avahi and read what it's used for?
  19. The container has this https://discord.com/invite/TnwYRPKg72 as the official support location for this container. Maybe ask there?
  20. Too many variables to make a blanket recommendation for every situation. Do what works for you.
  21. Try a different browser or incognito mode.
  22. Personally I like to run a little closer, within reason. I like to keep between 1X and 2X the size of my largest data drive free, so in your case with 6 equal data drives I'd want roughly between 67% and 83% used, and I add or upgrade drives to get back into my sweet spot. Extra spindles = heat, energy, failure risk.
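     For the six-equal-drive case above, the arithmetic works out like this:

     ```python
     # Free-space target: keep between 1x and 2x the largest data drive free.
     n_drives = 6  # equal-size data drives, as in the example above

     # Keeping 2 drives' worth free -> lower bound on used space
     used_min = (n_drives - 2) / n_drives
     # Keeping 1 drive's worth free -> upper bound on used space
     used_max = (n_drives - 1) / n_drives

     print(f"Target used range: {used_min:.0%} to {used_max:.0%}")  # 67% to 83%
     ```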
  23. Do you have notifications set up? Do you get daily status reports from the server? Can you act on any warnings promptly? If yes, cold spare. If no, 2nd parity, and work on the notifications aspect. A second parity could be useful to buy some time in certain failure modes, but with only 6 data drives it feels wasteful to me to keep it powered.
  24. When was your last successful run of memtest86? BTRFS errors are often an indicator of memory issues.
  25. No personal experience with your proposed combo, but this thread seems to indicate you should be good to go. The OP used a different card and disk shelf, but others chimed in on the thread, some with plain LSI cards and the model you posted, so it looks good to me. https://forums.unraid.net/topic/89444-how-to-configure-a-netapp-ds4243-shelf-in-unraid/?do=findComment&comment=845904