Everything posted by JonathanM

  1. You either need to improve cooling or change the warning threshold. It does you no good to get repeated warnings about something you are not going to fix; it just keeps you from acting on other things that need to be fixed ASAP. If you can't improve cooling, just make sure the temperature swing for any given drive is as small as it can be. Don't allow drives to cool down to room temperature and then repeatedly get hot; it's better to keep things constantly warm than to allow wild swings. In any case, you must configure the warning temps so they only alert when there is an actual failure, like a fan going bad or excessive dust buildup. Allowing the system to keep crying wolf is very bad.
  2. The SATA ends of the adapters, not the 4-pin ends, are the real issue there: lots of high current with wires very close together. Molding errors, or just manufacturing tolerances drifting over time, can cause shorts that lead to excessive heat and possibly fire. There is less risk with a 4-pin-to-SATA adapter than with a SATA-to-SATA splitter; SATA power connectors are just a poor design all around. But yes, getting connectors from a manufacturer with good QC is key, and being able to see each wire at the SATA end is a bonus (IDC vs. molded). Molded on both ends of a SATA splitter is just asking for trouble. I'm not saying all molded cables are bad, it's just much easier to ignore or hide bad manufacturing.
  3. Just to be clear, you mean there is a second copy of everything important not on Unraid, and Unraid is keeping a duplicate for you? Unraid by itself is not a backup.
  4. Yeah, I figured your nym implied a rather warm climate, but at least for a few months out of the year you get the benefit. Unfortunately the last time you really could have used the extra heat you had no electricity, so there's that. In your climate, solar panels FTW.
  5. NO. External disk enclosures are only viable if the disks all have a unique full-bandwidth path to the main system: either a unique eSATA cable PER DISK, which I've personally never seen, or SAS cables that provide full bandwidth using fewer cables. Because Unraid requires talking to all disks simultaneously for parity to reconstruct data, anything that shares a communication path among multiple disks, like port multipliers and USB enclosures, is going to be very bad. SAS is the only "low budget" method I know of that works. I'll bet it's the motherboard. I hardly ever see CPU failures unless physically induced by bent pins or lack of proper cooling; it's usually the power conditioning circuits on the motherboard that give up.
  6. Plus, in another couple of months the power isn't "wasted" anymore, it's just a low-powered space heater. Granted, it's not as efficient as a heat pump, but at least you are getting all the use out of the kWh. The cooler your climate, the less overall server consumption actually matters; just put the "waste" heat to good use and keep your office cozy.
  7. That has not been the case for a while now. You can have multiple pools, each with its own filesystem, so you could have a single-disk XFS pool, a single-disk BTRFS pool, a RAID0 BTRFS pool, a RAID1 BTRFS pool, etc.
  8. If your BIOS has the option, the USB keyboard emulates a legacy keyboard. That's why I wanted you to look in your BIOS for USB legacy options.
  9. Unless all the brand new drives happened to be in the same box that fedups kicked across the warehouse. Never source multiple drives at the same time if you can help it.
  10. Not relevant to keyboard legacy support.
  11. Check for USB legacy support in the BIOS.
  12. You will need to disable things one at a time to narrow it down. Start by bringing up the local GUI to watch the status and disconnecting the network cable. The next step would be disabling the Docker and VM services, not just stopping the containers and guest OSes. Since it's happening every 30 seconds, it shouldn't take long to rule things in or out.
  13. To be fair, most of the issues we see here on the forums aren't technically Limetech's software. In your AMD example, the best Limetech can do is pick the least buggy version of the drivers provided. When they update to the latest driver from third parties, who knows how it will play out. If you document the issue and it's solvable by rolling back the AMD code, that's what will happen. Hopefully all these sorts of issues get caught early in the RC cycle and get ironed out before the full release.
  14. Anyone here use bubbaraid? It was a repackage of Unraid that added "all the tools". Quite useful as an adjunct to Unraid back in the day.
  15. I'm guessing the 13TB was successfully applied at some point, and it's programmed to not allow reductions in size for data loss reasons. If you reduce the size of a vdisk, it's entirely possible to corrupt it if the space reduction discards a used portion of the image. I suspect you will need to either manually reduce the size using the command line, or back up the VM using Windows image backup or some other partition-aware backup like Acronis, create a new VM with the size of disk you want, and restore your backup.
  16. The screenshot you posted for Sonarr shows Container Path: /data, Host Path: /mnt/user/appdata/data/. If the full pair of paths doesn't match between the applications, they can't find the files.
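      If you do end up shrinking it from the command line, a rough sketch of the usual qemu-img approach is below; the image path and target size are placeholders, and the partition inside the guest must already be shrunk to fit below the new size before you touch the image, or you will corrupt it:

        # Sketch only -- path and size are hypothetical; shut the VM down first
        qemu-img info /mnt/user/domains/Windows10/vdisk1.img      # confirm the current virtual size
        qemu-img resize --shrink /mnt/user/domains/Windows10/vdisk1.img 500G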
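      To illustrate what "matching" means, here is a hedged example expressed as plain docker run commands rather than the Unraid template fields; the host path and image names are just placeholders for whatever you actually use. Both containers map the same host path to the /data path the apps hand to each other:

        # Hypothetical example: both apps end up seeing the same files under /data
        docker run -d --name sabnzbd -v /mnt/user/data:/data linuxserver/sabnzbd
        docker run -d --name sonarr  -v /mnt/user/data:/data linuxserver/sonarr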
  17. Diagnostics and a brief description of the hardware involved (controller, enclosures, etc.) would be helpful.
  18. When you assign multiple containers to the same network, port mapping no longer applies, as you found out. You must change the listening ports in the app itself, not in the container configuration. How that is accomplished is unique to each application, so you will need to find documentation for each app that needs to change. For instance, try googling "sabnzbd change port".
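      As a rough example of what that can look like for SABnzbd (the appdata path below is a guess at a typical Unraid layout, so adjust it to your setup):

        # Stop the container, then edit the port SABnzbd itself listens on
        nano /mnt/user/appdata/sabnzbd/sabnzbd.ini
        # in the [misc] section, change:  port = 8080  ->  port = 8081
        # then start the container again and update any apps that point at it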
  19. It's nice having a little margin so if things don't go to plan you aren't left holding the bag. How often has it been called into action? The issue I typically see is that people test under the best conditions, then when things actually happen it's not as clean and pretty as your test scenario. For example, you forgot you had a long-running file copy open in a terminal, so the timeout waits much longer to kill things, or if you are there onsite watching it you frantically try to log in to close things down, all the while the battery backup is beeping faster and faster. The wifi access point you were connected to is on a different battery backup, just a small one, and it dies as you are logging in with a laptop that hasn't been charged and is taking a painfully long time to boot up; your primary PC is already shut down, and you can't find a patch cable long enough to reach the stepstool you are balancing the laptop on. Having an extra 5 minute cushion can mean the difference between a clean shutdown and a 24 hour parity check. Can you tell I speak from first-hand experience? 🤣