Everything posted by JonathanM

  1. I seem to recall seeing issues with Pop!_OS VMs elsewhere on the forums, have you searched? The normal things to try are different BIOS types (OVMF vs SeaBIOS), chipset emulations (i440fx vs Q35), different disk controller emulations, network types, emulated vs passed-through CPU, fewer cores, less RAM, whatever it takes to get the install completed; then you can try increasing or optimizing resources later.
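     Not an Unraid feature, just a rough Python sketch for keeping track of which combinations have already been tried; it pulls the settings worth varying out of a libvirt domain XML (standard libvirt schema, and the path at the bottom is only an example):

        # Rough helper (not part of Unraid) to summarize the VM settings worth varying
        # when an install keeps failing: loader/BIOS, machine type, CPU mode, disk bus, NIC model.
        import xml.etree.ElementTree as ET

        def summarize(domain_xml):
            root = ET.parse(domain_xml).getroot()
            os_type = root.find("os/type")
            loader = root.find("os/loader")
            cpu = root.find("cpu")
            print("machine :", os_type.get("machine") if os_type is not None else "?")        # i440fx vs Q35
            print("loader  :", loader.text if loader is not None else "SeaBIOS (no loader)")  # OVMF vs SeaBIOS
            print("cpu mode:", cpu.get("mode", "emulated/default") if cpu is not None else "?")
            for tgt in root.findall("devices/disk/target"):
                print("disk bus:", tgt.get("bus"))      # virtio vs sata vs ide
            for mdl in root.findall("devices/interface/model"):
                print("nic     :", mdl.get("type"))     # virtio vs e1000, etc.

        summarize("/etc/libvirt/qemu/popos-test.xml")   # example path only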
  2. If it looks like this in the back, some tape might help a bunch. Both fans should be blowing out, but all those holes below the fans in the slot covers (and any others) that allow air to get inside without going over the drives should be taped off. Temporarily remove the front plastic and make sure all the openings not directly beside a hard drive are taped off. You want all the air that moves through the case to flow over the drives first, so seal off anything that lets air in elsewhere.
  3. You only have one paper clip? 🤣 I'm pretty sure I've only ever seen boxes of 100+. Regardless, it may be useful to get one of the $20 power supply testers; it does the same thing as the paperclip test but gives voltage sanity readings as well. The MB is the prime suspect, does it have a buzzer or speaker attached? A bare MB with CPU and heatsink ONLY, no RAM, no wires connected, NOTHING else, should give beep codes indicating no RAM detected after you momentarily short the 2 power button pins. Add the RAM back and it should indicate either successful POST or video failure, depending on whether the CPU and MB have onboard video. Keep in mind server boards can have EXTREMELY long POST routines, especially after CMOS clearing. I've had to wait literal minutes after pressing the power button to get codes, assuming the board spins up the CPU fan.
  4. You either need to improve cooling or change the warning threshold. It does you no good to get repeated warnings about something you are not going to fix; it just keeps you from acting on other things that need to get fixed ASAP. If you can't improve cooling, at least make sure the temperature swing for any given drive is as small as it can get. Don't allow drives to cool down to room temp and repeatedly get hot; it's better to keep things constantly warm than to allow wild swings. In any case, you must configure the warning temps so they only alert when there is an actual failure, like a fan going bad or excessive dust buildup. Allowing it to keep crying wolf is very bad.
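     Unraid lets you set warning and critical temps per disk in the GUI, so nothing extra is needed there; purely as an illustration of the "only alert above a sane per-drive limit" idea, a minimal Python sketch (device names and limits are examples, and it assumes smartmontools is installed):

        # Illustration only: alert when a drive crosses its own warning limit,
        # instead of a default threshold that cries wolf every warm afternoon.
        import subprocess

        THRESHOLDS = {"/dev/sdb": 50, "/dev/sdc": 50}   # example devices and warning temps (C)

        def drive_temp(dev):
            out = subprocess.run(["smartctl", "-A", dev], capture_output=True, text=True).stdout
            for line in out.splitlines():
                if "Temperature_Celsius" in line:
                    return int(line.split()[9])          # RAW_VALUE column of attribute 194
            return None

        for dev, limit in THRESHOLDS.items():
            t = drive_temp(dev)
            if t is not None and t >= limit:
                print(f"WARNING: {dev} at {t}C (limit {limit}C)")   # hook a notification in here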
  5. The SATA part of the adapters, not the 4 pin ends, is the real issue there: lots of high current with wires very close together. Molding errors, or just tolerances drifting over time in the manufacturing equipment, can cause shorting issues leading to excessive heat and possible fire. There is less risk with a 4-pin-to-SATA adapter than with a SATA-to-SATA splitter; SATA power connectors are just poor designs all around. But yes, getting connectors from a manufacturer with good QC is key, and being able to see each wire at the SATA end is a bonus, which is why IDC beats molded. Molded on both ends of a SATA splitter is just asking for trouble. I'm not saying all molded cables are bad, it's just much easier to ignore or hide bad manufacturing.
  6. Just to be clear, you mean there is a second copy of everything important not on Unraid, and Unraid is keeping a duplicate for you? Unraid by itself is not a backup.
  7. Yeah, I figured your nym implied a rather warm climate, but at least for a few months out of the year you get the benefit. Unfortunately the last time you really could have used the extra heat you had no electricity, so there's that. In your climate, solar panels FTW.
  8. NO. External disk enclosures are only viable if the disks all have a unique full-bandwidth path to the main system: either a unique eSATA cable PER DISK, which I've personally never seen, or SAS cables that give full bandwidth using fewer cables. Because Unraid requires talking to all disks simultaneously for parity to reconstruct data, anything that shares communication for multiple disks, like port multipliers and USB enclosures, is going to be very bad. SAS is the only "low budget" method I know of that works. I'll bet motherboard. I hardly ever see CPU failures unless physically induced by bent pins or lack of proper cooling. It's usually the power conditioning circuits on the motherboard that give up.
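     Some back-of-envelope numbers on why shared links hurt parity operations; the link speeds are nominal and the efficiency factor is a guess, so treat the figures as illustrative only:

        # Rough arithmetic: a parity check reads every disk at once,
        # so whatever link they share gets divided between them.
        def per_disk_mb_s(link_gbps, disks, efficiency=0.8):
            return link_gbps * 1000 / 8 * efficiency / disks

        disks = 8
        print("dedicated 6Gb/s SATA port per disk      :", per_disk_mb_s(6, 1), "MB/s each")
        print("8 disks behind one 6Gb/s port multiplier:", per_disk_mb_s(6, disks), "MB/s each")
        print("8 disks behind one 5Gb/s USB3 enclosure :", per_disk_mb_s(5, disks), "MB/s each")
        # The parity check runs at the speed of the slowest path, so the whole array crawls.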
  9. Plus, in another couple of months the power isn't "wasted" anymore, it's just a low-powered space heater. Granted, it's not as efficient as a heat pump, but at least you are getting all the use out of the kWh. The cooler your climate, the less overall server consumption actually matters; just put the "waste" heat to good use and keep your office cozy.
  10. That has not been the case for a while now. You can have multiple pools, each with their own filesystem, so you could have a single XFS disk pool, a single BTRFS disk pool, a RAID0 BTRFS pool, a RAID1 BTRFS pool, etc.
  11. If your BIOS has the option enabled, the USB keyboard emulates a legacy keyboard. That's why I wanted you to look in your BIOS for USB legacy options.
  12. Unless all the brand new drives happened to be in the same box that fedups kicked across the warehouse. Never source multiple drives at the same time if you can help it.
  13. Not relevant to keyboard legacy support.
  14. Check for USB legacy support in the BIOS.
  15. You will need to disable things one at a time to narrow it down. Start by bringing up the local GUI to watch the status and disconnecting the network cable. The next increment would be disabling the docker and VM services, not just stopping the containers and guest OSes. Since it's happening every 30 seconds, it shouldn't take long to rule things in or out.
  16. To be fair, most of the issues we see here on the forums aren't technically in limetech's software. In your AMD example, the best limetech can do is pick the least buggy version of the drivers provided. When they update to the latest driver from third parties, who knows how it will play out. If you document the issue and it's solvable by rolling back the AMD code, that's what will happen. Hopefully all these sorts of issues get caught early in the rc cycle and get ironed out before the full release.
  17. Anyone here use bubbaraid? It was a repackage of Unraid that added "all the tools". Quite useful as an adjunct to Unraid back in the day.
  18. I'm guessing the 13TB was successfully applied at some point, and it's programmed to not allow reductions in size for data loss reasons. If you reduce the size of a vdisk, it's entirely possible to corrupt it if the space reduction discards a used portion of the image. I suspect you will need to manually reduce the size using the command line, or back up the VM using Windows image backup or some other partition-aware backup like Acronis, create a new VM with the disk size you want, and restore your backup.
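     For the command-line route, something along these lines with qemu-img is the usual shape of it, but treat this strictly as a sketch: the path and target size are examples, and shrinking a vdisk before the partitions inside the guest have been shrunk will destroy data, so keep a backup.

        # HEDGED SKETCH ONLY. Shrink the guest's partitions first with a partition-aware
        # tool and back the VM up; shrinking past data in use corrupts the image.
        import json, subprocess

        VDISK = "/mnt/user/domains/ExampleVM/vdisk1.img"   # example path
        TARGET = "500G"                                     # example target size

        info = json.loads(subprocess.run(["qemu-img", "info", "--output=json", VDISK],
                                         capture_output=True, text=True).stdout)
        print("current virtual size:", info["virtual-size"], "bytes")

        # qemu-img refuses to shrink without the explicit --shrink flag, for exactly this reason.
        subprocess.run(["qemu-img", "resize", "--shrink", VDISK, TARGET], check=True)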
  19. The screenshot you posted for sonarr shows Container Path: /data mapped to Host Path: /mnt/user/appdata/data/. If the full pair of paths doesn't match between the applications, they can't find the files.
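     As a toy illustration of why (only the sonarr pair comes from the screenshot, the download client mapping below is hypothetical), here is the same in-container path landing in two different host folders when the pairs don't line up:

        # Two containers both call it /data, but map /data to different host folders.
        def to_host(container_file, container_path, host_path):
            # Translate a path as seen inside the container to the matching host path.
            if not container_file.startswith(container_path):
                return None
            return host_path.rstrip("/") + container_file[len(container_path):]

        sonarr   = ("/data", "/mnt/user/appdata/data/")   # from the screenshot
        download = ("/data", "/mnt/user/downloads/")      # hypothetical download client mapping

        f = "/data/tv/episode.mkv"
        print("download client writes it to:", to_host(f, *download))
        print("sonarr goes looking in      :", to_host(f, *sonarr))
        # Same /data inside the containers, two different host folders -> sonarr never finds the file.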
  20. Diagnostics and a brief description of the hardware involved (controller, enclosures, etc.) would be helpful.