veruszetec

Posts posted by veruszetec

  1. Just had a weird issue - resolved with a reboot, but was very scary - posting here in case anyone else saw the same and needs a sanity check or solution.

    Woke up today, saw there was a UD update, and installed it. An hour later, my server was reporting that it couldn't find my license key, or anything else on /boot - everything was throwing FAT read errors for /boot and sda1. Remounting the USB drive didn't do anything.

    After some investigation, I saw that UD had taken over the USB key and re-identified it as /dev/sdw - no idea why. I no longer had a /dev/sda or /dev/sda1; they simply didn't exist anymore.

    Rebooting resolved the issue, but my understanding is this should never have happened in the first place: I have the auto-mount USB devices option disabled, and no hardware changes were made. This appears to be purely the result of installing the latest UD plugin.

    I'd offer to share logs but they've rolled over and don't have anything useful. Hopefully this was a one-off...
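    In case anyone else hits this, these are the kinds of commands I used to confirm where the flash had ended up (the UNRAID label and the /boot mountpoint are just how my setup looks - adjust for yours):

        # where is /boot actually mounted from?
        findmnt /boot

        # which block device currently carries the UNRAID flash label?
        ls -l /dev/disk/by-label/UNRAID

        # list devices with labels and mountpoints to spot the renamed flash
        lsblk -o NAME,LABEL,SIZE,MOUNTPOINT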

  2. So, title. Just went through my second flash drive in a single year - a Samsung first, and now a SanDisk Ultra Fit (double-checked that it's really dead). Obviously I have something writing to the drive frequently, but I can't seem to catch it in the act with lsof. I have a new drive in there now from a no-name brand that's hopefully better, but I need to track down the offending log or whatever.

     

    Anyone have any ideas how to track down what's killing my drives? Any bash scripts or custom commands?
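     For reference, here's the rough direction I'm planning to try next, in case it helps anyone searching later - this assumes inotify-tools is installed (e.g. via the NerdPack plugin) and that the flash shows up as sda (check with lsblk):

         # log every write/create/delete under /boot with a timestamp
         inotifywait -m -r -e modify,create,delete \
             --timefmt '%F %T' --format '%T %w%f %e' /boot | tee /tmp/boot-writes.log

         # cumulative write counters for the flash device
         awk '$3 == "sda" {print "writes completed:", $8, "sectors written:", $10}' /proc/diskstats

     Letting that run for a while should point at whichever log or plugin is hammering the drive.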

     

     (Also, if staff could prioritize my key replacement email/ticket, that would be super helpful - I'm dead in the water ATM and can't start the array. I'm partially making this post for visibility, since the replacement key process said it could take up to 3 days.)

     

    Thanks!

  3. Hey folks, I recently had a Norco backplane fry on me, taking out 4 drives (plus another 2 I used while troubleshooting).

     

    I fixed the root cause (a Molex connector plugged in backwards), but the backplane is dead. I ordered a new one from Norco's official reseller, ipcdirect.com, but it has yet to ship and all my emails are being ignored. The phone numbers on both Norco's and IPC's websites are disconnected.

     

    Does anyone know how to get in touch with Norco?

     

    Alternatively, does anyone have a spare RPC-4224 (BP-001) backplane they can sell?

  4. Experiencing the same "Not Available" after having to manually reconstruct my docker templates post-corrupted-bootflash-disaster. Very possibly could be something on my end but seeing someone else report the same makes me think otherwise.

     

    Edit: I waited a bit and performed a check for all updates and it went away. Oh well.

  5. Hello,

     

    My flash drive managed to corrupt itself, and unfortunately I didn't have a backup. I reimaged it, fully expecting to have to reconfigure my server (and copied the non-corrupted files over from the old drive - it looks like I mostly lost my Docker templates and VM XMLs), but when I booted back up, four of my disks (one backplane's worth) were not showing.

     

    Info you need to know:

    I have three controller cards:

    2x 8-port

    1x 4-port


    The 4-port one is the one that isn't showing any drives. I don't believe the controller itself is the issue, because when I swapped which backplane it was connected to, the drives on the other backplane showed up. I also tested the original backplane on another controller. I've already replaced the cable, and the issue persists.
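    For what it's worth, these are the generic checks I ran to confirm the controller is detected while its drives aren't - the grep patterns are just broad guesses, adjust them for your hardware:

        # is the HBA still showing up on the PCI bus?
        lspci -nn | grep -iE 'sas|sata|raid|scsi'

        # which disks does the kernel actually see, and over which transport?
        lsblk -d -o NAME,MODEL,SIZE,TRAN

        # any controller or link errors during boot?
        dmesg | grep -iE 'ata[0-9]+|sas|link' | tail -n 50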

     

    Diagnostics attached.

    memoryalpha-diagnostics-20200515-1607.zip

    I waited for the full release of 6.7.0 before upgrading from 6.6.x, where I never ran into this issue. Since upgrading a couple of weeks ago, I've started getting calls about streams crapping out. I've never had these complaints in two years of running this server, and the box is far from resource-starved at 64 GB of RAM and 2x 8c/16t Xeons.

    I, too, have tracked the issue down to the mover. When the mover is running, the entire system's performance grinds to a halt. Today I had to wait about two minutes for Radarr to even load its GUI - stopping the mover instantly mitigated the symptoms.
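
    A stopgap I'm experimenting with, for anyone else stuck on this build - sketch only, assuming the stock mover script lives at /usr/local/sbin/mover and that ionice (util-linux) and iostat (sysstat) are available on your box:

        # run the mover at idle I/O priority and the lowest CPU priority
        ionice -c 3 nice -n 19 /usr/local/sbin/mover

        # while it runs, watch per-disk utilization to see what's saturating
        iostat -x 5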