Posts posted by Idaho121

  1. That rules one thing out, thank you!

     

    Poking around, a lot of people who had similar problems fixed them by changing PCI settings in their BIOS. 

     

    See here and here, among others.

     

    Before I go messing around in there, does anyone have any sense as to what settings I might try? I'll start with the ones from the thread - do either seem like more likely culprits?

     

    (Also, just a side question - if I pull the two cards that have my internal HDs on them and boot, but I don't start the array, will that bork my build? With the external card and GPU, I only have 1 slot left over, and so I'd be juggling unless I just pull them until I get this working.)
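    (One sanity check I can do before touching the BIOS: confirm the card is enumerated on the PCI bus at all. The sketch below uses a made-up lspci line as a stand-in - on the actual server I'd just run `lspci` and grep it.)

```shell
# Stand-in for real output; on the server, use:  lspci_out="$(lspci)"
lspci_out='01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2
02:00.0 VGA compatible controller: NVIDIA Corporation GT218 [GeForce 210]'

# If this prints nothing, the slot/BIOS never enumerated the card,
# and no driver or firmware fiddling will help until it does.
hba_line="$(printf '%s\n' "$lspci_out" | grep -i 'sas\|lsi')"
echo "$hba_line"
```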

  2. SOLVED - I was dumb and didn't push the drives all the way into their slots. They don't sit flush with the front of the cage - they're in about .25".

     

     

    Hi all,

     

    I had a bunch of old drives that I pulled to put 14TB drives in my box, and I figured I might as well try to put them to use. So I picked up:

    An EMC KTN-STL3

    This card

    These caddies

    (MoBo manual for reference)

     

    I threw my disks (shucked from EasyStores) into the caddies, put them in the EMC, put the LSI in my unRaid box in PCI_E6, and then connected them via this cable. The box turned on, but when I booted, nothing showed up. I looked in System Devices, and I didn't see the new card. I knew the PCI slot worked because I had a GPU in there until I pulled it to make room (and I switched my MSI board's BIOS to not look for a GPU on boot).

     

    So I swapped the LSI I had in the PCI_E4 slot down to PCI_E6 and put the new card in PCI_E4. The new card now showed up in System Devices, but nothing showed as attached to it.

      

    I tried moving cables around into different slots in the EMC, but no dice. I had a second cable because Amazon was playing games with shipping, and swapping that out didn't help, either.

     

    Any ideas here on how to get the disks to show? They were all unRaid disks in their most recent lives.
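    (For anyone debugging along: the next thing I plan to check is whether the mpt2sas driver actually brought the card's SAS ports up, since the card shows in System Devices but nothing attaches. The sketch below uses invented dmesg lines as a stand-in - on the box itself I'd grep dmesg directly.)

```shell
# Stand-in sample; on the server:  dmesg_out="$(dmesg | grep -i mpt2sas)"
dmesg_out='mpt2sas_cm0: LSISAS2008: FWVersion(20.00.07.00)
mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x500605b001234567)'

# A "host_add" line means the driver bound and the SAS host came up;
# zero matches points at firmware/IT-mode or slot trouble instead.
hosts="$(printf '%s\n' "$dmesg_out" | grep -c 'host_add')"
echo "$hosts"
```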

     

    cybertron-diagnostics-20240420-2259.zip

  3. Hi all,

     

    Have a problem that started up a little while ago! Would love some help on getting it fixed.

     

    A couple weeks ago, while up and running, the Docker tab started saying that Docker failed to start. My apps were still working, though - I could watch Plex, for instance. When I tried to reboot the server, it hung on unmounting the cache drive and I needed to hard reset. Docker booted when the server booted up again, parity found 0 errors, and all was well.

     

    Then, it happened again. I deleted the Docker image and recreated it, but the shenanigans soon started again.

     

    I did a scrub of the drive and found 15 uncorrectable errors, and the logs said the following:

    Quote

     

    UUID: 90f78699-4bbf-4038-9e0a-169d1b9413ac

    Scrub started: Thu Nov 9 12:25:55 2023

    Status: finished

    Duration: 0:22:07

    Total to scrub: 688.59GiB

    Rate: 530.50MiB/s

    Error summary: csum=15

    Corrected: 0

    Uncorrectable: 15

    Unverified: 0

     

     

    I also had 3 types of errors, examples:
     

    Quote

     

    Nov  9 08:30:58 Cybertron kernel: BTRFS warning (device loop2): checksum error at logical 13727539200 on dev /dev/loop2, physical 14541234176, root 394, inode 22386, offset 491520, length 4096, links 1 (path: usr/bin/ld.gold)

    Nov  9 08:30:58 Cybertron kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0

     

     

    Quote

    Nov  9 11:03:18 Cybertron kernel: BTRFS warning (device sdv1): csum failed root 5 ino 7466245 off 836489216 csum 0x0175a671 expected csum 0x1de53426 mirror 1
    Nov  9 11:03:18 Cybertron kernel: BTRFS error (device sdv1): bdev /dev/sdv1 errs: wr 0, rd 0, flush 0, corrupt 2805, gen 0

     

    Quote

    Nov  9 12:44:25 Cybertron kernel: BTRFS warning (device sdv1): checksum error at logical 742226038784 on dev /dev/sdv1, physical 745455652864, root 5, inode 7009168, offset 2586972160, length 4096, links 1 (path: Media/TV Meta/TV Shows/Titans (2018)/Season 4/Titans - S04E11 - Project Starfire Bluray-1080p.mkv)
    Nov  9 12:44:25 Cybertron kernel: BTRFS error (device sdv1): bdev /dev/sdv1 errs: wr 0, rd 0, flush 0, corrupt 2815, gen 0
    Nov  9 12:44:25 Cybertron kernel: BTRFS error (device sdv1): unable to fixup (regular) error at logical 742226038784 on dev /dev/sdv1

     

    I tried a few of the files and nothing appears to be wrong with them (I wasn't expecting there to be).
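    (For reference, the warnings name each corrupted file in their `(path: ...)` field, so it's easy to list exactly which files are affected. A sketch run against two of the sample lines above:)

```shell
# Two syslog lines from above; on the server, feed in the real syslog.
log='Nov  9 08:30:58 Cybertron kernel: BTRFS warning (device loop2): checksum error at logical 13727539200 on dev /dev/loop2, physical 14541234176, root 394, inode 22386, offset 491520, length 4096, links 1 (path: usr/bin/ld.gold)
Nov  9 08:30:58 Cybertron kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0'

# Keep only the "(path: ...)" part of each checksum warning.
paths="$(printf '%s\n' "$log" | sed -n 's/.*(path: \(.*\))$/\1/p')"
echo "$paths"
```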

     

    Also, last night, I received a note that the Docker image (set at 40GB) was almost full. When I ran Fix Common Problems, it said:

    Quote

    Nov  9 13:46:23 Cybertron root: Fix Common Problems: Warning: Docker image file is getting full (currently 83 % used)

     

    Any help would be appreciated!

  4. Hi all,

     

    I was trying to upgrade my cache drive from a 1TB to a 2TB, but now that I've done so, my dockers are all missing.

     

    Process:

    1. Disabled dockers (and VMs, but I don't use that feature)
    2. Set everything to Cache Yes, then ran mover. *I think this is where I messed up, as not everything was moved over. I poked around, and what remained was the docker image, the libvirt folder, and a few folders of Plex metadata, which I didn't think would bork anything
    3. Shut down, swap out drive
    4. Reboot, start array, format drive
    5. Set appdata/domains (which was missing)/system (and other shares) to Cache Prefer
    6. Run mover. *A few files for Krusader won't move back to the cache drive
    7. Enable docker
    8. Check docker - nothing shows up

     

    I've rebooted a few times, and started and restarted Docker. I've also tried to get the old cache drive to Mount using Unassigned Devices, but the Mount is greyed out. I'm guessing I need to move the files that didn't move from the old cache to the new one, but I can't figure out how to access that drive right now.
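    (My plan once the old drive mounts - read-only, ideally: copy the stragglers over without clobbering anything that moved successfully. The real mount points would be something like /mnt/disks/old_cache and /mnt/cache; the sketch below uses throwaway directories so it touches nothing real.)

```shell
# Throwaway stand-ins for the old and new cache mounts (the real paths
# here are assumptions, e.g. /mnt/disks/old_cache and /mnt/cache).
old=$(mktemp -d)
new=$(mktemp -d)
mkdir -p "$old/system/docker" "$new/system"
echo leftover > "$old/system/docker/docker.img"

# -a preserves ownership/perms/timestamps; -n refuses to overwrite,
# so only the files mover failed to move come across.
cp -an "$old/system/." "$new/system/"
ls "$new/system/docker"
```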

     

    Any help would be appreciated! 

  5. Everything was working fine until a few days ago. Then, Deluge stopped auto-loading/deleting torrent files from the relevant folder. Then, after I restarted to see if that would fix it, I couldn't get into the WebUI anymore, and I'm getting the following errors when loading:

     

    2021-05-19 14:24:15 DEPRECATED OPTION: --cipher set to 'aes-128-cbc' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'aes-128-cbc' to --data-ciphers or change --cipher 'aes-128-cbc' to --data-ciphers-fallback 'aes-128-cbc' to silence this warning.

     

    [warn] Unable to successfully download PIA json to generate token from URL 'https://privateinternetaccess.com/gtoken/generateToken'
    [info] 10 retries left
    [info] Retrying in 10 secs...

     

    I downloaded new files from PIA and have tried a few of their different endpoints - ones that are on the list and that others with this issue reported as working.

     

    Any thoughts on what else to try?

     

    Thanks!

  6. Hey all,

    Was in the middle of upgrading my parity disk. I had unassigned the old one, assigned the new one, and rebuilt parity - that completed.

    Then, I unassigned a 4TB data drive, reassigned the old 10TB parity drive to that slot, and was rebuilding. All dockers off, nothing should have been writing to the array.

    Then, my power went out (it was past the 4TB mark on the rebuild, but I don't think that makes a difference).

    First step - order a new UPS, as the CyberPower one completely failed and went down with the power outage immediately. Check.

    Second step - This is where I'm uncertain. I think the right move is to boot up (array isn't set to auto start), unassign 10TB drive, reassign 4TB drive, rebuild parity, and then restart the process of swapping the drive, but I don't know for sure.

    Also, I know I'm dumb for this, but I didn't save a backup of anything before doing the rebuild.

    Thanks for any advice.

  7. I'd rather not have it offline as long as that process entails, so I'm all for an alternative.

     

    To be clear, process for this if I already have the new drive in the system, precleared, would be:

    1) Turn off array

    2) Unassign 10TB disk from parity

    3) Assign 12TB disk to parity

    4) Turn array back on

    5) Let it rebuild

    6) Once rebuilt, turn off array

    7) Unassign 4TB drive

    8) Assign the 10TB drive to that slot

    9) Turn array back on and let it rebuild

    ?

  8. 12 minutes ago, Frank1940 said:

    What did you see on the monitor screen?

     

    You should probably setup the syslog server.  See here for instructions:

     

       https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/?tab=comments#comment-781601

     

    I am assuming that this is the first time this has happened.  If you have an SSD cache drive, I would be using that rather than the Flash Drive.  (It may be quite some time before it happens again...)

    Monitor just gave me the "Check cables, no signal" message.

     

    This is the first time it happened. I do have an SSD cache drive, so I'll get that set up - is that separate from the syslog server setup instructions you posted?

     

    And thanks, Johnnie - I'll check that out and work through any issues!
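    (Noting for posterity, in case it helps someone: since unRaid runs rsyslog, an alternative to the GUI syslog-server setup is a one-line rule pointing the log at the cache pool. The path below is my assumption - adjust to your share layout.)

```
# Hypothetical rsyslog rule; writes a persistent copy of everything
# to the cache pool so logs survive a crash/reboot.
*.* /mnt/cache/syslog/syslog.log
```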

  9. Hey all,

     

    I've got my server set up to shoot me a health check at midnight every night. Woke up this morning and didn't have it in my inbox. Ethernet lights were off on the mobo, but the rest of the machine was on and lit up.

     

    I couldn't access the drives via Windows or the web interface, so I tried plugging a monitor in. Nothing. After tinkering a bit more and not having any other ideas, I pulled the UPS plug from the wall to see if that would initiate a power down. The UPS ran out (slowly...which is good ANY other time), and when it shut down, so did the server, which tells me that they weren't communicating.

     

    Rebooted the system - everything is working, a parity check started, and I resumed the preclear on the disk I had going.

     

    Does this sound like a crash/freeze on unRaid's part? I probably had 1-2 Plex streams going and was preclearing a disk, but nothing else should have been going on. Does this just sometimes happen, or should I dig in a bit more before it happens again? And, if so, can I get logs other than the ones from once I rebooted? I downloaded the logs (and would attach), but they're only from after the reboot.

     

    Thanks!

    cybertron-syslog-20200304-1316.zip

  10. Hey all,

     

    Just got some read errors on a drive when doing a monthly parity check. I paused the check - should I just let it resume? And should I take the server offline for now, or let it still run? Device is still showing normal operation.

     

    I'm assuming next steps are get a new drive, remove the old one, install the new one, and rebuild?

     

    Thanks! (Edited to add diag to initial post as per first response)

    cybertron-diagnostics-20200101-1427.zip
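    (Side note while I wait: checking the SMART attributes helps tell a dying drive from a cabling blip before committing to a rebuild. The sketch below parses invented smartctl numbers as a stand-in - on the server it'd be `smartctl -A /dev/sdX`.)

```shell
# Invented sample attributes; real data comes from:  smartctl -A /dev/sdX
smart='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8'

# Nonzero pending sectors after read errors usually means the drive
# itself is failing; zeros point more toward cables or the controller.
pending="$(printf '%s\n' "$smart" | awk '$2=="Current_Pending_Sector" {print $NF}')"
echo "$pending"
```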

  11. How's this look as a build?

     

    Case Antec 1200, Xigmatek Elysium, or PC-P80 (whatever I can get on eBay for $100-$150 first)

     

    Cages 4x iStarUSA BPN-DE350SS 3x5.25" to 5x3.5" Hot-Swap Cage https://www.newegg.com/Product/Product.aspx?Item=9SIADZY5ZD8883&cm_re=istar_hot_swap-_-16-215-342-_-Product
    CPU Ryzen 7 2700X https://www.amazon.com/AMD-Ryzen-Processor-Wraith-Cooler/dp/B07B428V2L/ref=sr_1_3?s=pc&ie=UTF8&qid=1528767508&sr=1-3&keywords=ryzen
    CPU Cooler Wraith Prism Included with CPU
    MoBo GIGABYTE AORUS GA-AX370-Gaming K7 https://www.amazon.com/GIGABYTE-GA-AX370-Gaming-K7-FUSION-Motherboard/dp/B06XF3R469
    GPU GeForce 210 https://www.amazon.com/Gigabyte-DDR3-1GB-Graphics-GV-N210D3-1GI-REV6-0/dp/B00FCKCSGC/ref=sr_1_1?ie=UTF8&qid=1528752931&sr=8-1&keywords=Geforce+210+GPU
    Cache Drive Samsung 860 EVO 500gb Owned
    PSU EVGA SuperNOVA 650 P2 Platinum https://www.amazon.com/EVGA-SuperNOVA-PLATINUM-Warranty-220-P2-0650-X1/dp/B010HWDPKW/ref=sr_1_fkmr0_3?s=electronics&ie=UTF8&qid=1528764155&sr=1-3-fkmr0&keywords=evga+650+fully+modular+titanium
    Memory 2x KVR21E15D8/8 Kingston 8GB (1x8G) 2RX8 PC4-2133P DDR4 ECC RAM https://www.ebay.com/itm/KVR21E15D8-8-KINGSTON-8GB-1X8GB-2RX8-PC4-2133P-DDR4-MEMORY/182737952634?epid=5018370312&hash=item2a8c07df7a:g:pl0AAOSwTPRaofyx
    SATA Exp Card 2x LSI SAS 9211-8i 8-port 6Gb/s PCI-E Internal HBA IT Mode flash https://www.ebay.com/itm/LSI-SAS-9211-8i-8-port-6Gb-s-PCI-E-Internal-HBA-Both-Brackets-IT-MODE/152937435505?epid=8016260560&hash=item239bc81171:g:~b8AAOSwZ2laob4M:sc:USPSPriority!10025!US!-1

     

    Drives - 20 total. A mix of 8TB and 4TB reds.

     

    Worries:

    Wraith Prism - solid enough?

    PSU - Enough juice for this? Enough hookups for 20 drives?

    Memory - That work for ECC with that MoBo? Is refurb RAM okay?

    SATA Card - That a good one? Should I plug into the MoBo SATA slots first, or the cards?

     

    Thanks!
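    (My own back-of-envelope on the PSU question, with ballpark assumptions rather than spec-sheet numbers - roughly 2 A at 12 V per 3.5" drive during spin-up, plus ~150 W for the rest of the system at boot:)

```shell
drives=20
amps_per_drive=2           # assumed 12 V spin-up draw per 3.5" drive
spinup_w=$(( drives * amps_per_drive * 12 ))
other_w=150                # assumed CPU/board/cards draw at boot
total_w=$(( spinup_w + other_w ))
echo "$total_w"            # worst case with all drives spinning up at once
```

    So roughly 630 W worst case if every drive spins up simultaneously - tight for a 650 W unit, though the LSI cards support staggered spin-up, which cuts that transient way down.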
