devros


Posts posted by devros

  1. It's been a bad year for boot drives for me. My 2nd one this year now seems to be on the way out. Unfortunately, my SM motherboard doesn't have any USB 2 ports. For now I can shut the server down, pull the USB drive, repair it on my Mac, and my Unraid server will boot again, but it's getting to the point where it will only last a few days before Linux kicks it offline. dmesg is filled with:

     

    [349517.230113] sd 0:0:0:0: [sda] tag#0 access beyond end of device
    [349517.230115] blk_update_request: I/O error, dev sda, sector 625365 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
    [349517.230123] sd 0:0:0:0: [sda] tag#0 access beyond end of device
    [349517.230124] blk_update_request: I/O error, dev sda, sector 625366 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
    [349517.230132] sd 0:0:0:0: [sda] tag#0 access beyond end of device
    [349517.230133] blk_update_request: I/O error, dev sda, sector 625367 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
    [349517.230141] sd 0:0:0:0: [sda] tag#0 access beyond end of device
    [349517.230143] blk_update_request: I/O error, dev sda, sector 625368 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
    [349517.230151] sd 0:0:0:0: [sda] tag#0 access beyond end of device
    [349517.230152] blk_update_request: I/O error, dev sda, sector 625369 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
    [349517.230160] sd 0:0:0:0: [sda] tag#0 access beyond end of device
    [349517.230167] sd 0:0:0:0: [sda] tag#0 access beyond end of device
     

    The same thing happened to me in June, so I can't automatically do a license transfer to a new thumb drive.

     

    I know the main bit of advice here is to use a USB2 port, but since I have none, does using a small USB2 hub plugged into a USB3 port make any sense?

     

    Also, based on the dmesg errors, I think I'm running out of runway before this totally dies. Is it possible to have the system allow me to do another USB change within a year? I have the full Pro license.
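
    In the meantime I'm grabbing a backup of the flash while it's still readable so a replacement stick will be quick to rebuild. Roughly this (the backup path is just my own choice; /boot is where Unraid normally mounts the flash):

    # copy the flash contents to the array while the stick still reads
    mkdir -p /mnt/user/backups/flash
    tar -czf /mnt/user/backups/flash/flash-$(date +%Y%m%d).tar.gz -C /boot .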

     

    thanks,

    dev

  2. I recently did the upgrade to 1.5 and can report basically the same issues @dumurluk had. Lots of CMOS resets that afternoon :(

     

    Over the past few years I've ordered millions of dollars of SM server gear for our company's colos.  I think I'm just going to forward this thread to our rep and tell them to escalate till it's fixed.

  3. I just updated the Docker container to the latest version. Running 6.1.71. If I log in locally, there are no adopted devices. If I go in through the cloud, it shows the site online with the right number of devices, clients, etc., but when I click Launch, it connects and there is nothing there. If I restart the controller, I'll get alerts that all of my devices have reconnected, but still nothing in the GUI.

     

    Anything obvious I might be missing?

     

    Thanks,

     

    -dev

  4. On 8/9/2020 at 9:22 PM, teiger said:

    Hello,

    I'm still at the beginning but quite interested in ZFS since I only want to use SSDs.

    Sadly, I already created a ZFS pool with "zpool create ABC raidz /dev/sdb /dev/sdc", and now when I try "zpool create -m /mnt/SSD SSD mirror sdx sdy" the message:

     

    
    /dev/sdb is in use and contains a unknown filesystem.
    /dev/sdc is in use and contains a unknown filesystem.

    is shown.

    I'm afraid that the mount point may be different when I don't use the zpool create -m command.

     

    I tried deleting the filesystem (writing zeros over the first 100 MB) but it didn't help.

    wipefs -a /dev/sdx
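
    Spelled out, the full sequence I'd try is roughly this, assuming the old pool really is the "ABC" one from your first command and the disks are /dev/sdb and /dev/sdc (double-check the device names before wiping anything):

    zpool destroy ABC                                        # release the disks from the old pool
    wipefs -a /dev/sdb                                       # clear leftover ZFS labels / old filesystem signatures
    wipefs -a /dev/sdc
    zpool create -m /mnt/SSD SSD mirror /dev/sdb /dev/sdc    # re-create as a mirror with an explicit mountpoint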

     

  5. Two things I always do when repurposing a drive (not that it applies to this situation) are:

     

    1) Remove all partitions

    2) wipefs -a /dev/sdX

     

    I wish I'd known about that last one much earlier. It would have saved me a lot of grief removing MD superblocks, ZFS metadata, etc.
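
    For reference, roughly what those two steps look like, using sgdisk for step 1 (any partitioning tool works; /dev/sdX is a placeholder for the real device):

    sgdisk --zap-all /dev/sdX   # 1) remove all partitions (wipes the GPT and MBR structures)
    wipefs -a /dev/sdX          # 2) clear any remaining filesystem, MD, and ZFS signatures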

  6. Looks like the Plex server is going to survive the night without me having to pause. I was originally just going to remove a drive, but then realized I already had a larger, pre-cleared one around, so I figured I'd try to kill two birds with one reboot.

  7. Two questions here.

     

    1) If I'm running "dd bs=1M if=/dev/zero of=/dev/md1 status=progress" in screen so I can zero the drive and then remove it from the array without losing parity, is there any reason I couldn't just suspend the process this evening, when the server is going to be under heavy use, and resume it much later tonight? Based on what it's doing I think that should be OK (rough sketch of what I mean at the end of this post).

     

    2) If I have a new, bigger drive that I have already pre-cleared, could I just change the drive assignment and preserve parity?
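
    Concretely, for question 1 what I'd run is something like this (the pgrep pattern is just an example; Ctrl-Z and fg inside the screen window would do the same thing):

    kill -STOP $(pgrep -f 'of=/dev/md1')   # suspend the zeroing during peak hours
    kill -CONT $(pgrep -f 'of=/dev/md1')   # resume later; dd carries on where it left off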

  8. I'm running the clear array drive script in a screen session right now. In the evening my server is under a pretty heavy load. Is there any issue with just doing a Ctrl-Z to pause the process and then resuming it later?

     

    2nd question: I have cleared a drive and, rather than remove it, want to replace it with a bigger drive that has already been pre-cleared. Can I do that all in one step while preserving parity?

  9. Yup.

     

    Not only do I run my Plex server off this motherboard, but I have about 40-50 HBAs in production at work, all on SM motherboards, sometimes with as many as 4 packed right together.

     

    In 12 years I've never seen this happen.  Not only with the server motherboards at work, but also with this motherboard and the previous SM one on my last server.

  10. On 11/21/2019 at 10:49 AM, ramblinreck47 said:

    @burg3, @rinseaid, @cemaranet, and @cheezdog, can any of y'all verify if these are the settings that need to be done to get IPMI/BMC working with QuickSync active at the same time? I've been really thinking about going with an E-2278G (if Provantage can ever get one) and an X11SCA-F for my Plex server upgrade. I just want to completely understand what settings are needed to make it work if I go this route.

     

     

    [attached screenshot of BIOS settings]

     

    Looks like there is a new BIOS/IPMI Firmware out very recently:

     

    BIOS: 1.1

    IPMI: 01.23.04

     

    No real release notes unfortunately.

     

    Currently I'm running unRAID 6.8.0, have a VGA display, IPMI/LOM console, and Quick Sync with my Plex docker all working great.

     

    My BIOS settings are the same as above, except there no longer seems to be a "Primary PCIE" option since I did the BIOS upgrade.

     

    I have "i915.disable_display=1" in syslinux.cfg

     

    and the following in my "go" file

     

    # load the Intel iGPU driver and make /dev/dri accessible to the Plex container
    modprobe i915
    chown -R nobody:users /dev/dri
    chmod -R 777 /dev/dri

     

    Happy to have been the guinea pig here.  Aside from the "Primary PCIE" option disappearing I'd be curious to know if anyone else notices any other differences with the upgrades.

  11. I built out a new Unraid server several years ago to replace a CentOS 7 server running Docker Compose and several ZFS RAIDZ2 pools (RAIDZ2 being the ZFS equivalent of RAID 6). Since I was using new drives for Unraid, the ZFS plugin was key to me making that choice, because I could easily hook up those enclosures, mount those filesystems, and just copy over all my content.
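
    For anyone planning a similar migration, the import side of it is just a few commands; a rough sketch (the pool name here is hypothetical):

    zpool import              # list the pools visible on the newly attached disks
    zpool import -f oldpool   # import by name; -f because it was last used on the old host
    zfs list                  # confirm the datasets are mounted, then copy the data over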

     

    As was stated above, ZFS on Unraid is the same ZFS as on any other Linux distro. As long as you are comfortable with the CLI, it should be all good.

     

    I run several ZFS production systems at work. Some are multiple HDD RAIDZ2 vdevs pooled together for almost half a PB of storage; that's been running stable for 3-4 years. We have more important DB servers running mirrored HDD pools with SSD caching that we use for snapshotting. Those have also been running 3-5 years, many of them on two bonded 10G NICs. Many of these are just on the stock CentOS 7 kernel, which is still 3.10.x. We recently upgraded the kernels on some of them to the latest stable 5.3.x so we could do some testing with some massive mirrors (24 x 2 on 12G backplanes) with NVMe caching (we needed the improved NVMe support in 5.3.x), and the performance has been incredible.

     

    In 4 years we had one issue where performance went to shit, and we needed to reboot quickly to get the system back online, so we weren't able to determine whether it was a ZFS or NFS issue, but all was good after the reboot.

     

    Probably more info than you needed, but I wanted to answer your 10G question and put something in this thread for people to read later about what I did personally and what our company has done with great results with ZFS.

     

    Cheers,

     

    -dev
