clay_statue

Members
  • Posts

    36
  • Joined

  • Last visited

Posts posted by clay_statue

  1. I stopped everything dead and watched the disk activity: it was nil.  So I restarted the data rebuild and it's chugging along at 20 MB/s, which will get the job done in 3 days, an acceptable timeline.

    I realize I might have borked it by initially running unbalance to move the data off, and was seeking wise counsel from the forum of elders.  I hope it'll be fine.  Thanks for your attention to it.

  2. This is sub-optimal.  Most of the data on the recently emulated 6TB disk is just accumulated media that I don't care about, but there's about 2.5TB (mostly movies of my dogs as puppies) that isn't critical yet I'd strongly prefer to keep.

    I did a quick SMART test on the drive and it seems okay.  Originally I tried unbalance to move data off the drive to another disk on the array that is mostly empty, but the speeds on that were also very slow (2.5 MB/s).  So now I am trying to rebuild the disk instead, and it's even slower!

    Why is my data only moving at a trickle?  I'd like to shorten the timeline on recovering this drive from six months down to a day or two if possible.

    fractaltower-diagnostics-20240110-1843.zip

  3. Here's the syslog file and its last entries.  I deliberately cancelled the parity check to see if it would crash, which it did a few minutes later.
     

    Nov 14 10:26:12 FractalTower kernel: docker0: port 1(veth440e05a) entered disabled state
    Nov 14 10:38:40 FractalTower kernel: docker0: port 1(vethb74e54b) entered blocking state
    Nov 14 10:38:40 FractalTower kernel: docker0: port 1(vethb74e54b) entered disabled state
    Nov 14 10:38:40 FractalTower kernel: device vethb74e54b entered promiscuous mode
    Nov 14 10:38:40 FractalTower kernel: eth0: renamed from veth6b04af4
    Nov 14 10:38:40 FractalTower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb74e54b: link becomes ready
    Nov 14 10:38:40 FractalTower kernel: docker0: port 1(vethb74e54b) entered blocking state
    Nov 14 10:38:40 FractalTower kernel: docker0: port 1(vethb74e54b) entered forwarding state
    Nov 14 10:40:38 FractalTower kernel: mdcmd (37): nocheck cancel
    Nov 14 10:40:38 FractalTower kernel: md: recovery thread: exit status: -4
     

    syslog.txt

  4. 13 hours ago, blaine07 said:

    Did you update to most recent docker tag today? What does it not do; nothing strikingly sticks Out on your log snippet…

     

    Sorry, meant to say that I'm getting an "Internal Server Error" when I try to access the webGUI.  It's the same whether I access remotely or natively from within the LAN directly by IP address, so I doubt it's an NGINX proxy issue.

  5. This started happening today.  I tried rolling back to the previous docker image tag.  I cannot see any obvious problem in the nextcloud logs.
     


    -------------------------------------
            _         ()
           | |  ___   _    __
           | | / __| | |  /  \
           | | \__ \ | | | () |
           |_| |___/ |_|  \__/


    Brought to you by linuxserver.io
    -------------------------------------

    To support LSIO projects visit:
    https://www.linuxserver.io/donate/
    -------------------------------------
    GID/UID
    -------------------------------------

    User uid: 99
    User gid: 100
    -------------------------------------

    cont-init: info: /etc/cont-init.d/10-adduser exited 0
    cont-init: info: running /etc/cont-init.d/20-config
    cont-init: info: /etc/cont-init.d/20-config exited 0
    cont-init: info: running /etc/cont-init.d/30-keygen
    using keys found in /config/keys
    cont-init: info: /etc/cont-init.d/30-keygen exited 0
    cont-init: info: running /etc/cont-init.d/40-config
    cont-init: info: /etc/cont-init.d/40-config exited 0
    cont-init: info: running /etc/cont-init.d/50-install
    cont-init: info: /etc/cont-init.d/50-install exited 0
    cont-init: info: running /etc/cont-init.d/60-memcache
    cont-init: info: /etc/cont-init.d/60-memcache exited 0
    cont-init: info: running /etc/cont-init.d/70-aliases
    cont-init: info: /etc/cont-init.d/70-aliases exited 0
    cont-init: info: running /etc/cont-init.d/90-custom-folders
    cont-init: info: /etc/cont-init.d/90-custom-folders exited 0
    cont-init: info: running /etc/cont-init.d/99-custom-files
    [custom-init] no custom files found exiting...
    cont-init: info: /etc/cont-init.d/99-custom-files exited 0
    s6-rc: info: service legacy-cont-init successfully started
    s6-rc: info: service init-mods: starting
    s6-rc: info: service init-mods successfully started
    s6-rc: info: service legacy-services: starting
    services-up: info: copying legacy longrun cron (no readiness notification)
    services-up: info: copying legacy longrun nginx (no readiness notification)
    services-up: info: copying legacy longrun php-fpm (no readiness notification)
    s6-rc: info: service legacy-services successfully started
    s6-rc: info: service 99-ci-service-check: starting
    [ls.io-init] done.
    s6-rc: info: service 99-ci-service-check successfully started

     

     

    Really at a loss here.  It's been rock solid basically since I originally installed it.  Any thoughts would be appreciated.

  7. On 6/2/2020 at 1:03 PM, rojarrolla said:

    Hi! I don´t know why nobody replied you, I did the same thing, bought a T320 diskless and I am using with 10 TB SAS drives from seagate. So far I have not dealt with Plex yet, however, It runs unraid and Im currently installing shinobi. The only thing I had to do was install a Perc H310 card flashed to LS so It could work with Unraid. 

     

    Is this because the default SATA/SAS ports in the tower aren't usable by the Unraid array?  I was looking at the HP ProLiant m350, and the Unraid thread about that one said the native RAID controller didn't work with Unraid; you basically had to use the PCIe slots to connect SATA drives and bypass the RAID controller.

    Additionally I see that the card you are talking about only has two SAS ports internally.  I am unfamiliar with SAS... how many drives can you run off that one card?  Just two?

  8. I cannot access shares on unRaid from my Windows laptop.  I have "remote access to LAN" enabled, and indeed my laptop can ping my unRaid server and my router, so I do have a connection to devices on my LAN.  I can also access my Docker containers' webGUIs.  The WireGuard connection is working in every way *except* that I cannot access my shares.

    Even if I type the network address (\\x.x.x.x\share) into File Explorer, it cannot access the share.

    I tried setting up tunnels both with manually specified NAT port forwarding and with UPnP.  No dice.  Before OpenVPN was deprecated I was using it as a Docker image and could get remote access to my shares no problem... so I dunno what's going on here.

    [SOLVED]
    Had to stop the array and add the WireGuard network pool to Settings > Network Services > SMB > hosts allow = 10.253.0.0/24.  After I did that I could manually enter \\serverip\share in File Explorer and then map the drive.  Big success!
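    For anyone who lands here later: if I understand the setting correctly, that Unraid field maps to Samba's hosts allow directive, so the fix amounts to whitelisting the WireGuard subnet alongside the LAN.  A minimal sketch (10.253.0.0/24 is the default WireGuard pool here; the LAN subnet shown is a placeholder for your own):

```ini
# SMB "hosts allow" as Samba sees it.
# Both subnets are examples; adjust to your LAN and WireGuard pool.
hosts allow = 192.168.1.0/24 10.253.0.0/24
```

    Without the WireGuard subnet in that list, Samba silently refuses connections from tunnel clients even though ping and other services work.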

  9. I think this is the relevant part of the log file...

     

    Jun 22 14:07:16 FractalTower rc.docker: redis: started succesfully!
    Jun 22 14:07:53 FractalTower rc.docker: Plex-Media-Server: started succesfully!
    Jun 22 14:11:10 FractalTower ntpd[2153]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
    Jun 22 14:13:00 FractalTower root: Fix Common Problems Version 2021.05.03
    Jun 22 14:13:01 FractalTower root: Fix Common Problems: Warning: Docker Application binhex-sonarr has an update available for it
    Jun 22 14:13:01 FractalTower root: Fix Common Problems: Warning: Docker Application nzbget has an update available for it
    Jun 22 14:13:01 FractalTower root: Fix Common Problems: Warning: Docker Application redis has an update available for it
    Jun 22 14:13:10 FractalTower root: Fix Common Problems: Error: Machine Check Events detected on your server
    Jun 22 14:13:10 FractalTower root: mcelog: ERROR: AMD Processor family 23: mcelog does not support this processor.  Please use the edac_mce_amd module instead.
    Jun 22 14:13:10 FractalTower root: CPU is unsupported
    Jun 22 16:20:47 FractalTower webGUI: Successful login user root from 10.10.10.25
    Jun 22 16:22:10 FractalTower root: Fix Common Problems Version 2021.05.03
    Jun 22 16:22:12 FractalTower root: Fix Common Problems: Warning: Docker Application binhex-sonarr has an update available for it
    Jun 22 16:22:12 FractalTower root: Fix Common Problems: Warning: Docker Application nzbget has an update available for it
    Jun 22 16:22:12 FractalTower root: Fix Common Problems: Warning: Docker Application redis has an update available for it
    Jun 22 16:22:19 FractalTower root: Fix Common Problems: Error: Machine Check Events detected on your server
    Jun 22 16:22:19 FractalTower root: mcelog: ERROR: AMD Processor family 23: mcelog does not support this processor.  Please use the edac_mce_amd module instead.
    Jun 22 16:22:19 FractalTower root: CPU is unsupported

     

  10. Cool and neat.
     

    Now I can run Phoenix Miner at the server level rather than inside my Windows VM.  Of course I lose the use of my GPU for actually powering my monitors in that scenario, but using a thin client and running the Windows VM (my daily driver) headless would get around that.

  11. I was fidgeting with my IOMMU groups trying to free up some USB ports and, unbeknownst to me, unRaid had taken control of my quad-port NIC that usually gets stubbed and passed through to my pfSense VM.

    This meant that my cable modem was rawdoggin' straight into my server for a short while (less than 10 minutes) before I noticed what was up and unplugged it for the remainder of the tinkering.

    How concerned should I be? I don't have unRaid very well locked down at all because it's supposed to be sandboxed safely in my LAN behind pfSense.

  12. Excellent.  I was worried Google had shunted me to another dead-end thread (as it is wont to do), but this looks like the solution.

     

    Interestingly I had this exact same problem with my Radeon 5700XT GPU, which exposes graphics and sound as different functions on the same bus.  The XML file automatically puts each function on a different virtual bus, in essence splitting a single device that isn't supposed to be split.  That's basically what's happening to our NIC.

     

    For a more thorough explanation of this XML config issue, see SpaceInvaderOne's video about advanced GPU passthrough techniques (under five minutes).  Although the video is about GPUs, the issue and the solution are essentially the same.
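    For concreteness, here's a sketch of the kind of XML fix being described, applied to a two-function device: both host functions are pinned to the same guest bus/slot with multifunction='on', differing only in function number.  All the bus/slot addresses below are placeholders, not taken from my actual config:

```xml
<!-- Both functions of the physical device kept on ONE virtual bus/slot,  -->
<!-- distinguished only by the function number. Addresses are examples;   -->
<!-- match the source addresses to your own hardware (lspci -nn).         -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
</hostdev>
```

    The key detail is that the guest-side bus and slot are identical for both entries; only function differs, which keeps the device whole from the guest's point of view.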

  13. OMFG... this is the magic bullet that finally solved my wonky VM issues!  I will be riding this wave of contentment and joy every time I get a clean shutdown from a VM for years to come.

     

    I've been having a heck of a time trying to get Windows 10 to shut down cleanly; it had been making unRaid freeze.  Fortunately I could still execute a graceful shutdown because I deliberately left my keyboard on a non-passthrough USB controller.  So although I was frozen out of the webGUI and running headless, I could still blindly log into the terminal and type "powerdown".  That probably saved me from hundreds of hard reboots and god knows how many hours of parity checks.

     

    For dunderheads like me who are still lost in the weeds and are desperately seeking further clarification I will spell it out in the excruciating detail I wish that I had...

     

    1) Check your IOMMU groups for the device ID 1022:149c or 1022:1487 attached to a USB controller called "Starship/Matisse".  If you are trying to pass that through, that's (at least part of) what's causing boot and/or shutdown problems with your VM.
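    If it helps, step 1 can also be done from the terminal.  This is just a sketch: the sample text stands in for real lspci -nn output so the filter is visible end-to-end; on your server you'd pipe lspci itself.

```shell
# Filter lspci -nn style output for the FLR-affected AMD USB controller IDs.
# The here-string below is sample output; on a real box use:
#   lspci -nn | grep -E '1022:(149c|1487)'
sample_lspci='0a:00.1 USB controller [0c03]: AMD Matisse USB 3.0 Host Controller [1022:149c]
0b:00.3 USB controller [0c03]: AMD Starship USB 3.0 Host Controller [1022:148c]'
affected=$(printf '%s\n' "$sample_lspci" | grep -E '1022:(149c|1487)')
echo "$affected"   # only the 149c controller matches in this sample
```

    Any line that comes back is a controller affected by the FLR quirk; note its ID for the syslinux.cfg step below.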

     

    solutions...

     

    2) Don't stub it and don't pass through the entire controller; instead pass through individual devices.  This didn't work in my case because my Focusrite external sound card was giving me demonic sound unless I passed through the whole USB controller.  (I also fell down the rabbit hole of fidgeting with MSI interrupts, to no avail.)

     

    or....

     

    3) Edit the /syslinux/syslinux.cfg file on your unRaid USB (don't use Notepad; use Notepad++ or WordPad if on Windows).  You will see the various unRaid boot menu options listed in there.  Under the first menu option will be "append blehblehblehstuff initrd=/bzroot".  That's where you need to put "pcie_no_flr=1022:149c,1022:1487" without the quotes.  If you typically boot from another menu option, put it there instead.  In my case the file looked like:

     

    default menu.c32
    menu title Lime Technology, Inc.
    prompt 0
    timeout 50
    label Unraid OS
      menu default
      kernel /bzimage
      append pcie_no_flr=1022:149c,1022:1487 vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot
    label Unraid OS GUI Mode
    and so on...

    This worked to get Windows shutting down nicely.

     

    However my Ubuntu VM was still not shutting down cleanly, even after editing the syslinux.cfg file.  That's because this bug is a quirk between the Linux kernel and this specific USB controller, and Ubuntu was still flubbing the shutdown because it runs more or less the same kernel as unRaid.  So the final step is to get Ubuntu to behave itself...

     

    4) Edit the kernel boot parameters in /etc/default/grub: find the line beginning with "GRUB_CMDLINE_LINUX_DEFAULT" and add your parameter (pcie_no_flr=1022:149c,1022:1487) to the text inside the double quotes, after the words "quiet splash".  (Be sure to add a space after "splash" before your new parameter.)  Then save the file and close the editor.

     

    5) sudo update-grub

     

    6) restart ubuntu
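    Steps 4 through 6 can also be done non-interactively.  A sketch, operating on a temporary copy of the file for safety; on the real system you'd target /etc/default/grub and then run sudo update-grub:

```shell
# Append the FLR workaround to the kernel command line in a grub config.
# Working on a throwaway copy here; point this at /etc/default/grub for real.
param='pcie_no_flr=1022:149c,1022:1487'
grub_copy=$(mktemp)
cat > "$grub_copy" <<'EOF'
GRUB_DEFAULT=0
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""
EOF
# Insert the parameter inside the existing quotes, right after "quiet splash".
sed -i "s|quiet splash|quiet splash $param|" "$grub_copy"
grep GRUB_CMDLINE_LINUX_DEFAULT "$grub_copy"
# after verifying the edit on the real file, run: sudo update-grub
```

    The grep at the end is just a sanity check that the parameter landed inside the quotes rather than on its own line.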

     

    The earlier comments in this thread and the following three links are the source of everything I just described:

     

    https://forum.level1techs.com/t/attention-flr-kernel-patch-fixes-usb-audio-passthrough-issues-on-agesa-1-0-0-4b/151877

     

    https://old.reddit.com/r/VFIO/comments/eba5mh/workaround_patch_for_passing_through_usb_and/

     

    https://wiki.ubuntu.com/Kernel/KernelBootParameters

  14. So I watched the video, but this apparently only works when the primary vdisk location is set to "auto", in which case increasing the size is fairly straightforward.


    How do you increase the size of a manually assigned vdisk?  Near as I can tell, if you use a manually assigned vdisk there is no way to increase its size, and the only way to add capacity is to attach a second vdisk (a D: drive).
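    In case anyone else ends up here: assuming the manually assigned vdisk is a raw image, the image file itself can be grown in place while the VM is stopped, then the partition extended inside Windows with Disk Management.  A sketch using a throwaway sparse file; the real path would be your own vdisk, e.g. something like /mnt/user/domains/Win10/vdisk1.img (that path is an example, not gospel):

```shell
# Grow a RAW vdisk image in place. Demonstrated on a throwaway sparse file;
# substitute your real vdisk path and make sure the VM is stopped first.
img=$(mktemp)
truncate -s 1G "$img"    # stand-in for the existing 1 GiB vdisk
truncate -s +1G "$img"   # grow the raw image by another 1 GiB
new_size=$(stat -c %s "$img" 2>/dev/null || stat -f %z "$img")
echo "$new_size"         # 2147483648 bytes, i.e. 2 GiB
rm -f "$img"
```

    For qcow2 images the equivalent would be qemu-img resize; with either format, Windows still needs the partition extended afterwards to see the new space.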

  15. So every time I shut down a Windows VM, unRaid crashed and became inaccessible from the webGUI.  I went through every "solution" available to me: Legacy vs UEFI boot, manually altering the registry to allow MSI interrupts, uninstalling various device drivers... EVERYTHING.

    Turns out there is an issue with AMD GPUs when using dual monitors.  If both monitors are connected via DisplayPort, things can get buggy.  However, if you use one DP and one HDMI, somehow it works out nicely and VMs can shut down cleanly without crashing unRaid.

     

    FYI The GPU in question was the Sapphire Pulse 5700XT

  16. Don't you hate when you finally find the thread with your exact issue and it just ends without a solution?

     

    I have a Windows 10 VM with GPU passthrough that shuts down cleanly no problem.  However the Windows 10 VM that has GPU passthrough AND nvme passthrough crashes unraid.  I can still do a graceful powerdown from the command line, because despite running headless unraid still accepts keyboard input so type "root / password / powerdown" will shutdown the system without having to do a hard reset.  However it would be nice to figure out what about nvme passthrough is causing it to crash unraid when you shutdown the VM.