gerard6110

Posts posted by gerard6110

  1. Now on 6.9.2.

    The issue is not solved. Drives are still not staying spun down because of "read SMART" events:

    May 19 13:23:56 Stacker emhttpd: read SMART /dev/sdh
    May 19 13:24:02 Stacker emhttpd: read SMART /dev/sdi
    May 19 13:24:11 Stacker emhttpd: read SMART /dev/sdl
    May 19 13:24:17 Stacker emhttpd: read SMART /dev/sdm
    May 19 13:48:48 Stacker emhttpd: spinning down /dev/sdf
    May 19 13:48:53 Stacker emhttpd: spinning down /dev/sdm
    May 19 13:54:27 Stacker emhttpd: spinning down /dev/sdl
    May 19 13:54:42 Stacker emhttpd: spinning down /dev/sdg
    May 19 13:56:36 Stacker emhttpd: spinning down /dev/sdi
    May 19 13:57:18 Stacker emhttpd: spinning down /dev/sdk
    May 19 14:00:15 Stacker emhttpd: read SMART /dev/sdg
    May 19 14:00:15 Stacker emhttpd: read SMART /dev/sdl
    May 19 14:00:20 Stacker emhttpd: read SMART /dev/sdb
    May 19 14:00:36 Stacker emhttpd: read SMART /dev/sdf
    May 19 14:02:49 Stacker emhttpd: spinning down /dev/sdh
    May 19 14:08:13 Stacker emhttpd: read SMART /dev/sdh
    May 19 14:08:26 Stacker emhttpd: read SMART /dev/sdm
    May 19 14:08:26 Stacker emhttpd: read SMART /dev/sdk
    May 19 14:08:34 Stacker emhttpd: read SMART /dev/sdi
    May 19 14:17:13 Stacker emhttpd: spinning down /dev/sdb
    May 19 14:26:34 Stacker emhttpd: spinning down /dev/sdg
    May 19 14:31:02 Stacker emhttpd: read SMART /dev/sdg
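
    (For anyone following along: a simple way to watch these events live, using the standard Unraid syslog path:)

    # Follow spin-down and SMART-read events as they happen
    tail -f /var/log/syslog | grep -E "read SMART|spinning down"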

  2. As SpaceInvaderOne's excellent videos will cover 6.9.x from now on, I decided to put all three of my servers on it, even though one of them had lost WOL (which basically meant running up and down the stairs, as it is a media and main backup server that does not run all the time). So today I dug in deep trying to fix it, and I did!

     

    Apparently, the command "ethtool -s eth0 wol g" in the Sleep plugin's "Custom commands before sleep:" box is not being executed.

     

    By simply removing the command from the Sleep plugin and creating a bash script that runs at each array start, WOL is operational again.

     

    Steps:

    - If not already installed, install the "User Scripts" plugin;

    - Optionally (but recommended), install the "Custom Tab" plugin if you want the script to appear on the menu tabs;

    - Create a new script, e.g. "ReActivateWOL", with the following lines:

     

    #!/bin/bash
    # Re-enable magic-packet Wake-on-LAN on the primary NIC
    ethtool -s eth0 wol g

    and save changes.

     

    - Change the schedule from "Schedule Disabled" to "At Startup of Array" (maybe "At Stopping of Array" also works; I have not tried that yet)

    - and click Done.

     

    To run ReActivateWOL immediately:

    - Stop Array

    - Start Array

     

    Now WOL should work again after Sleep and even after Shutdown (in my case).
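
    (A quick way to verify that the setting took effect, assuming eth0 is your primary NIC as in the script: query ethtool; "Wake-on: g" means magic-packet wake is armed.)

    # Check the current Wake-on-LAN mode of eth0
    ethtool eth0 | grep Wake-on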

     

    Hope it will work for you too.

  3. On 4/8/2021 at 10:40 AM, jonathanm said:

    You are missing the distinction between a controlled shutdown / reboot and a crash / lost-power situation. The issue seems to be that on some systems and configurations, the proper methods of shutting down and rebooting the array via the GUI or command line are not acting correctly, forcing a hard shutdown and triggering the appropriate parity check.

     

    Pushing the shut down button in the Unraid management GUI should NOT cause a parity check on subsequent boot, but something is going wrong in the shutdown routine.

     

    I believe there is a timeout to kill processes that is being ignored, and altering a setting and saving the changes fixes the issue. At least that's my recollection of the resolution.

    https://forums.unraid.net/bug-reports/stable-releases/691-parity-check-after-clean-shutdown-or-reboot-r1358/?tab=comments#comment-13674

     

     

    You are absolutely right. It happened to me too. Changing timeouts didn't help.

    Just by coincidence, I played around with the VM guest additions after updating to the latest virtio driver.

    It then worked. For this server that was important, as it runs a daily VM and is basically shut down after use (I believe sleep didn't work).

     

    But not on my other server, where no Windows VM is running; only an XPEnology NAS, for which I don't think guest additions exist. There I still get a parity check after a controlled reboot from the dashboard.

    Fortunately, that (small) server is supposed to run all the time, so it is not really a problem to cancel the parity check after an occasional reboot.

    But I am just reporting that something is still broken.

  4. After upgrading to the above beta version I can no longer WOL that server.

    The "g" WOL option was enabled in the Sleep plugin (as before).

    The ethtool instruction was never required before, but after upgrading it was present in the "Custom commands before sleep" box. This did not help.

    Then I removed it (to restore the previous situation). Again, to no avail.

    Then I re-added the instruction to both the before and after boxes. Again, to no avail.

    Note: Instead of sleep I had to use shutdown, because of a SAS add-on card that would otherwise not initialize, leaving a number of hard drives missing. But since WOL worked while using the shutdown option in the Sleep plugin, it was even better: even less power usage.

     

    Any advice?

  5. Hi ViproXX,

    Could you share your script for issuing the nvidia-smi -pm 1 command? Particularly as I have two identical GTX 1660 Super video cards used for two separate gaming VMs (for in-home streaming) and a normal VM (not always running): does one use the GPU's UUID, or how does one send the command to the correct GPU?

    Note: As at least one GPU is used for either a gaming VM or the normal VM, I already have background scripts that shut down one VM, wait for 30 seconds, and then start another VM. This works perfectly. It would be nice to add the -pm 1 command to that same script.
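
    (Not ViproXX's actual script, but a minimal sketch of how one could target a single card: nvidia-smi accepts an index, UUID or PCI bus ID via -i, and the UUIDs are listed by "nvidia-smi -L". The VM names and the UUID below are placeholders.)

    #!/bin/bash
    # Switch from one VM to the other, then enable persistence mode
    # on one specific GPU. Placeholder UUID: take yours from "nvidia-smi -L",
    # or use the index instead, e.g. -i 0.
    GPU="GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

    virsh shutdown "GamingVM1"     # placeholder VM name
    sleep 30
    nvidia-smi -i "$GPU" -pm 1     # persistence mode for that card only
    virsh start "NormalVM"         # placeholder VM name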

     

  6. To Johnnie: Thanks for the explanation. Since going back to pre-6.9 is therefore quite an issue (because of the major system changes), my suggestion would be that upgrading to 6.9 is only allowed after a backup of the flash drive, or that at least a big warning is displayed, like:

    "WARNING: If you have a cache pool, a flash backup is mandatory, because if you go back to pre-6.9 after upgrading to 6.9 you WILL lose your cache pool, and therefore possibly all your Dockers and VMs (if you have set up your system following our advice: on the cache only)."

  7. To piyper: I did not try that, and it was not the purpose of the thread. My purpose was only to have the license key on a separate USB drive (or at least as a user option), so that if the main USB becomes read-only, as has now happened to me twice (albeit on two different systems), I do not have to go to Limetech to ask for a replacement license key. The second time my USB became read-only, it was within just 4 months (I still do not know why it happened, and no tool can fix it, as it is a USB hardware issue). For Limetech it would not matter, as the key is and shall remain linked to one and the same USB ID. So basically I would like to see:

    USB1, with:

    - UNRAID OS

    - At the user's option: the license key (or optionally on USB2)

    Boot sequence (sketched in shell form after this list):

    - Boot from USB1

    - (First) try to read the license key from USB2 and authenticate (this works even if USB2 has become read-only)

    - If not found on USB2, proceed with USB1: check the license key and authenticate

    - Continue boot
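
    (Purely as illustration, the proposed lookup in shell form; the paths and file name are hypothetical, not Unraid's actual boot code.)

    #!/bin/bash
    # Hypothetical boot-time key lookup: prefer the dedicated license USB,
    # fall back to the main boot USB. Paths and file name invented for illustration.
    if [ -f /license/config/license.key ]; then   # USB2, can be mounted read-only
        KEYFILE=/license/config/license.key
    else
        KEYFILE=/boot/config/license.key          # USB1, the normal location
    fi
    # ...authenticate $KEYFILE against the USB GUID it is bound to, then continue boot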

  8. Hi,

    After installing 6.9.0-beta22 and updating the VMs (as per the SpaceInvader One video), I can no longer connect to my Windows 10 VMs via Remote Desktop. With Nvidia video passthrough, I have no VNC.

    Before updating the VMs I still could.

    The only change would be virtio to virtio-net.

    Manually changing it back to virtio did not help, obviously.
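
    (For anyone wanting to compare: the NIC model sits in the VM's XML and can be inspected from the command line; "Windows10" is a placeholder VM name.)

    # Show the network interface section of the VM definition
    virsh dumpxml "Windows10" | grep -A 3 "<interface"
    # Look for <model type='virtio-net'/> (after the template update)
    # versus <model type='virtio'/> (before).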

    So what else has changed?

    Also, when booting, the VMs run at 100% CPU on all 6 cores assigned to them.

    Of course I could go back to 6.8.3 stable, but I wish to stay on this one.

    Any thoughts?

  9. OK, doron, no worries.

    Thank you very much for spending the time and trying to get this to work.

    Actually, it would be nice if Limetech incorporated this, so that the re-issuing of license keys is solved (particularly for a failure within 12 months of the first replacement; as in my case, where I must hope my current Unraid USB does not die for a year). I hope one day Limetech makes it possible to put the license key on a second LICENSE USB (mounted read-only, and thus much less likely to go bad), with the system itself on the main UNRAID USB, where Unraid first tries to read the key from the license USB and, if not found there, reads it from its own USB (at the user's option).

    For Limetech it would not matter because one still needs a license key linked to a usb ID (no problem with that).

    And if the system USB then goes bad, it can easily be replaced using a backup.zip, and off we go.

    Once again, thanks a lot for trying.

  10. You're right.

    Unintentionally, I booted without the BD, and the interface was white.

    With your new bzoverlay the boot process completed successfully, that is:

    - the first time, the BD was mounted but showed 0 space free (red, completely full), and the disk light kept flashing;

    - I then turned on automount and rebooted;

    - this time the BD at least showed normal colors; however,

    - I had to turn off the parity check that was running, apparently due to an unclean shutdown;

    - I tried another reboot, and again a parity check was running.

     

    Here is the new mount output:

    proc on /proc type proc (rw)
    sysfs on /sys type sysfs (rw)
    tmpfs on /dev/shm type tmpfs (rw)
    tmpfs on /var/log type tmpfs (rw,size=128m,mode=0755)
    /dev/sdb1 on /license type vfat (ro,shortname=mixed)
    /dev/sda1 on /boot type vfat (rw,noatime,nodiratime,dmask=77,fmask=177,shortname=mixed)
    /boot/bzmodules on /lib/modules type squashfs (ro)
    /boot/bzfirmware on /lib/firmware type squashfs (ro)
    hugetlbfs on /hugetlbfs type hugetlbfs (rw)
    /mnt on /mnt type none (rw,bind)
    tmpfs on /mnt/disks type tmpfs (rw,size=1M)
    /dev/md1 on /mnt/disk1 type xfs (rw,noatime,nodiratime)
    /dev/nvme1n1p1 on /mnt/cache type btrfs (rw,noatime,nodiratime)
    shfs on /mnt/user0 type fuse.shfs (rw,nosuid,nodev,noatime,allow_other)
    shfs on /mnt/user type fuse.shfs (rw,nosuid,nodev,noatime,allow_other)
    /dev/sda1 on /mnt/disks/BOOTDISK type vfat (rw,noatime,nodiratime,nodev,nosuid,umask=000)
    /mnt/cache/system/docker/docker.img on /var/lib/docker type btrfs (rw)
    /mnt/cache/system/libvirt/libvirt.img on /etc/libvirt type btrfs (rw)

     

    Also, there was no "initramfs unpacking failed" message.

    The plugin update also worked.

    So, almost there? Apart from the unclean shutdown.

    Oh, and the 16 GB flash (UR) is now showing as 8 GB. Sufficient for Unraid of course, but still ...

     

  11. OK, noted, although I'm somewhat confused.

    During boot, bzimage, bzroot and bzoverlay are loaded OK. To differentiate between the original UNRAID (UR) flash and the BOOTDISK (BD), I changed the display settings before preparing the BD: UR I changed to black, and afterwards I changed BD to white (my usual theme). With thohell's bzoverlay it does boot from the BD, because the Unraid dashboard opens in white. It also uses the BD's syslinux.cfg, which I had to change to see the log scroll during boot (since my video cards are passed through, the default is video off). But indeed, the mount points are not as we would like.

     

    So now, running with your new bzoverlay, from the log scrolling during boot (listing only the still-visible lines with errors):

    - depmod warning: could not open modules.builtin

    - mount /dev/shm can't find in /etc/fstab

    - modprobe fatal: module bonding not found in directory /lib/modules/4.19.107-Unraid

    - cannot find device "bond0"

    - /etc/rc.d/rc.inet1: line 241: /proc/sys/net/ipv6/conf/eth0/disable_ipv6: no such file or directory

    - modprobe warning module it87 not found in dir /lib/modules/4.19.107-Unraid

    - modprobe warning module k10temp not found in dir /lib/modules/4.19.107-Unraid

     

    IPv4 address: 169.254.28.39  => which is completely wrong.
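
    (Those modprobe errors suggest /lib/modules was never populated, which would also explain why bond0 could not be created and why DHCP never ran; a 169.254.x.x address is the link-local fallback. A few standard checks, for what they are worth:)

    ls /lib/modules/$(uname -r)/   # is the modules tree populated at all?
    lsmod | grep bonding           # did the bonding module load?
    ip -4 addr show                # 169.254.x.x = link-local fallback, no DHCP lease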

  12. @doron:

    Please note the output of "mount" with thohell's bzoverlay: 

     

    proc on /proc type proc (rw)
    sysfs on /sys type sysfs (rw)
    tmpfs on /dev/shm type tmpfs (rw)
    tmpfs on /var/log type tmpfs (rw,size=128m,mode=0755)
    /dev/sdb1 on /boot type vfat (rw,noatime,nodiratime,flush,dmask=77,fmask=177,shortname=mixed)
    /boot/bzmodules on /lib/modules type squashfs (ro)
    /boot/bzfirmware on /lib/firmware type squashfs (ro)
    hugetlbfs on /hugetlbfs type hugetlbfs (rw)
    /mnt on /mnt type none (rw,bind)
    tmpfs on /mnt/disks type tmpfs (rw,size=1M)
    /dev/md1 on /mnt/disk1 type xfs (rw,noatime,nodiratime)
    /dev/nvme0n1p1 on /mnt/cache type btrfs (rw,noatime,nodiratime)
    shfs on /mnt/user0 type fuse.shfs (rw,nosuid,nodev,noatime,allow_other)
    shfs on /mnt/user type fuse.shfs (rw,nosuid,nodev,noatime,allow_other)
    /dev/sda1 on /mnt/disks/BOOTDISK type vfat (rw,noatime,nodiratime,nodev,nosuid,umask=000)
    /mnt/cache/system/docker/docker.img on /var/lib/docker type btrfs (rw)
    /mnt/cache/system/libvirt/libvirt.img on /etc/libvirt type btrfs (rw)

     

    After a reboot with your new bzoverlay: sorry to tell you, but no full boot ...