Posts posted by reggierat

  1. Does 6.2 still have the issue with drive spin-up/spin-down status mis-reporting after waking from sleep? I'm about to make the jump from 6.1.9 and have been using the following command after wake:

    # 0 is the parity disk; the sed (using X as its delimiter) strips /dev/md from each device name, leaving the data-disk numbers
    for disknum in 0 `ls /dev/md* | sed "sX/dev/mdXX"`; do /usr/local/sbin/mdcmd spindown $disknum; done

     

    Just wondering if this is still required?

  2. I'm currently running gfjardim's Syncthing and was wondering if the config files and database are compatible with this. I have a few remaining Dockers I would like to finally get moved over to linuxserver.

     

    To be honest I'm not sure. What I would do is either:

     

    1.  Clean config

     

    or

     

    2. Install this alongside the gfjardim version and use a different container name, ports and appdata. Then examine the appdata and compare what data is within. With some containers it is easy to see what needs to go where, so to speak. With others, not so easy. (A sketch of this is at the end of this post.)

     

    Fair enough, I suppose a clean config is easy enough; I was just hoping it could be even easier :) I won't answer the other 2 threads ;)
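
    For anyone trying option 2, here's a minimal docker run sketch of a side-by-side install. The container name, host ports and appdata path are just examples; PUID 99 / PGID 100 are unRAID's usual nobody:users IDs:

    docker run -d --name=syncthing-test \
      -e PUID=99 -e PGID=100 \
      -p 8385:8384 -p 22001:22000 \
      -v /mnt/user/appdata/syncthing-test:/config \
      linuxserver/syncthing

    The shifted host ports (8385, 22001) keep it from colliding with the gfjardim container while you compare the two appdata folders.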

  3. Still the same: S3 sleep doesn't work properly.

    unRAID: 6.0.1, Dynamix S3 Sleep: 2015.08.13, default settings.

     

    S3 sleep is acting up.

     

    It works after a clean start/restart: after the set period of time it sleeps as it should. After waking up it does not sleep again.

     

    I reset the settings and only changed the timer to 15, excluded the cache disk and turned on the debug log. After the wake-up from Dynamix sleep, "s3 Tower s3_sleep: Disk activity detected. Reset timers." is what I see, even though all the disks are spun down and there is no activity at all. If I spin up all the disks and spin them down again, oddly enough it starts to work again, as in: "Jul 21 06:18:17 Tower s3_sleep: Extra delay period running: 9 minute(s)"

     

    I'm using 6.0.1 and the latest plugin (dynamix.s3.sleep 2015.04.28).

     

    What might be the cause? Any suggestions welcome.

     

    Solution on previous page; it's a known issue and there is a workaround.

  4. Hi guys,

     

    I have recently upgraded the s3_sleep plugin to the latest version, as well as unRAID from 6.0.1 to 6.1 RC2.

     

    Problem is, after that upgrade I have issues all over, mostly with drives not spinning down, or, if they have spun down, s3_sleep still picks them up as running (as per dashboard status).

     

    Is there any way to try and troubleshoot? I have rolled back to 6.0.1.

     

    Some captures below:

     

    root@Storage:~# ps -elf | grep disk

    1 S root      5001    1  0  80  0 - 38196 futex_ 22:39 ?        00:00:00 /usr/local/sbin/shfs /mnt/user0 -disks 32766 -o noatime,big_writes,allow_other

    1 S root      5012    1  0  80  0 - 55127 futex_ 22:39 ?        00:00:00 /usr/local/sbin/shfs /mnt/user -disks 32767 2048000000 -o noatime,big_writes,allow_other -o remember=0

    0 S root    23475 22208  0  80  0 -  1275 pipe_w 23:20 pts/0    00:00:00 grep disk

    root@Storage:~#

     

     

    root@Storage:~# ps -elf | grep mnt

    1 S root      5001    1  0  80  0 - 38196 futex_ 22:39 ?        00:00:00 /usr/local/sbin/shfs /mnt/user0 -disks 32766 -o noatime,big_writes,allow_other

    1 S root      5012    1  0  80  0 - 55127 futex_ 22:39 ?        00:00:00 /usr/local/sbin/shfs /mnt/user -disks 32767 2048000000 -o noatime,big_writes,allow_other -o remember=0

    0 S root    26702 22208  0  80  0 -  1275 pipe_w 23:25 pts/0    00:00:00 grep mnt

     

    Diagnostics with syslog are attached.

     

     

    Please advise what other troubleshooting I can maybe do?

     

    thx

     

    Neo_x

     

    This is a known issue and won't be addressed in the short term, as it is a result of the way limetech polls drive spun-up/spun-down status. Basically the status becomes out of sync when the server wakes up. Workaround in this thread:

     

    http://lime-technology.com/forum/index.php?topic=39355.msg372872#msg372872

     

    Basically, add a post-wake-up command to either spin up or spin down all drives and this will work correctly; a sketch of the spin-up variant follows.
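
    This is the same loop as in my other posts, switched to spinup. Note the mdcmd path varies by release (/root/mdcmd on older builds, /usr/local/sbin/mdcmd on newer ones), so adjust to match yours:

    # spin up parity (0) and every data disk found under /dev/md*
    for disknum in 0 `ls /dev/md* | sed "sX/dev/mdXX"`; do /usr/local/sbin/mdcmd spinup $disknum; done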

  5. Appreciate the detailed thread, Arched, and was hoping for a little help. Attempting to pass through the USB 3.0 controller on my ASRock B75 Pro3-M.

     

    Output of

    lspci | grep USB

     

    00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller (rev 04)
    00:1a.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 (rev 04)
    00:1d.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)

     

    Bus 004
    Bus 004
    Bus 003 - Lexar
    Bus 003
    Bus 003
    Bus 002
    Bus 001 - the bus my device is listed on

     

    Output of the readlink command:

     

    ../../../devices/pci0000:00/0000:00:14.0/usb1

     

    I have run /usr/local/sbin/vfio-bind 0000:00:14.0 and confirmed Bus 001 no longer appears when I run lsusb

    (have not added it to the go file yet, waiting to confirm it is working fine first)

     

    Copy of the XML:

     

    <domain type='kvm'>
      <name>rTorrent</name>
      <uuid>a8d7f9a9-59a3-91a3-1ee0-19de892dd49f</uuid>
      <description>Torrent + VPN</description>
      <metadata>
        <vmtemplate name="Custom" icon="windows7.png" os="windows7"/>
      </metadata>
      <memory unit='KiB'>4194304</memory>
      <currentMemory unit='KiB'>4194304</currentMemory>
      <memoryBacking>
        <nosharepages/>
        <locked/>
      </memoryBacking>
      <vcpu placement='static'>1</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='0'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
        </hyperv>
      </features>
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='1' threads='1'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/mnt/cache/appdata/virtual-machines/ruTorrent/vdisk1.qcow2'/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </disk>
        <controller type='usb' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pci-root'/>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:56:90:1b'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/rTorrent.org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'/>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
          <listen type='address' address='0.0.0.0'/>
        </graphics>
        <video>
          <model type='vmvga' vram='16384' heads='1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </video>
        <memballoon model='virtio'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </memballoon>
      </devices>
        <qemu:commandline> 
          <qemu:arg value='-device'/>
          <qemu:arg value='vfio-pci,host=00:14.0,bus=root.1,addr=00.1'/>
        </qemu:commandline>
    </domain>

     

    I have tried addr=00.0 as well.

     

    When starting the VM I get the following error:

     

    Error: internal error: early end of file from monitor: possible problem:

    2015-07-29T19:38:56.073222Z qemu-system-x86_64: -device vfio-pci,host=00:14.0,bus=root.1,addr=00.1: Bus 'root.1' not found
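
    For anyone else who lands here: on the pc-i440fx machine type the root PCI bus is named pci.0, and no bus called root.1 exists unless you create one (PCIe root ports like ioh3420 are really a Q35 concept), hence the "Bus 'root.1' not found" error. A sketch of the stanza pointed at the existing root bus instead; the addr is just an example, pick a slot not already used in the XML above:

    <qemu:commandline>
      <qemu:arg value='-device'/>
      <qemu:arg value='vfio-pci,host=00:14.0,bus=pci.0,addr=0x7'/>
    </qemu:commandline>

    Note that libvirt only honours <qemu:commandline> when the root element declares the QEMU namespace, i.e. <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>.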

  6. I'm using

     

    for disknum in 0 `ls /dev/md* | sed "sX/dev/mdXX"`; do /root/mdcmd spindown $disknum; done

     

    on wake-up to reset drive status; otherwise sleep does not work for me. It's been like this since halfway through the beta cycle. I was previously using spinup, but either works to get sleep functioning like normal after waking.

  7. I'm having a problem with the Calibre Docker: I have passed through an additional path, /books, which is mapped to /mnt/user/books/.

     

    During the setup wizard I set my book library to /books, but it is not saving the setting. If I stop and start the Docker it reverts to /config.
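
    For reference, that mapping expressed as the equivalent docker run flags (host path on the left of each colon, container path on the right; the image name is a placeholder, use whichever Calibre image you run):

    docker run -d --name=calibre \
      -v /mnt/user/appdata/calibre:/config \
      -v /mnt/user/books:/books \
      some-repo/calibre  # placeholder image name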

  8. Hi all,

     

    I have a problem with a never-sleeping parity drive. The Dynamix S3 Sleep plugin is now in debug mode and is checking the status every minute:

    Stitch s3_sleep: Disk activity on going: sdb
    May 17 10:41:04 Stitch s3_sleep: Disk activity detected. Reset timers.

    It's a Seagate drive and it was sleeping under unRAID 5.0.6: ST3000DM001-9YN166_Z1F1A22L

    The spin-down delay is already set manually to 1 hour, not the default.

     

    Does anyone have an idea? Is it correct to have 3 different spin-up groups for 3 drives?

     

    This is a known issue and won't be getting fixed; workaround here: http://lime-technology.com/forum/index.php?topic=39355.0

  9. How does s3_sleep determine disk activity? The reason I'm curious is that I'm experiencing an issue where, if all my drives are spun down but any one of them is still reporting a temperature reading in the GUI, the s3_sleep log reports disk activity for that drive. As far as I can tell there is no actual disk activity, since the drive is spun down. To resolve the issue I have to spin up the affected drive manually, and once it spins down again it stops reporting temperature. (My guess at how the detection works is sketched after the links below.)

     

    http://lime-technology.com/forum/index.php?topic=39355.0

    http://lime-technology.com/forum/index.php?topic=39145.0
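
    I haven't read the plugin source, so this is purely a guess at the mechanism, but a common way to detect activity is to diff the kernel's I/O counters between two samples, something like:

    # snapshot the read/write completion counters for every block device
    awk '{print $3, $4, $8}' /proc/diskstats > /tmp/diskstats.1
    sleep 60
    awk '{print $3, $4, $8}' /proc/diskstats > /tmp/diskstats.2
    # any difference between the two samples counts as disk activity
    cmp -s /tmp/diskstats.1 /tmp/diskstats.2 || echo "Disk activity detected. Reset timers."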

  10. I created a new Docker image and re-installed all my Docker programs into it. However, there is a persistent problem I have been unable to solve. Under templates, there is a list of user-created templates, but there are no other templates, even though I have gfjardim's repo saved under templates. Nothing. Any idea how to make those templates magically appear?

     

    I'm having the same issue; I have three repos and none of the templates are loading. Reported in the beta14 thread.

  11. I'm having an issue with the S3 sleep plugin whereby, on boot, the included/excluded disks are not being set correctly. Once the server has booted I have to go into the S3 sleep settings page and toggle exclude "yes, except cache" off and then back on again to have the disk settings set correctly.

     

    Some information is included at http://lime-technology.com/forum/index.php?topic=37598.0, along with a separate issue I'm having.

     

    I'll have a look.

     

    I just noticed the updated s3_sleep plugin; I can confirm that this resolves my issue with excluded/included disks on startup.

     

    Thank you!
