ryan8382

Everything posted by ryan8382

  1. I'm trying to run a VM with PlayOn specifically. When I try to run a recording, it errors out. I contacted PlayOn support, and they indicated that the CPU goes to 100%. I have added more CPUs, but it keeps failing. The only thing I can think of is to change the affinity settings of everything. Has anyone gotten this to run?
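     [Editor's note: if pinning is the route to try, libvirt (which Unraid's VM manager sits on) takes affinity in the domain XML via <cputune>. A minimal sketch, assuming a 2-vCPU guest; host cores 2 and 3 are placeholders, not values from the post:

     ```xml
     <!-- Hypothetical fragment of the VM's domain XML: pin each guest
          vCPU to a dedicated host core so PlayOn's encode load does not
          bounce across cores. Core numbers 2/3 are placeholders. -->
     <vcpu placement='static'>2</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='2'/>
       <vcpupin vcpu='1' cpuset='3'/>
     </cputune>
     ```

     Pinning only helps if the pinned cores are otherwise idle; adding vCPUs without pinning can make a 100%-CPU encode worse.]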
  2. I have tried through the app page. I checked the Docker Hub URL and it gets a 404 error. https://hub.docker.com/r/siwatinc/homebridge_gui_x_unraid/ I then went to https://hub.docker.com/r/siwatinc, clicked the homebridge_gui link, tried again, and got the following error:
     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='HomeBridgewithwebGUI' --net='host' -e TZ="America/Chicago" -e HOST_OS="Unraid" -e 'aptpackages'='ffmpeg' -e 'packages'='homebridge-pilight homebridge-info homebridge-wemo' -v '/mnt/user/appdata/homebridge':'/root/.homebridge':'rw' 'siwatinc/homebridge_gui_x_unraid'
     Unable to find image 'siwatinc/homebridge_gui_x_unraid:latest' locally
     /usr/bin/docker: Error response from daemon: pull access denied for siwatinc/homebridge_gui_x_unraid, repository does not exist or may require 'docker login'.
     See '/usr/bin/docker run --help'.
     The command failed.
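     [Editor's note: the daemon message leaves two possibilities — repository removed, or login now required. One way to tell them apart is to query Docker Hub's public v2 repositories API and look at the HTTP status. A sketch; the helper function and its wording are mine, not part of Unraid or Docker:

     ```shell
     #!/bin/sh
     # Map the HTTP status Docker Hub returns for a repository URL to a
     # human-readable verdict. (classify_status is a hypothetical helper.)
     classify_status() {
       case "$1" in
         200)     echo "repository exists" ;;
         404)     echo "repository gone or renamed" ;;
         401|403) echo "login required" ;;
         *)       echo "unexpected status: $1" ;;
       esac
     }

     # Usage against the repo from the error (needs network and curl):
     #   status=$(curl -s -o /dev/null -w '%{http_code}' \
     #     "https://hub.docker.com/v2/repositories/siwatinc/homebridge_gui_x_unraid/")
     #   classify_status "$status"
     classify_status 404
     ```

     A 404 here means the image was pulled from the Hub entirely, and no amount of `docker login` will bring the template back; the container would need to be repointed at a replacement image.]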
  3. When trying to use the HomeBridge GUI version, I'm getting the following error:
     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='HomeBridgewithwebGUI' --net='host' -e TZ="America/Chicago" -e HOST_OS="Unraid" -e 'aptpackages'='ffmpeg' -e 'packages'='homebridge-pilight homebridge-info homebridge-wemo' -v '/mnt/user/appdata/homebridge':'/root/.homebridge':'rw' 'siwatinc/homebridge_gui_x_unraid'
     Unable to find image 'siwatinc/homebridge_gui_x_unraid:latest' locally
     /usr/bin/docker: Error response from daemon: pull access denied for siwatinc/homebridge_gui_x_unraid, repository does not exist or may require 'docker login'.
     See '/usr/bin/docker run --help'.
     Is there a login for this repo now? Everything was working.
  4. So I have found some interesting things. If I plug in a new disk so that I can preclear it on the server, my cache drive drops. I have to stop the array, reboot, unplug the drive, then plug it back in, and then it comes back. This happens no matter which SATA port I choose. I'm only using 10 drives out of my 12. I think the error I was seeing was related to the USB having been used on my Mac when I set it up. I moved that file and that error went away. Not sure why it was trying to read the ._Plus.key. As for what is happening when I try to add a disk to preclear, I have no clue. I don't have any other issues. I do see this error: ata1.00: failed command: WRITE FPDMA QUEUED. The only thing I can find when searching is that it's related to a bad PSU. tower-diagnostics-20180313-1649.zip
  5. I was just replacing one of my cache drives when I ran into a weird issue. First I noticed that one of the drives was missing; I assumed it had died. So I plugged in the new drive, but it wasn't detected. I checked the logs and saw the line below:
     shfs: error: get_key_info, 584: Invalid argument (22): get_message: /boot/config/._Plus.key (-3)
     I forgot I had a drive slot set for an HD that wasn't there. Once I removed that, it saw the new SSD. I then went back and counted my drives. I have 9 HDs and 2 SSDs. That should only be 11 disks, right? The drive limit for Plus is 12 total? What am I missing? tower-diagnostics-20180306-2037.zip
  6. I have a pretty nice Unraid box and would like to do some VMware lab work. I would like to install some ESXi VMs to test setups. I was able to get 6.5 to run, but it doesn't detect any network adapters. What are the tips and tricks for getting that to run? Is an OVA import feature possibly coming?
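     [Editor's note: the usual report for nested ESXi on KVM is that the installer has no driver for the default virtio NIC, so the interface model has to be switched to one ESXi ships an inbox driver for. A hedged sketch of the domain XML stanza; br0 is a placeholder for the Unraid bridge, vmxnet3 is the commonly suggested model (e1000e is another candidate, depending on the QEMU build):

     ```xml
     <!-- Hypothetical <interface> stanza for the ESXi guest's domain XML:
          replace the virtio model, which the ESXi installer cannot see,
          with vmxnet3. 'br0' is a placeholder bridge name. -->
     <interface type='bridge'>
       <source bridge='br0'/>
       <model type='vmxnet3'/>
     </interface>
     ```

     The CPU section typically also needs host passthrough and nested virtualization enabled on the host for ESXi to boot at all.]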
  7. After doing that reboot and deleting the docker.img, it now comes up with Docker started. I'm adding a container now to see if it is working and will post the diag later. No clue as to what happened.
  8. I deleted docker.img and went to reboot, and got the following:
     Oct 19 11:35:35 Tower emhttp: Unmounting disks...
     Oct 19 11:35:35 Tower emhttp: shcmd (6865): umount /mnt/disk1 |& logger
     Oct 19 11:35:35 Tower root: umount: /mnt/disk1: target is busy
     Oct 19 11:35:35 Tower root: (In some cases useful info about processes that
     Oct 19 11:35:35 Tower root: use the device is found by lsof(8) or fuser(1).)
     Oct 19 11:35:35 Tower emhttp: shcmd (6866): umount /mnt/disk2 |& logger
     Oct 19 11:35:35 Tower root: umount: /mnt/disk2: mountpoint not found
     Oct 19 11:35:35 Tower emhttp: shcmd (6867): rmdir /mnt/disk2 |& logger
     Oct 19 11:35:35 Tower root: rmdir: failed to remove '/mnt/disk2': No such file or directory
     Oct 19 11:35:35 Tower emhttp: shcmd (6868): umount /mnt/disk3 |& logger
     Oct 19 11:35:35 Tower root: umount: /mnt/disk3: mountpoint not found
     Oct 19 11:35:35 Tower emhttp: shcmd (6869): rmdir /mnt/disk3 |& logger
     Oct 19 11:35:35 Tower root: rmdir: failed to remove '/mnt/disk3': No such file or directory
     Oct 19 11:35:35 Tower emhttp: shcmd (6870): umount /mnt/disk4 |& logger
     Oct 19 11:35:35 Tower root: umount: /mnt/disk4: mountpoint not found
     Oct 19 11:35:35 Tower emhttp: shcmd (6871): rmdir /mnt/disk4 |& logger
     Oct 19 11:35:35 Tower root: rmdir: failed to remove '/mnt/disk4': No such file or directory
     Oct 19 11:35:35 Tower emhttp: shcmd (6872): umount /mnt/disk5 |& logger
     Oct 19 11:35:35 Tower root: umount: /mnt/disk5: mountpoint not found
     Oct 19 11:35:35 Tower emhttp: shcmd (6873): rmdir /mnt/disk5 |& logger
     Oct 19 11:35:35 Tower root: rmdir: failed to remove '/mnt/disk5': No such file or directory
     Oct 19 11:35:35 Tower emhttp: shcmd (6874): umount /mnt/disk6 |& logger
     Oct 19 11:35:35 Tower root: umount: /mnt/disk6: mountpoint not found
     Oct 19 11:35:35 Tower emhttp: shcmd (6875): rmdir /mnt/disk6 |& logger
     Oct 19 11:35:35 Tower root: rmdir: failed to remove '/mnt/disk6': No such file or directory
     Oct 19 11:35:35 Tower emhttp: shcmd (6876): umount /mnt/disk7 |& logger
     Oct 19 11:35:35 Tower root: umount: /mnt/disk7: mountpoint not found
     Oct 19 11:35:35 Tower emhttp: shcmd (6877): rmdir /mnt/disk7 |& logger
     Oct 19 11:35:35 Tower root: rmdir: failed to remove '/mnt/disk7': No such file or directory
     Oct 19 11:35:35 Tower emhttp: shcmd (6878): umount /mnt/cache |& logger
     Oct 19 11:35:35 Tower root: umount: /mnt/cache: target is busy
     Oct 19 11:35:35 Tower root: (In some cases useful info about processes that
     Oct 19 11:35:35 Tower root: use the device is found by lsof(8) or fuser(1).)
     Oct 19 11:35:35 Tower emhttp: Retry unmounting disk share(s)...
     I have had to do a shutdown -r now a few times. Not sure if this is related.
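     [Editor's note: only the "target is busy" mounts (/mnt/disk1 and /mnt/cache here) are blocking the stop; the "mountpoint not found" lines are harmless. A small sketch that pulls the busy mount points out of a syslog excerpt so they can be handed to fuser/lsof; the function name is mine, not an Unraid tool:

     ```shell
     #!/bin/sh
     # Read a syslog excerpt on stdin and print the mount points that
     # failed to unmount with "target is busy". (busy_mounts is a
     # hypothetical helper name.)
     busy_mounts() {
       grep -o 'umount: [^:]*: target is busy' |
         sed 's/^umount: //; s/: target is busy$//'
     }

     # Usage: busy_mounts < /var/log/syslog
     # Each printed path can then be inspected with, e.g.:
     #   fuser -vm /mnt/disk1    # or: lsof +f -- /mnt/disk1
     ```

     Whatever process those commands show (often a container or shell sitting in the mount) is what has to exit before the array can stop cleanly.]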
  9. Same error on Disk1:
     Oct 19 10:46:28 Tower emhttp: shcmd (4941): /etc/rc.d/rc.docker start |& logger
     Oct 19 10:46:28 Tower root: starting docker ...
     Oct 19 10:47:01 Tower emhttp: shcmd (4959): set -o pipefail ; /usr/local/sbin/mount_image '/mnt/disk1/docker.img' /var/lib/docker 40 |& logger
     Oct 19 10:47:01 Tower root: /mnt/disk1/docker.img is in-use, cannot mount
     Oct 19 10:47:01 Tower emhttp: err: shcmd: shcmd (4959): exit status: 1
     I'm going to guess my next option is to roll back to the previous version?
  10. After upgrading to 6.2.1 and swapping out a drive, Docker won't start. My docker image is directly on the cache, so it wasn't associated with the drive being swapped. This is what shows in the logs:
     Oct 19 09:23:01 Tower emhttp: shcmd (3242): set -o pipefail ; /usr/local/sbin/mount_image '/mnt/cache/docker.img' /var/lib/docker 40 |& logger
     Oct 19 09:23:01 Tower root: /mnt/cache/docker.img is in-use, cannot mount
     Oct 19 09:23:01 Tower emhttp: err: shcmd: shcmd (3242): exit status: 1
     Oct 19 09:24:02 Tower emhttp: shcmd (3259): set -o pipefail ; /usr/local/sbin/mount_image '/mnt/cache/docker.img' /var/lib/docker 40 |& logger
     Oct 19 09:24:02 Tower root: /mnt/cache/docker.img is in-use, cannot mount
     Oct 19 09:24:02 Tower emhttp: err: shcmd: shcmd (3259): exit status: 1
     tower-diagnostics-20161019-0914.zip
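     [Editor's note: "is in-use, cannot mount" typically means a loop device is still attached to docker.img from the previous mount; `losetup -a` lists which one. A sketch of filtering that output; the function name is mine:

     ```shell
     #!/bin/sh
     # Given `losetup -a` output on stdin, print the loop device(s) still
     # backed by docker.img. (find_docker_loop is a hypothetical helper.)
     find_docker_loop() {
       grep 'docker\.img' | cut -d: -f1
     }

     # Usage (on the server):
     #   losetup -a | find_docker_loop
     # A stale device found this way can be detached with
     # `losetup -d /dev/loopN`, or is cleared by a clean reboot.
     ```

     This matches the outcome in the later posts: after a reboot plus deleting docker.img, the stale attachment was gone and Docker started.]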
  11. I have already removed that container. I'm going to work on that one. I have a limit set up on my cache pool, if that is what it is referring to. I think it is hitting the limit on the pool; I would assume the mover would kick in. Going to look at that this weekend, I hope. I have an AOC-SAS2LP-MV8 card in there; I'm not sure if those drives are on it. I will check everything on the system. Thanks for pointing that out.
     Quote: "This tends to indicate a remnant of an older version still there, which can cause problems." ... "At the end of the 6.1.9 syslog, there is a failure of TimeMachine and AFP."
     With those other issues, I'm thinking of backing up the config-related items and basically doing a fresh install. I could try 6.2 again and see how it goes. Thanks for the help. I will update the post when I have more detail.
  12. It did complete in a reasonable time frame. It was a full rebuild. I did a new config setup thinking that it might have been related. I think I'm going to hold off for the next release before I try again. The wife didn't like the server being down. Sent from my iPhone using Tapatalk
  13. Yes it finally finished. Sent from my iPhone using Tapatalk
  14. Figured I would update this. My server is now working, running 6.1.9. I hope you guys can figure out what the bug is in the beta. Sent from my iPhone using Tapatalk
  15. I was seeing mine stay up from about 40 min to 2 hrs. It responds to pings, but the webUI is sluggish and Docker doesn't work. The VM also just stops. One time I was using it when it quit. Sent from my iPhone using Tapatalk
  16. Try downgrading from the beta to 6.1.9. My server appears to be working again at that level. I hope they find something in the logs; something isn't right.
  17. I just noticed that someone else is complaining about something very similar to mine. https://lime-technology.com/forum/index.php?topic=48360.0 I have now downgraded to 6.1.9 and am seeing the exact same behavior that I was seeing on the latest beta. I'm still formulating my plan for how to attack this. A software fix would be great, if it is software. Attached are the logs. I also noticed that the disks are all spun up, but the parity sync time is now at 66 days, 23 hours, 18 minutes. I hope it doesn't take that long. tower-diagnostics-20160412-1644.zip
  18. Hey, I'm seeing the same thing. I upgraded from 6.1.9 and jumped on the beta train at 6.2 beta 18. Over the weekend I upgraded to beta 21, and that is when I started to see the same thing. https://lime-technology.com/forum/index.php?topic=48326.0
  19. Here is another diag. The parity rebuild is showing a crazy amount of time. All disks are spun up. I'm going to try to get back on Beta 20. tower-diagnostics-20160411-1553.zip
  20. Sorry for not adding the Diag stuff. tower-diagnostics-20160411-1155.zip
  21. So after a reboot, the server was responsive for about 2 hrs. Now it responds to pings, but the webUI is slow. I tried to stop the array and it's stuck. The syslog shows:
     Apr 11 08:59:46 Tower kernel: mdcmd (44): nocheck
     Apr 11 08:59:46 Tower kernel: md: nocheck_array: check not active
     Apr 11 08:59:46 Tower emhttp: shcmd (1890): /etc/rc.d/rc.libvirt stop |& logger
     Nothing else follows. It's been that way for a good 20 minutes.
  22. Over the weekend I upgraded from Beta 20 to Beta 21. I also removed a drive that didn't have any data on it and replaced it with what I will use as my 2nd parity drive. Ever since I did this, the server has randomly stopped responding. I'm not sure how long it is taking for this to happen. The majority of the drives are on the onboard SATA ports. I do know the 2nd parity drive is on the Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s RAID bus controller (rev c3) card. I'm seeing this in the syslog:
     Apr 11 04:30:37 Tower kernel: sas: Enter sas_scsi_recover_host busy: 0 failed: 0
     Apr 11 04:30:37 Tower kernel: sas: ata7: end_device-5:0: dev error handler
     Apr 11 04:30:37 Tower kernel: ata7.00: ATA-9: WDC WD40EFRX-68WT0N0, WD-WCC4E0ETNU17, 82.00A82, max UDMA/133
     Apr 11 04:30:37 Tower kernel: ata7.00: 7814037168 sectors, multi 0: LBA48 NCQ (depth 31/32)
     Apr 11 04:30:37 Tower kernel: ata7.00: configured for UDMA/133
     Apr 11 04:30:37 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 0 tries: 1
     Apr 11 04:30:37 Tower kernel: scsi 5:0:0:0: Direct-Access ATA WDC WD40EFRX-68W 0A82 PQ: 0 ANSI: 5
     Apr 11 04:30:37 Tower kernel: sd 5:0:0:0: [sdh] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
     Apr 11 04:30:37 Tower kernel: sd 5:0:0:0: Attached scsi generic sg7 type 0
     Apr 11 04:30:37 Tower kernel: sas: Enter sas_scsi_recover_host busy: 0 failed: 0
     Apr 11 04:30:37 Tower kernel: sas: ata7: end_device-5:0: dev error handler
     Apr 11 04:30:37 Tower kernel: sd 5:0:0:0: [sdh] 4096-byte physical blocks
     Apr 11 04:30:37 Tower kernel: sas: ata8: end_device-5:1: dev error handler
     Apr 11 04:30:37 Tower kernel: ata8.00: ATA-9: WDC WD30EFRX-68AX9N0, WD-WMC1T0378623, 80.00A80, max UDMA/133
     Apr 11 04:30:37 Tower kernel: ata8.00: 5860533168 sectors, multi 0: LBA48 NCQ (depth 31/32)
     Apr 11 04:30:37 Tower kernel: ata8.00: configured for UDMA/133
     Apr 11 04:30:37 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 0 tries: 1
     Apr 11 04:30:37 Tower kernel: sd 5:0:0:0: [sdh] Write Protect is off
     Apr 11 04:30:37 Tower kernel: sd 5:0:0:0: [sdh] Mode Sense: 00 3a 00 00
     Apr 11 04:30:37 Tower kernel: sd 5:0:0:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     Apr 11 04:30:37 Tower kernel: scsi 5:0:1:0: Direct-Access ATA WDC WD30EFRX-68A 0A80 PQ: 0 ANSI: 5
     Apr 11 04:30:37 Tower kernel: sd 5:0:1:0: [sdi] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)
     Apr 11 04:30:37 Tower kernel: sd 5:0:1:0: Attached scsi generic sg8 type 0
     Apr 11 04:30:37 Tower kernel: sd 5:0:1:0: [sdi] 4096-byte physical blocks
     Apr 11 04:30:37 Tower kernel: sas: Enter sas_scsi_recover_host busy: 0 failed: 0
     Apr 11 04:30:37 Tower kernel: sas: ata7: end_device-5:0: dev error handler
     Apr 11 04:30:37 Tower kernel: sas: ata8: end_device-5:1: dev error handler
     Apr 11 04:30:37 Tower kernel: sas: ata9: end_device-5:2: dev error handler
     Apr 11 04:30:37 Tower kernel: ata9.00: ATA-7: Hitachi HDT725050VLA360, VFD400R403SMYC, V56OA52A, max UDMA/133
     Apr 11 04:30:37 Tower kernel: ata9.00: 976773168 sectors, multi 0: LBA48 NCQ (depth 31/32)
     Apr 11 04:30:37 Tower kernel: ata9.00: configured for UDMA/133
     Apr 11 04:30:37 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 0 tries: 1
     Apr 11 04:30:37 Tower kernel: sd 5:0:1:0: [sdi] Write Protect is off
     Apr 11 04:30:37 Tower kernel: sd 5:0:1:0: [sdi] Mode Sense: 00 3a 00 00
     Apr 11 04:30:37 Tower kernel: sd 5:0:1:0: [sdi] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     Apr 11 04:30:37 Tower kernel: scsi 5:0:2:0: Direct-Access ATA Hitachi HDT72505 A52A PQ: 0 ANSI: 5
     Apr 11 04:30:37 Tower kernel: sd 5:0:2:0: [sdj] 976773168 512-byte logical blocks: (500 GB/466 GiB)
     Apr 11 04:30:37 Tower kernel: sd 5:0:2:0: Attached scsi generic sg9 type 0
     Apr 11 04:30:37 Tower kernel: sas: Enter sas_scsi_recover_host busy: 0 failed: 0
     Apr 11 04:30:37 Tower kernel: sas: ata7: end_device-5:0: dev error handler
     Apr 11 04:30:37 Tower kernel: sas: ata8: end_device-5:1: dev error handler
     Apr 11 04:30:37 Tower kernel: sas: ata9: end_device-5:2: dev error handler
     Apr 11 04:30:37 Tower kernel: sas: ata10: end_device-5:3: dev error handler
     Apr 11 04:30:37 Tower kernel: ata10.00: ATA-8: OCZ-AGILITY3, OCZ-S412FE6GEZ2441GM, 2.22, max UDMA/133
     Apr 11 04:30:37 Tower kernel: ata10.00: 234441648 sectors, multi 16: LBA48 NCQ (depth 31/32)
     Apr 11 04:30:37 Tower kernel: ata10.00: configured for UDMA/133
     Apr 11 04:30:37 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 0 tries: 1
     Apr 11 04:30:37 Tower kernel: sd 5:0:2:0: [sdj] Write Protect is off
     Apr 11 04:30:37 Tower kernel: sd 5:0:2:0: [sdj] Mode Sense: 00 3a 00 00
     Apr 11 04:30:37 Tower kernel: sd 5:0:2:0: [sdj] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     Apr 11 04:30:37 Tower kernel: sdh: sdh1
     Apr 11 04:30:37 Tower kernel: sd 5:0:0:0: [sdh] Attached SCSI disk
     Apr 11 04:30:37 Tower kernel: scsi 5:0:3:0: Direct-Access ATA OCZ-AGILITY3 2.22 PQ: 0 ANSI: 5
     Apr 11 04:30:37 Tower kernel: sd 5:0:3:0: [sdk] 234441648 512-byte logical blocks: (120 GB/112 GiB)
     Apr 11 04:30:37 Tower kernel: sd 5:0:3:0: Attached scsi generic sg10 type 0
     Apr 11 04:30:37 Tower kernel: sd 5:0:3:0: [sdk] Write Protect is off
     Apr 11 04:30:37 Tower kernel: sd 5:0:3:0: [sdk] Mode Sense: 00 3a 00 00
     Apr 11 04:30:37 Tower kernel: sd 5:0:3:0: [sdk] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     Apr 11 04:30:37 Tower kernel: sdk: sdk1
     Apr 11 04:30:37 Tower kernel: sd 5:0:3:0: [sdk] Attached SCSI disk
     Apr 11 04:30:37 Tower kernel: sdj: sdj1
     Apr 11 04:30:37 Tower kernel: sd 5:0:2:0: [sdj] Attached SCSI disk
     Apr 11 04:30:37 Tower kernel: sdi: sdi1
     Apr 11 04:30:37 Tower kernel: sd 5:0:1:0: [sdi] Attached SCSI disk
     Apr 11 04:30:37 Tower kernel: BTRFS: device fsid 39df29e1-dd7e-48dd-8c0e-ff4f9d9353c1 devid 2 transid 1060577 /dev/sdk1
     I already tried telling the disks not to spin down, thinking that it might be related to the disks not spinning back up in a timely manner, but that wasn't it. I ran out of time, but tonight I was going to try resetting the config and removing the 2nd parity drive to see if the server does the same thing. I think that might be the simplest step to try. Please let me know if there are any other logs that you want, or other things to try.
  23. Reading the release notes, it says that, yes, nested VMs work. I have tried a bunch of different settings for the VM, and each time I get a PSOD in the VMware 5.5 Update 3 installer. I'm guessing it's something dumb, like not knowing all the template settings. Maybe I overlooked the post that has the settings. Either way, can we get a template for ESX?
  24. I agree that this operation only takes a few minutes, but when I have time to do it, people are accessing resources on the array. So instead of scheduling the array to do it, I have to schedule another time for myself to do it. I agree the preclear script needs to be there first, but this would be a great addition.