bennymundz

Everything posted by bennymundz

  1. Hi Unraid team, I had a disk die (disk 4 in my array). While I was in the process of replacing that dead data disk, a second one (data disk 1) decided to also give up. I pulled both out of the Unraid box and connected them to a USB-to-SATA adapter to check how dead they were with some tools. One is completely dead and won't even detect; however, the second (more recent) failure was actually readable via Paragon through Windows (she isn't healthy though). My question is: is there a way to force Unraid to believe data disk 1 is okay, so I can use parity to rebuild data disk 4, and then once that is rebuilt, use parity to rebuild data disk 1 onto a new disk? Data disk 4 simply won't detect, so no SMART reports. Data disk 1 fails a SMART test at either 10% or 90%. I run one parity disk and both drive failures are on data disks. The parity drive is good.
  2. Sorry, my fault, I should have explained... It just hangs at the last line and never progresses. I am going to look into what is in PCI slot 9 where it hangs; clearly that card doesn't want to play nice. I'll work backwards from there today. It's something up with the Linux 6.1.x kernel builds in those later versions.
  3. 6.12.0 and 6.12.1 both killed my Unraid box. I manually downgraded to 6.11.5 and am back up and running fine. I tried disabling power management in the BIOS and upgrading to the latest BIOS, to no avail.
  4. Binhex Sonarr/Radarr along with UniFi Video, UNMS, and the UniFi Controller were all present.
  5. VMs stable after one week. The corrupted Docker containers, which themselves all ran without issue, were the problem. The containers were deleted and everything started working as expected.
  6. @jonp I will do. I've successfully had 3 VMs running for 2 days now; before deleting those containers I would be lucky to get 2 hours out of them. After a week I will consider this resolved. For now I'm putting this down to corrupted Docker config causing pain, perhaps locking resources and causing KVM to kill the VMs.
  7. I had a similar issue. It looks like some Docker containers were causing a headache for me. If you want to give it a try, I'd recommend deleting all your containers and their associated configs, then running your VMs and seeing how it goes. Be sure to make any necessary backups of your Docker configs before you delete them.
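     For anyone following along, the backup step above can be sketched like this. This is a minimal example, not an official procedure; on a stock Unraid box the container configs usually live under /mnt/user/appdata (an assumption about your layout), and the demo below uses temp directories so the commands can be tried anywhere:

     ```shell
     # Demo of backing up Docker config/appdata before deleting containers.
     # Uses temp dirs so it can run anywhere; on a real Unraid box you would
     # point APPDATA at /mnt/user/appdata (assumption: default share layout).
     APPDATA="$(mktemp -d)"
     BACKUP="$(mktemp -d)/appdata-$(date +%Y%m%d)"
     echo "some container config" > "$APPDATA/settings.conf"

     mkdir -p "$BACKUP"
     rsync -a "$APPDATA/" "$BACKUP/"   # archive mode preserves perms/timestamps
     ```

     Once the copy is verified, the containers and their configs can be removed and recreated from scratch.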
  8. @jonp - I will do that and report back. However, I think I might have fixed the issue: there were some containers which were working fine but for some reason could not be updated (I realise my issue was with VMs). I ditched all the containers and deleted the Docker configs, and the VMs have all been stable. I did notice the containers were using an unusual amount of CPU, which led me to trash all of them; perhaps the container configs were corrupted.
  9. @John_M mrblack-diagnostics-20190305-2219.zip New diags with log files populated. Please let me know if you need anything else.
  10. @John_M that's weird, I will do that and post back. Thanks.
  11. I'm in exactly the same position: my Unraid box was fine running multiple VMs for months, then I upgraded and now no VM runs longer than 24 hrs.
  12. I've since updated back to 6.7 RC5 trying to fix this, to no avail. I have attempted the following:
      - deleted /mnt/user/system/libvirt/libvirt.img and let it be recreated, no resolution
      - increased the size of /mnt/user/system/libvirt/libvirt.img from 1 GB to 2 GB
      - recreated the VM XMLs
      - fully power-cycled my system
      Again this morning I woke up and a VM had crashed again. libvirt log:
      2019-03-04 13:00:00.362+0000: 6701: info : libvirt version: 4.10.0
      2019-03-04 13:00:00.362+0000: 6701: info : hostname: mrblack
      2019-03-04 13:00:00.362+0000: 6701: warning : qemuDomainObjTaint:7831 : Domain id=1 name='AMS01' uuid=61fa935c-ce3b-6c32-3dcd-cea3cece8ee1 is tainted: high-privileges
      2019-03-04 14:03:07.467+0000: 6697: error : qemuMonitorIO:718 : internal error: End of file from qemu monitor
      Would love it if anyone had any suggestions at all. After a VM crashes, until I reboot I get this error:
      Execution error - internal error: process exited while connecting to monitor: qemu: qemu_thread_create: Resource temporarily unavailable
      mrblack-syslog-20190304-2238.zip mrblack-diagnostics-20190304-2248.zip
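     A note for anyone hitting the same error: "Resource temporarily unavailable" from qemu_thread_create is pthread_create failing with EAGAIN, which usually points at a thread/process limit or memory pressure rather than libvirt itself. A few hedged places to look (standard Linux commands, not Unraid-specific; interpret the numbers for your own box):

     ```shell
     # Where to look when qemu_thread_create reports EAGAIN:
     ulimit -u                          # per-user process/thread limit in this shell
     ps -eLf | wc -l                    # total threads currently running system-wide
     cat /proc/sys/kernel/threads-max   # kernel's system-wide thread ceiling
     dmesg 2>/dev/null | grep -iE 'oom|out of memory' | tail -n 5   # OOM-killer hits, if any
     ```

     If the running thread count is anywhere near the limits, something (a runaway container, for instance) is leaking threads and starving KVM.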
  13. Hello all, I am hoping someone might be able to assist. I recently upgraded to the latest RC5 and noticed weirdness on my Unraid box, so I decided to downgrade back to the latest stable release, 6.6.7, to fix the issue. However, my problem is that my VMs are all still crashing. Until I did the upgrade everything had been running perfectly, no issues for 60+ days, humming along nicely; now I cannot even get 60 minutes before my VMs crash. This is the log in libvirt, though I don't specifically know what it means:
      2019-03-02 10:30:03.592+0000: 6591: info : libvirt version: 4.7.0
      2019-03-02 10:30:03.592+0000: 6591: info : hostname: mrblack
      2019-03-02 10:30:03.592+0000: 6591: warning : qemuDomainObjTaint:7640 : Domain id=1 name='UTD01' uuid=8ca2aaf4-c5ec-a8a3-774d-9fde82c3d944 is tainted: high-privileges
      2019-03-02 10:30:03.592+0000: 6591: warning : qemuDomainObjTaint:7640 : Domain id=1 name='UTD01' uuid=8ca2aaf4-c5ec-a8a3-774d-9fde82c3d944 is tainted: host-cpu
      2019-03-02 10:30:03.780+0000: 6591: warning : qemuDomainObjTaint:7640 : Domain id=2 name='AMS01' uuid=d80df609-ca7b-33e2-ab90-59de51a176af is tainted: high-privileges
      2019-03-02 10:30:03.992+0000: 6591: warning : qemuDomainObjTaint:7640 : Domain id=3 name='DLH01' uuid=c02d5c00-ad2c-c6e9-6be2-ca553682a971 is tainted: high-privileges
      2019-03-02 10:57:37.319+0000: 6575: error : qemuMonitorIO:718 : internal error: End of file from qemu monitor
      2019-03-02 11:33:18.005+0000: 6575: error : qemuMonitorIO:718 : internal error: End of file from qemu monitor
      Really hoping someone can point me in the right direction to fix this annoying issue. Thanks
  14. Oh my, this would be insanely annoying to me if it were implemented... But who puts their Unraid box on the open internetz, haha, seems like a rookie error to me. Spin up a jump box and SSH to that, or, as someone else said, use a VPN.
  15. Changed Status to Solved
  16. Classic case of not RTFM 🙂 - Fixed
  17. I ran the upgrade and my Unraid box boots with the following error, resulting in the web GUI failing to load (pic attached). Loading in safe mode, all VMs and Docker containers load fine without issue, as I would expect. I guess I have some legacy plugin (I've been with Unraid for many years) which is breaking things for me, but I'm not sure which, or what to do to fix it. I have added my plugins directory from /boot/config/plugins
  18. Gee whizz, people are creative. My attempt would have been along the lines of the OP's, but I would have forgotten to use colour. Great work to all the submitters, you are really skilful.
  19. Hi Tom, are there still plans to upgrade to NFSv4? There have been a few RC releases since this post, but I see NFSv3 is still the implemented option.
  20. I asked this in the general support forum without realising that there is a thread for this plugin. Can someone please tell me how I would go about assigning an NFS share to a disk which is mounted by Unassigned Devices? The disk gets mounted and an SMB share gets created, but I would like to assign an NFS share to it. I would appreciate any guidance / assistance provided.
  21. Hello all, I know this is most likely pretty simple and I'm missing something here, however I can't figure it out and am hoping someone might be able to point me in the right direction. I have a disk which I want to use for CCTV footage, kept outside of my array. It is being mounted at /mnt/video. I would now like to have a disk share created and assigned to /mnt/video via NFS. I see that an SMB share has already been created by Unassigned Devices, but for the purposes of this exercise I want to use NFS. I tried creating an NFS share under Unassigned Devices and assigning it the mount point /mnt/video, but that didn't seem to work, so it's back to the drawing board. Can someone please tell me what I'm missing here?
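     In case a manual route is acceptable while waiting for an answer: outside of Unassigned Devices, a classic NFS export is just an /etc/exports line plus a re-export. This is a sketch under stated assumptions - that the disk is already mounted at /mnt/video and that clients sit on a 192.168.1.0/24 LAN (both are placeholders for your setup). Note also that Unraid's root filesystem lives in RAM, so such a change would need re-applying at boot, e.g. from the go file:

     ```shell
     # Hypothetical manual NFS export for the Unassigned Devices mount point.
     # Assumes the disk is mounted at /mnt/video and clients are on
     # 192.168.1.0/24 - adjust both for your network.
     echo '/mnt/video 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
     exportfs -ra             # re-read /etc/exports and apply the exports
     showmount -e localhost   # confirm the export is now visible
     ```

     A client would then mount it with something like `mount -t nfs server:/mnt/video /mnt/cctv`.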
  22. I got it working by changing the following - ports changed in bright green:
      >shinobicctv/shinobi</Repository>
      <Registry>https://hub.docker.com/r/shinobicctv/shinobi/~/dockerfile/</Registry>
      <Network>bridge</Network>
      <Privileged>false</Privileged>
      <Support>https://hub.docker.com/r/shinobicctv/shinobi/</Support>
      <Overview>Streams by WebSocket, and Save to WebM. Shinobi can record IP Cameras and Local Cameras. &amp;lt;br /&amp;gt; &amp;lt;br /&amp;gt;&#xD; &#xD; If you used the default databse the login credentials for the WebUI are: &amp;lt;br /&amp;gt;&#xD; Username : [email protected] &amp;lt;br /&amp;gt;&#xD; Password : password &amp;lt;br /&amp;gt; &amp;lt;br /&amp;gt;&#xD; &#xD; To change these credentials after logged in. &amp;lt;br /&amp;gt;&#xD; &#xD; 1. Login &amp;lt;br /&amp;gt;&#xD; 2. Click user email located at the top right of the dashboard. &amp;lt;br /&amp;gt;&#xD; 3. Open Settings. &amp;lt;br /&amp;gt;&#xD; 4. Change details to your liking. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&#xD; [b]Converted By Community Applications[/b] </Overview>
      <Category/>
      <WebUI>http://[IP]:[PORT:8083]</WebUI>
      <TemplateURL/>
      <Icon>https://pbs.twimg.com/profile_images/850880216666849280/c2iECrEE.jpg</Icon>
      <ExtraParams/>
      <DateInstalled>1495465372</DateInstalled>
      <Description>Streams by WebSocket, and Save to WebM. Shinobi can record IP Cameras and Local Cameras. &amp;lt;br /&amp;gt; &amp;lt;br /&amp;gt;&#xD; &#xD; If you used the default databse the login credentials for the WebUI are: &amp;lt;br /&amp;gt;&#xD; Username : [email protected] &amp;lt;br /&amp;gt;&#xD; Password : password &amp;lt;br /&amp;gt; &amp;lt;br /&amp;gt;&#xD; &#xD; To change these credentials after logged in. &amp;lt;br /&amp;gt;&#xD; &#xD; 1. Login &amp;lt;br /&amp;gt;&#xD; 2. Click user email located at the top right of the dashboard. &amp;lt;br /&amp;gt;&#xD; 3. Open Settings. &amp;lt;br /&amp;gt;&#xD; 4. Change details to your liking. &amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&#xD; [b]Converted By Community Applications[/b] </Description>
      <Networking>
        <Mode>bridge</Mode>
        <Publish>
          <Port>
            <HostPort>8080</HostPort>
            <ContainerPort>8080</ContainerPort>
            <Protocol>tcp</Protocol>
          </Port>
          <Port>
            <HostPort>3314</HostPort>
            <ContainerPort>3314</ContainerPort>
            <Protocol>tcp</Protocol>
          </Port>
        </Publish>
      </Networking>
      <Data>
        <Volume>
          <HostDir>/mnt/user/videos/</HostDir>
          <ContainerDir>/opt/shinobi/videos</ContainerDir>
          <Mode>rw</Mode>
        </Volume>
        <Volume>
          <HostDir>/mnt/user/appdata/shinobi</HostDir>
          <ContainerDir>/var/log/mysql/</ContainerDir>
          <Mode>rw</Mode>
        </Volume>
      </Data>
      <Environment/>
      <Config Name="Host Path 1" Target="/opt/shinobi/videos" Default="/opt/shinobi/videos" Mode="rw" Description="Container Path: /opt/shinobi/videos" Type="Path" Display="always" Required="true" Mask="false">/mnt/user/videos/</Config>
      <Config Name="WebUI port" Target="8080" Default="" Mode="tcp" Description="Container Port: 8080" Type="Port" Display="always" Required="true" Mask="false">8080</Config>
      <Config Name="Port" Target="3314" Default="" Mode="tcp" Description="Container Port: 3314" Type="Port" Display="always" Required="true" Mask="false">3314</Config>
      <Config Name="MySQL logs" Target="/var/log/mysql/" Default="/mnt/user/appdata/shinobi" Mode="rw" Description="Container Path: /var/log/mysql/" Type="Path" Display="always" Required="false" Mask="false">/mnt/user/appdata/shinobi</Config>
    </Container>
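      For anyone who would rather not hand-edit template XML, the same settings as that template can be expressed as a plain docker run. The ports, volumes and network mode are copied straight from the XML; the container name `shinobi` is my own arbitrary choice:

      ```shell
      # docker run equivalent of the template above (name "shinobi" is arbitrary).
      docker run -d --name shinobi \
        --network bridge \
        -p 8080:8080/tcp \
        -p 3314:3314/tcp \
        -v /mnt/user/videos/:/opt/shinobi/videos:rw \
        -v /mnt/user/appdata/shinobi:/var/log/mysql/:rw \
        shinobicctv/shinobi
      ```

      Running it this way first is a quick sanity check that the image and port mappings work before committing them to a template.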
  23. Any updates on this? I would also love to run Shinobi as a Docker container.
  24. Agree with the first point, that's for sure.