acbaldwi
Everything posted by acbaldwi
-
Thanks!
-
Hello all, I'm getting a predictive failure warning on an SSD I'm using for VMs. Can someone take a look and tell me if I should be replacing it ASAP, and also whether I have the pool configured correctly so that if it does fail the data is on both drives? TIA argos-diagnostics-20211004-1638.zip
-
SMART report attached. The drive is a little old and this warning just popped up this morning; should I be planning to replace this disk, or am I good to go for a bit? The drive is in a Supermicro 24-bay server, so there are no SATA cables to reattach; I have reseated it in the backplane (seemed fine). TIA argos-smart-20210525-0753.zip
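Since the attached report itself can't be opened here, a generic way to watch the attributes that usually drive a "predictive failure" call is to pull their raw values out of `smartctl -A` output (smartctl is the standard smartmontools CLI; the `smart_raw` helper name and the `/dev/sdb` path are my own illustrative assumptions):

```shell
# smart_raw NAME: read `smartctl -A` output on stdin and print the raw value
# (last column) of the named SMART attribute, e.g. Reallocated_Sector_Ct.
smart_raw() {
    awk -v name="$1" '$2 == name { print $NF }'
}

# Typical use on the suspect drive (device path is an assumption):
#   smartctl -A /dev/sdb | smart_raw Reallocated_Sector_Ct
#   smartctl -A /dev/sdb | smart_raw Current_Pending_Sector
```

If those raw counts are climbing between checks, replacing the drive sooner rather than later is the usual advice.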
-
Good evening. Let me first start by saying this is a ton of work and it's really appreciated. After installing the various bits and bobs, I've been unable to get most of my panels working, and wondered if you knew where I went wrong (I'm sure it's me, because it's working for everyone else...). Any help would be greatly appreciated.
-
I have a funny feeling that may be the root of all my current evils. Though I have the data share set to use the cache and then write to the array when it fills, it appears it is not doing so, and thus fills and kills. I guess for now I'll make it download directly to the array and see if that helps... seems like a waste of 2x 1TB NVMe cache drives, lol.
-
Thanks again. Here is radarr:

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='binhex-radarr1' --net='bridge' -e TZ="America/Denver" -e HOST_OS="Unraid" -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '9878:7878/tcp' -v '/mnt/user/data/':'/data':'rw' -v '/mnt/user/appdata/binhex-radarr':'/config':'rw' 'binhex/arch-radarr'
e20cc14ece897e91a340f8510d4b4eed921de5252b1e065b1480f23d22df8e08
The command finished successfully!

Here is sabnzbd:

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='binhex-sabnzbd1' --net='bridge' -e TZ="America/Denver" -e HOST_OS="Unraid" -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '8080:8080/tcp' -p '8090:8090/tcp' -v '/mnt/user/data/usenet/':'/data/usenet':'rw' -v '/mnt/user/appdata/binhex-sabnzbd':'/config':'rw' 'binhex/arch-sabnzbd'
89c5f02a57a878eed0396ff5faae5c38643f914c2d6858b74e298e3df7ba0106

Here is Sonarr:

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='binhex-sonarr1' --net='bridge' -e TZ="America/Denver" -e HOST_OS="Unraid" -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '9989:8989/tcp' -p '9899:9897/tcp' -v '/mnt/user/data/':'/data':'rw' -v '/mnt/user/appdata/binhex-sonarr':'/config':'rw' 'binhex/arch-sonarr'
3375b940fc90f617b50ff41e9729d06dbbca3c89e14b591ae521c527b06e8788

Here is Plex:

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='plex21' --net='host' -e TZ="America/Denver" -e HOST_OS="Unraid" -e 'VERSION'='docker' -e 'NVIDIA_VISIBLE_DEVICES'='' -e 'TCP_PORT_32400'='32400' -e 'TCP_PORT_3005'='3005' -e 'TCP_PORT_8324'='8324' -e 'TCP_PORT_32469'='32469' -e 'UDP_PORT_1900'='1900' -e 'UDP_PORT_32410'='32410' -e 'UDP_PORT_32412'='32412' -e 'UDP_PORT_32413'='32413' -e 'UDP_PORT_32414'='32414' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/Movies/':'/movies':'rw' -v '/mnt/user/Tv/':'/tv':'rw' -v '/mnt/user/':'/music':'rw' -v '/mnt/user/Transcode/plextmp/':'/plextranscode':'rw' -v '/mnt/user/Movies_archive/':'/archivemovies':'rw' -v '/mnt/user/Tv_Archive/':'/archivetv':'rw' -v '/mnt/user/Home_Movies/':'/homemovies':'rw' -v '/mnt/user/TV_RECORDINGS/':'/recordedtv':'rw' -v '':'/transcode':'rw' -v '/mnt/user/data/media/':'/data':'rw' -v '/mnt/user/appdata/plex':'/config':'rw' 'linuxserver/plex'
d6079193cd6ce613cd3e2773a255852cd23a17dd84127fcb49a9108f1ca680c7
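When comparing setups like these, it helps to line up the host-to-container volume mappings of each container side by side (e.g. radarr maps /mnt/user/data/ to /data while Plex maps /mnt/user/data/media/ to /data). A small sketch, assuming POSIX shell and the quoting style of the `docker create` lines above (the `print_mounts` helper name is my own), that extracts those pairs from a saved command line:

```shell
# print_mounts: read a saved `docker create` command line on stdin and print
# each -v 'host':'container' mapping as "host -> container".
print_mounts() {
    grep -o "\-v '[^']*':'[^']*'" | sed "s/-v '//; s/':'/ -> /; s/'$//"
}

# Example (hypothetical fragment of a docker create line):
#   echo "-v '/mnt/user/data/':'/data':'rw'" | print_mounts
```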
-
So, quick update: I was able to get it to reboot gracefully (took like 15 minutes from issuing the shutdown until it rebooted). Since then I restarted it with only the Plex docker and ran a show for about 10 minutes before Plex froze. I then tried to stop Plex and it just "spins" and never actually stops. I see this in the log right now:

Apr 7 15:37:47 Argos nginx: 2021/04/07 15:37:47 [error] 10243#10243: *3831 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.147, server: , request: "POST /plugins/dynamix.docker.manager/include/Events.php HTTP/2.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "argos.local", referrer: "https://argos.local/Docker"

but I think that's just my local PC timing out on the web GUI.
-
Hello. I've been trying to download a rather large amount of data via sabnzbd; the dockers freeze up, and then I'm unable to even stop the array and reboot the machine without hard-powering it off and back on. I tried rebuilding the docker.img and that didn't seem to help. At this point I'm at a loss whether it is sabnzbd or something else horribly wrong here. I will say sabnzbd fills up the cache (to within say 50 GB) and then refuses to continue writing to the array, even though the share is set to "Yes: cache". Any help will be appreciated. TIA argos-diagnostics-20210407-1501.zip
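One quick way to confirm the "cache fills and everything freezes" theory is to watch how full the pool is and kick Unraid's mover by hand when it gets close. A minimal sketch, assuming the standard Unraid paths /mnt/cache and /usr/local/sbin/mover (the `check_fill` helper name and the 90% threshold are my own illustrative choices):

```shell
# check_fill MOUNTPOINT THRESHOLD_PCT: print the used percentage of the
# filesystem at MOUNTPOINT; exit 0 if it is below the threshold.
check_fill() {
    pct=$(df --output=pcent "$1" | tail -n 1 | tr -dc '0-9')
    echo "${pct:-0}"
    [ "${pct:-0}" -lt "$2" ]
}

# On an actual Unraid box one might run (paths are Unraid-specific):
#   check_fill /mnt/cache 90 || /usr/local/sbin/mover
```

If the mover doesn't flush the cache even when invoked manually, the problem is likely the share's cache setting or minimum-free-space configuration rather than sabnzbd itself.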
-
file/shares stop being accessible and dockers wont start
acbaldwi replied to acbaldwi's topic in General Support
No, not recently. I did have a single cache drive and added a second one to mirror it, in case I ever lost an SSD.
-
file/shares stop being accessible and dockers wont start
acbaldwi replied to acbaldwi's topic in General Support
I've got CrashPlan Pro running, and it was consuming a lot of space as it cached things, if I recall right.
-
file/shares stop being accessible and dockers wont start
acbaldwi replied to acbaldwi's topic in General Support
BTT (bump): anyone see anything in there? I've perused the logs and don't see anything major.
-
After usually a few days of running Unraid, my file shares typically become unavailable and dockers won't start. The only solution seems to be a reboot/restart of the pool. I was finally able to download the logs after it happened, at ~2:30 pm local time today. The server is a dual Xeon E5-2600 v2 with 112 GB of RAM; the array is 15 disks plus 2 cache drives. Hopefully someone can dissect why it hates me so much. argos-diagnostics-20200715-1351.zip
-
I got mine resolved. What happened was I had originally added the USB devices to the VM before I had added the new PCI card that contained them. To solve it: remove the new PCI info you placed into the boot script on the flash drive and reboot; then, in your VM, remove the old USB devices; add the PCI info back into the flash config and reboot; finally, add the PCI card to the VM. Works like greased buttah now.
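For anyone following along, the "PCI info in the boot script" referred to above is the vfio-pci binding appended to syslinux.cfg on the Unraid flash drive. A sketch of what that append line looks like, assuming a hypothetical vendor:device ID pair (check your card's actual IDs with `lspci -nn` before using this):

```
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=1912:0014 initrd=/bzroot
```

Removing the `vfio-pci.ids=...` token and rebooting is the "remove the PCI info" step; re-adding it and rebooting again re-binds the card to vfio so it can be passed through.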
-
Getting the error "internal error: unknown pci source type 'vendor'" when trying to pass through a USB PCI card to a VM. Here is the XML of the VM; attached is a picture of the GUI for the PCI card I'm trying to add.

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Ausprey</name>
  <uuid>7d8dbbd8-82b3-5995-34dd-272476139480</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='8'/>
    <vcpupin vcpu='1' cpuset='20'/>
    <vcpupin vcpu='2' cpuset='9'/>
    <vcpupin vcpu='3' cpuset='21'/>
    <vcpupin vcpu='4' cpuset='10'/>
    <vcpupin vcpu='5' cpuset='22'/>
    <vcpupin vcpu='6' cpuset='11'/>
    <vcpupin vcpu='7' cpuset='23'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/7d8dbbd8-82b3-5995-34dd-272476139480_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='4' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Ausprey/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows10.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.173-2.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:d1:c3:44'/>
      <source bridge='virbr0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x82' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x82' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc534'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>
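For context on the error message itself: in libvirt's domain XML, a hostdev of type='pci' may only use a bus address as its source, while vendor/product IDs are valid only inside a hostdev of type='usb'. The "unknown pci source type 'vendor'" error appears when a vendor element ends up in a PCI hostdev's source. A hedged illustration of the two valid shapes (addresses and IDs copied from the XML above):

```xml
<!-- PCI passthrough: the source must be an <address>, never <vendor>/<product> -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x82' slot='0x00' function='0x0'/>
  </source>
</hostdev>

<!-- USB passthrough: here (and only here) vendor/product IDs are valid -->
<hostdev mode='subsystem' type='usb' managed='no'>
  <source>
    <vendor id='0x046d'/>
    <product id='0xc534'/>
  </source>
</hostdev>
```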
-
Attached are the logs from my syslog server; sorry, they are quite big. SyslogCatchAll-2020-04-16.zip