Posts posted by Madman2012
-
Please help. I am transferring files from an NVMe drive on a Windows 11 machine to my Unraid server and I am only getting 100 KB/s transfer speeds. Both machines are on a 1 Gb/s Ethernet connection. I don't know why this is so slow. Can someone please help?
The server has an SSD cache drive as well, so I don't get it at all.
TIA
-
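To rule out the network itself before digging into SMB, a raw throughput test between the two machines is a quick first step. A sketch, assuming iperf3 is available on both ends (the server IP is illustrative -- use your own):

```shell
# On the Unraid server (e.g. via the web terminal): start a listener.
iperf3 -s

# On the Windows 11 machine: send traffic to the server for 10 seconds.
iperf3 -c 10.0.1.5

# Roughly 940 Mbit/s means the wire is fine and the bottleneck is
# SMB or disk; if iperf3 is also slow, look at cabling, NIC duplex
# settings, or the switch port instead.
```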
I am getting the same error. Did you ever resolve this? If so, what was the fix? I am going to set up a syslog server.
-
@juan11perez Thank you!! How do we get this into the CA store? It would be great for more people to be able to tinker with it.
-
-
Looks like I got it installed with your excellent directions via the terminal, but it appears as an orphaned image in the Docker tab. Can you explain how I might resolve this? Thanks in advance!
-
Is there a way to partially restore appdata, i.e. select only the apps I want to restore?
I ask because I ran into trouble with the cache filling up, which crashed all the Dockers. When that happened, I reconfigured the cache and share config, and now the cache drives are separate pools split by I/O function (VMs, Docker, etc.) instead of one giant 4 TB BTRFS pool, per SpaceInvader One's latest 6.9 configuration video. The issue is that the appdata restore file now saturates the Docker cache pool, since that pool shrank from one 4 TB pool to three separate drives: 2x 1 TB NVMe and 1x 2 TB SSD.
Help!
Past my depth right now.
Thank you in advance!
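For what it's worth, appdata backups are generally plain tar archives, so a selective restore can be done from the command line by naming only the folders you want. A minimal, self-contained demo (all paths and app names here are made up -- point the real commands at your actual backup file):

```shell
#!/bin/sh
# Demo: restore only selected app folders from a tar backup.
set -e
work=$(mktemp -d)

# Fake a backup containing three apps' appdata.
mkdir -p "$work/appdata/plex" "$work/appdata/sonarr" "$work/appdata/frigate"
echo prefs > "$work/appdata/plex/prefs.xml"
tar -czf "$work/backup.tar.gz" -C "$work" appdata

# Selective restore: pass only the member paths you want to extract.
mkdir -p "$work/restore"
tar -xzf "$work/backup.tar.gz" -C "$work/restore" appdata/plex appdata/sonarr

ls "$work/restore/appdata"   # frigate is not restored
```

Listing the archive first with `tar -tzf backup.tar.gz` shows the exact member paths to pass.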
-
I got the linuxserver container working with hardware transcoding. I unticked and re-ticked the hardware transcoding options in the container and force-updated the container, and it seems to be working now! Thank you all!!
Sent from my iPhone using Tapatalk
-
-
Just tried changing the name of the image from linuxserver to binhex; it pulls and installs without complaint, but then it will not load the webui. I suspect ports need remapping and directories need changing to suit the binhex architecture. I think you had more luck because you were going from one binhex container to another, not from linuxserver to binhex. I can investigate further.
-
Sure, I will test it out and report back.
I am trying to find an elegant way to use the binhex container without having to rebuild the entire media database. If anyone has any suggestions for how to do that easily, I would appreciate it.
For instance, is it possible to make the binhex container's appdata and media mappings the same as the existing linuxserver container's? Obviously I wouldn't change the container-side mappings, just the host's.
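In principle that is just a matter of pointing the new container's host-side paths at the old locations. A sketch, with the caveat that the image name, container-side paths, and host paths here are assumptions to be checked against the actual template:

```shell
# Reuse the existing appdata and media host paths so the new container
# sees the same library (paths are illustrative -- verify the
# container-side paths in the binhex template before reusing them).
docker run -d --name plex-binhex \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/Media:/media \
  --net=host \
  binhex/arch-plexpass
```

Whether the existing database survives also depends on the container-side paths matching what the Plex database recorded, so that is worth verifying before pointing the new container at a live library.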
-
Yes, it's linuxserver's Plex and I have Plex Pass. Latest version.
Sent from my iPhone using Tapatalk
-
I am unable to get the linuxserver version to hardware transcode with beta 35 installed and all the right configurations in the Docker template. nvidia-smi can be seen in the console, but the application is not using the GPU to decode files. Anyone else having this issue?
-
Yes, I downloaded the right versions of the files after registering on NVIDIA's site, changed the variables in the script to the correct names, and placed the files in the correct directory.
Sent from my iPhone using Tapatalk
-
On 11/3/2020 at 4:27 PM, Jaburges said:
Did you use the right versions of the files - it looks like CUDA 11 vs 10.2.
Check the top of opencv.sh and note the file names:
CUDNN_RUN=libcudnn7_7.6.5.32-1+cuda10.2_amd64.deb
CUDNN_DEV=libcudnn7-dev_7.6.5.32-1+cuda10.2_amd64.deb
CUDA_TOOL=cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
CUDA_PIN=cuda-ubuntu1804.pin
CUDA_KEY=/var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub
CUDA_VER=10.2
Are your file names the same? Use the archived files list on the nvidia site to locate them - i'm assuming i'm not allowed to copy them here due to licensing etc etc.
If you downloaded newer versions, did you change the filenames? (I have no idea what is / isn't supported - all I know is it's a PITA if you mismatch the GPU driver, CUDA, and cuDNN stuff.)

I had to use CUDA 11 since I am on Unraid beta 30 and that is what nvidia-smi reported (below); please let me know if it is not supported. It looks like I have the correct versions of the files in the script variables. I can get CUDA enabled, but cuDNN does not come on like before.
root@d39bb87ce369:/# nvidia-smi
Sat Nov 7 10:03:47 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro P2200 Off | 00000000:04:00.0 Off | N/A |
| 55% 50C P0 22W / 75W | 978MiB / 5059MiB | 1% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
-
I had the same result as you before I tried to compile myself and ended up where I was in my earlier post.
Sent from my iPhone using Tapatalk -
Hi @dlandon, would you kindly help me debug my OpenCV Zoneminder install? I have attached the zip file as instructed in the logs. I cannot get cuDNN to show as enabled in the cmake output. I have been working on this for a solid day, am relatively new to ZM, and would greatly appreciate any help getting GPU support enabled. Thank you in advance!
-- GTK+: NO
-- VTK support: NO
--
-- Media I/O:
-- ZLib: build (ver 1.2.11)
-- JPEG: libjpeg-turbo (ver 2.0.2-62)
-- WEBP: build (ver encoder: 0x020e)
-- PNG: build (ver 1.6.37)
-- TIFF: build (ver 42 - 4.0.10)
-- JPEG 2000: build (ver 1.900.1)
-- OpenEXR: build (ver 2.3.0)
-- HDR: YES
-- SUNRASTER: YES
-- PXM: YES
-- PFM: YES
--
-- Video I/O:
-- DC1394: NO
-- FFMPEG: NO
-- avcodec: NO
-- avformat: NO
-- avutil: NO
-- swscale: NO
-- avresample: NO
-- GStreamer: NO
-- v4l/v4l2: YES (linux/videodev2.h)
--
-- Parallel framework: pthreads
--
-- Trace: YES (with Intel ITT)
--
-- Other third-party libraries:
-- Intel IPP: 2019.0.0 Gold [2019.0.0]
-- at: /root/opencv/build/3rdparty/ippicv/ippicv_lnx/icv
-- Intel IPP IW: sources (2019.0.0)
-- at: /root/opencv/build/3rdparty/ippicv/ippicv_lnx/iw
-- Lapack: NO
-- Eigen: NO
-- Custom HAL: NO
-- Protobuf: build (3.5.1)
--
-- NVIDIA CUDA: YES (ver 11.0, CUFFT CUBLAS FAST_MATH)
-- NVIDIA GPU arch: 30 35 37 50 52 60 61 70 75
-- NVIDIA PTX archs:
--
-- cuDNN: NO
--
-- OpenCL: YES (no extra features)
-- Include path: /root/opencv/3rdparty/include/opencl/1.2
-- Link libraries: Dynamic load
--
-- Python (for build): /usr/bin/python3
--
-- Java:
-- ant: NO
-- JNI: NO
-- Java wrappers: NO
-- Java tests: NO
--
-- Install to: /usr/local
-- -----------------------------------------------------------------
--
-- Configuring incomplete, errors occurred!
See also "/root/opencv/build/CMakeFiles/CMakeOutput.log".
See also "/root/opencv/build/CMakeFiles/CMakeError.log".
-
Trying to download the latest drivers after the 6.9.0 beta 30 update and the download is frozen at 33%. Any suggestions on how to proceed?
EDIT: Eventually it downloaded and worked.
-
On 6/6/2020 at 9:32 AM, Madman2012 said:
Has anyone passed the GPU through to Shinobi on Unraid docker? I have tried unsuccessfully but have it working with Plex.
Anyone out there successful?
-
Has anyone passed the GPU through to Shinobi on Unraid docker? I have tried unsuccessfully but have it working with Plex.
-
-
OK, so I tried to edit the XML file to mount the drives. It appeared to work for the first drive, but after I added the others I now get a different error when trying to start the VM. Below is the log. I would kindly appreciate the help, as I am trying to recover the data from this device and use Unraid going forward.
-device ide-hd,bus=ide.3,drive=libvirt-7-format,id=sata0-0-3,write-cache=on \
-blockdev '{"driver":"host_device","filename":"/dev/disk/by-id/ata-ST6000VN0021-1ZA17Z_S4D0LSW7","node-name":"libvirt-6-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-6-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-6-storage"}' \
-device ide-hd,bus=ide.4,drive=libvirt-6-format,id=sata0-0-4,write-cache=on \
-blockdev '{"driver":"host_device","filename":"/dev/disk/by-id/ata-ST6000VN0021-1ZA17Z_S4D0PBKR","node-name":"libvirt-5-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-5-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-5-storage"}' \
-device ide-hd,bus=ide.5,drive=libvirt-5-format,id=sata0-0-5,write-cache=on \
-blockdev '{"driver":"host_device","filename":"/dev/disk/by-id/ata-ST6000VN0021-1ZA17Z_Z4D1ECBT","node-name":"libvirt-4-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-4-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-4-storage"}' \
-device ide-hd,bus=ide.6,drive=libvirt-4-format,id=sata0-0-6,write-cache=on \
-blockdev '{"driver":"host_device","filename":"/dev/disk/by-id/ata-ST4000DM000-1F2168_Z301XY3E","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
-device ide-hd,bus=ide.7,drive=libvirt-3-format,id=sata0-0-7,write-cache=on \
-blockdev '{"driver":"host_device","filename":"/dev/disk/by-id/ata-ST4000DM000-1F2168_Z300X4MA","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
-device ide-hd,bus=ide.8,drive=libvirt-2-format,id=sata0-0-8,write-cache=on \
-blockdev '{"driver":"host_device","filename":"/dev/disk/by-id/ata-ST4000DM000-1F2168_Z300X9RS","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-hd,bus=ide.9,drive=libvirt-1-format,id=sata0-0-9,write-cache=on \
-fsdev local,security_model=passthrough,id=fsdev-fs0,path=/mnt/user/Media/ \
-device virtio-9p-pci,id=fs0,fsdev=fsdev-fs0,mount_tag=Media,bus=pci.1,addr=0x0 \
-netdev tap,fd=36,id=hostnet0,vhost=on,vhostfd=37 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:92:93:93,bus=pci.3,addr=0x0 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=38,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device usb-tablet,id=input0,bus=usb.0,port=2 \
-vnc 0.0.0.0:0,websocket=5700 \
-k en-us \
-device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 \
-device usb-host,hostbus=1,hostaddr=3,id=hostdev0,bus=usb.0,port=1 \
-device virtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2020-05-01 19:48:13.804+0000: Domain id=28 is tainted: high-privileges
2020-05-01 19:48:13.804+0000: Domain id=28 is tainted: host-cpu
char device redirected to /dev/pts/1 (label charserial0)
2020-05-01T19:48:13.937495Z qemu-system-x86_64: -device ide-hd,bus=ide.6,drive=libvirt-4-format,id=sata0-0-6,write-cache=on: Bus 'ide.6' not found

Here is the XML:
Version: 6.8.3
<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Ubuntu</name>
  <uuid>bc82ea8c-20a7-50b0-4432-06c988b1ce06</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/>
  </metadata>
  <memory unit='KiB'>7864320</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='16'/>
    <vcpupin vcpu='2' cpuset='1'/>
    <vcpupin vcpu='3' cpuset='17'/>
    <vcpupin vcpu='4' cpuset='2'/>
    <vcpupin vcpu='5' cpuset='18'/>
    <vcpupin vcpu='6' cpuset='3'/>
    <vcpupin vcpu='7' cpuset='19'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/bc82ea8c-20a7-50b0-4432-06c988b1ce06_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='4' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
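A hunch about the `Bus 'ide.6' not found` error: the built-in SATA controller on a Q35 machine exposes only six ports (ide.0 through ide.5), so disks addressed to `ide.6` and beyond have no bus to attach to. One possible way around this (a sketch, not tested against this exact VM) is to declare a second SATA controller and address the overflow disks to it:

```xml
<!-- Extra SATA controller for disks beyond the first six
     (index 0 is the built-in controller). -->
<controller type='sata' index='1'/>

<!-- Address an overflow disk to the new controller via controller='1'. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-id/ata-ST4000DM000-1F2168_Z301XY3E'/>
  <target dev='sdh' bus='sata'/>
  <address type='drive' controller='1' bus='0' target='0' unit='0'/>
</disk>
```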
-
Same issue here trying to pass a drive through, but it will not allow the changes to persist.
-
Thank you for this! I got the VM up and running, finally, but cannot pass the drives yet...
Now the issue I am having is persisting the settings to pass the physical drives through to the VM. When I follow your instructions above, I hit the Update button and it just sits there with "Updating..." but never provides a confirmation, and if I refresh, the settings didn't persist. I deleted the VM and started over, and I am having the same issue with the settings not being confirmed. I have even restarted the server and tried different browsers. Not sure what is going on ATM. Screenshot below.
Any advice? I sincerely appreciate the community's help on this project. I am starting to get a feel for the power of Unraid, but wish I could get this array mounted and transferred.
-
Noob in need of help.
I am new to the community and have just completed my first server build after our Synology 916+ died during COVID-19. I decided to switch away from Synology products for many reasons, but I am not a Linux expert. So now the rubber meets the road.
I was able to configure and install Unraid with the Space Invader One guides and finally got a small volume up and running (no cache or parity yet).
Now I need to mount the Synology SHR volume and begin transferring its content to the new Unraid volume. I have 14 drives mounted in a Supermicro 846 chassis; 7 are part of the SHR volume and the remainder are unassigned. Currently, the Synology SHR drives are listed as "unassigned devices" and I have made them read-only to keep things safe. The SHR volume mostly contains movies, so I am not too fussed if something goes wrong during transfer, but I would like to do it as safely and quickly as possible.
I planned to mount the Synology drives inside Ubuntu, but do it in a VM as detailed in Synology's recovery instructions (https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/How_can_I_recover_data_from_my_DiskStation_using_a_PC), then expand the Unraid volume and rsync the important shares back to it.
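The Synology recovery procedure linked above essentially amounts to assembling the SHR md arrays and activating LVM inside the Ubuntu VM. A sketch of the usual steps, with the caveat that the volume group and logical volume names (`vg1000/lv`) are typical for SHR but may differ on a given unit, and the destination path is a placeholder:

```shell
# Inside the Ubuntu VM, with the SHR member disks passed through:
sudo apt-get install -y mdadm lvm2

# Assemble all detectable md arrays, then activate any LVM volume groups.
sudo mdadm -Asf && sudo vgchange -ay

# Mount the data volume read-only and copy off with rsync.
sudo mkdir -p /mnt/shr
sudo mount -o ro /dev/vg1000/lv /mnt/shr
rsync -a /mnt/shr/Media/ /path/to/unraid/share/Media/
```

Mounting read-only keeps the source drives untouched if anything goes wrong mid-transfer.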
Is this possible? Does the community have a better method?
As mentioned in the title, the Synology 916+ is inoperable, so I don't have access to that device anymore, and the standalone desktop PC we have running Windows 10 does not have the SATA ports or chassis to support the drives already physically mounted in the Supermicro 4U chassis.
Help please! So much appreciated!!
Stay safe, everyone!
Supermicro X9DR3-F Version 0123456789
BIOS: American Megatrends Inc. Version 3.3. Dated: 07/12/2018
CPU: Intel® Xeon® CPU E5-2650 v2 @ 2.60GHz
HVM: Enabled
IOMMU: Enabled
Cache: 512 KiB, 2048 KiB, 20480 KiB, 512 KiB, 2048 KiB, 20480 KiB
Memory: 64 GiB DDR3 Multi-bit ECC (max. installable capacity 192 GiB)
Network: bond0: fault-tolerance (active-backup), mtu 1500
eth0: interface down
eth1: 1000 Mbps, full duplex, mtu 1500
eth2: interface down
Kernel: Linux 4.19.107-Unraid x86_64
OpenSSL: 1.1.1d
Uptime: 0 days, 08:18:13
Super slow transfer speeds via SMB3 connection.
in General Support
Posted
Thank you. Frigate was the offending container. I have turned it off (bad config plus auto-restart enabled) and will wait a few days and re-upload the syslog files. I have had the slow transfer issue with this server for a very long time and haven't been able to pin down the cause. Internal transfers (disk to disk, or to cache) seem fine. Downloads over the internet and whatnot all seem fine too. It's only when I am transferring files from a networked PC via a wired LAN connection. I have verified that Windows 11 is connecting with SMB3, so I am still scratching my head about what is causing the issue. Thank you for your help!