Pixel5

Everything posted by Pixel5

  1. @mmz06 here's a copy of the entire log, hopefully you'll find something in there that could solve this. Log.txt
  2. Initially I was using another Plex container, but I switched over to linuxserver.io with the exact same result. The CPU I'm using is the i3-10100. Above these lines there is a big empty section, and the next thing above that is this:

        WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
        Reading package lists...
        Building dependency tree...
        Reading state information...
        The following packages will be REMOVED:
          binfmt-support binutils binutils-common binutils-x86-64-linux-gnu cmake-data cpp cpp-9 dpkg-dev fakeroot g++ g++-9 gcc gcc-9 gcc-9-base git-man less lib32gcc-s1 lib32stdc++6 libalgorithm-diff-perl libalgorithm-diff-xs-perl
          libalgorithm-merge-perl libarchive13 libasan5 libatomic1 libbinutils libbsd-dev libc-dev-bin libc6-dev libc6-i386 libcbor0.6 libcc1-0 libclang-common-7-dev libclang1-7 libcrypt-dev libctf-nobfd0 libctf0 libcurl3-gnutls libdpkg-perl
          libdrm-amdgpu1 libdrm-nouveau2 libdrm-radeon1 libedit2 libegl-dev libegl-mesa0 libegl1 libelf1 liberror-perl libfakeroot libffi-dev libfido2-1 libfile-fcntllock-perl libgbm1 libgc1c2 libgcc-9-dev libgdbm-compat4 libgdbm6
          libgl-dev libgl1 libgl1-mesa-dri libglapi-mesa libgles-dev libgles1 libgles2 libglib2.0-0 libglib2.0-data libglvnd-dev libglvnd0 libglx-dev libglx-mesa0 libglx0 libgomp1 libicu66 libisl22 libitm1 libjsoncpp1 libllvm10 libllvm7
          liblocale-gettext-perl liblsan0 libmpc3 libmpfr6 libncurses-dev libobjc-9-dev libobjc4 libomp-7-dev libomp5-7 libopengl-dev libopengl0 libperl5.30 libpipeline1 libpthread-stubs0-dev libquadmath0 librhash0 libsensors-config libsensors5
          libstdc++-9-dev libtsan0 libubsan1 libuv1 libvulkan1 libwayland-client0 libwayland-server0 libx11-dev libx11-xcb1 libxau-dev libxcb-dri2-0 libxcb-dri3-0 libxcb-glx0 libxcb-present0 libxcb-randr0 libxcb-sync1 libxcb-xfixes0
          libxcb1-dev libxdamage1 libxdmcp-dev libxml2 libxmuu1 libxshmfence1 libxxf86vm1 linux-libc-dev llvm-7 llvm-7-runtime make manpages manpages-dev mesa-vulkan-drivers netbase opencl-c-headers openssh-client patch perl perl-modules-5.30
          shared-mime-info x11proto-core-dev x11proto-dev x11proto-xext-dev xauth xdg-user-dirs xorg-sgml-doctools xtrans-dev
        0 upgraded, 0 newly installed, 140 to remove and 7 not upgraded.
        After this operation, 835 MB disk space will be freed.
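     For anyone following along, this is roughly the kind of check I mean for whether the i3-10100's iGPU is actually reaching the container at all. It's only a sketch: the container name "plex" is just an example and would need to match whatever the Docker template actually calls it.

        # confirm the host exposes the iGPU render node(s)
        ls -l /dev/dri

        # confirm the Plex container has /dev/dri mapped in (container name "plex" assumed)
        docker inspect --format '{{json .HostConfig.Devices}}' plex

        # confirm the device is visible from inside the container
        docker exec plex ls -l /dev/dri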
  3. Same issue, same CPU. Hopefully this gets fixed in the foreseeable future.
  4. For me it finishes like this, or maybe it's not finished yet? It's been stuck in this state for 10 minutes now.

        Plex-Media-Server
        Device open failed, aborting...
        beignet-opencl-icd: no supported GPU found, this is probably the wrong opencl-icd package for this hardware (If you have multiple ICDs installed and OpenCL works, you can ignore this message)
        beignet-opencl-icd: no supported GPU found, this is probably the wrong opencl-icd package for this hardware (If you have multiple ICDs installed and OpenCL works, you can ignore this message)
        Device open failed, aborting...
        Device open failed, aborting...
        Device open failed, aborting...
        beignet-opencl-icd: no supported GPU found, this is probably the wrong opencl-icd package for this hardware (If you have multiple ICDs installed and OpenCL works, you can ignore this message)
        Device open failed, aborting...
        cl_get_gt_device(): error, unknown device: ffffffff
        cl_get_gt_device(): error, unknown device: ffffffff
        cl_get_gt_device(): error, unknown device: ffffffff
        Platform #0: Intel Gen OCL Driver
        Platform #1: Intel Gen OCL Driver
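     In case it helps with debugging the same errors, here is roughly what I'd run on the host to see which OpenCL platform and kernel driver are actually being picked up. Sketch only: clinfo is not part of stock Unraid and would need to be installed first.

        # list the OpenCL platforms/devices the ICD loader can see
        clinfo | grep -E 'Platform Name|Device Name|Number of devices'

        # confirm the kernel i915 driver created a render node for the iGPU
        ls -l /dev/dri
        dmesg | grep -i i915 | tail -n 20

     From what I've read, beignet only covers older Intel iGPU generations, so the "unknown device: ffffffff" lines may simply mean the UHD 630 in these 10th-gen chips needs Intel's newer NEO compute runtime instead, but I'm not certain that's the cause here.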
  5. That's exactly why I'm wondering what is going on. I just transferred 250GB to the cache drive at 200MB/s without any issue, but again, the moment I do the same via 10G LAN it slows down to below 50MB/s, which makes no sense at all.
  6. But both SSDs were fine in the system they were in before, and both only have about ~20TB written to them. Also, I've now installed another 8TB drive which currently holds most of my data, and I'm using Krusader to copy those files to my main array. The share I'm currently copying to has cache enabled, so right now everything is going onto my cache drive without throttling. So far the slow transfer speed only shows up when I transfer via an SMB share.
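     To separate the network from SMB entirely, this is the kind of raw throughput test I have in mind, assuming iperf3 is installed on both the Unraid box and the Windows PC (it isn't there by default on either side):

        # on the Unraid server: start the listener
        iperf3 -s

        # on the PC: push data for 30 seconds (replace <server-ip> with the NAS address)
        iperf3 -c <server-ip> -t 30
        # and test the reverse direction as well
        iperf3 -c <server-ip> -t 30 -R

     If that holds a steady ~9.4 Gbit/s in both directions, the NIC and cabling are probably fine and the slowdown sits somewhere in the SMB or disk path.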
  7. Right, I forgot that it's only a read test. I just received the cable I needed to install all my HDDs, so now I have one of the SSDs assigned as a cache drive and one still mounted as an unassigned device. When I copy to a share that has cache disabled, the transfer is actually faster than going to the cache drive. I'll try passing an SSD through to a Windows VM and see what kind of results I get with the tests I can run on Windows.
  8. I've now also completed a file transfer with cache disabled, and it went as fast as the HDD in the NAS would allow. So the question remains why this only happens with cache enabled, even though when I check the cache drive everything looks fine and it doesn't show any issues until it thermal throttles at 80°C.
  9. I just tested all drives with the DiskSpeed docker, and while the SSDs do thermal throttle after a while as expected, they never drop below 200MB/s during the test, so I'd say it's unlikely to be a problem with the device itself. I'm also currently transferring some files to my computer so I can copy something again with cache disabled.
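     For completeness, a local write test straight onto the cache device would take SMB and the network out of the equation entirely. Something like the sketch below; fio is not part of stock Unraid and the path is just an example.

        # 10 GB sequential write directly to the cache drive, bypassing SMB and the network
        fio --name=cache-seq-write --filename=/mnt/cache/fio-test.bin \
            --rw=write --bs=1M --size=10G --ioengine=libaio --direct=1 \
            --numjobs=1 --group_reporting

        # clean up the test file afterwards
        rm /mnt/cache/fio-test.bin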
  10. If I transfer to the unassigned device it's slightly faster, but there is still a huge drop in transfer speed without the device being hot enough to thermal throttle.
  11. Hello people, I'm still waiting for some cables, so as of now I'm only using my Unraid system with two NVMe SSDs installed. I'm using one as a volume without parity and one mounted as an unassigned device. Now I'm playing around a bit to see what kind of transfer speeds I can get and how the system performs in general with Unraid, and I'm running into some issues with transfer speeds and the general responsiveness of the system.

      I'm transferring some big ISO files from an NVMe SSD in my PC via 10G Ethernet to the NVMe SSD in the NAS. The transfer starts out at a solid 900-1000MB/s, which lasts for a few seconds before it rapidly drops to below 50MB/s and finally usually under 25MB/s. At the same time the Unraid GUI becomes very unresponsive and doing anything else becomes basically impossible, regardless of which of the SSDs I transfer my files to. Checking the CPU usage, I can see sudden spikes to 100% on individual CPU cores while the others are basically idle. In the reverse direction I usually get a solid 900-1000MB/s without any major slowdowns.

      The SSD temperature in the NAS never exceeds 65°C, and I checked some reviews which said this WD Black model doesn't throttle until it hits 80°C, and even when it does it still stays well over 100MB/s. I also tested a slower transfer where I copy from my PC's HDD to the SSD in the NAS, which is capped at 170MB/s by my HDD, but even that one slows down to under 25MB/s.

      Does anyone have any idea what exactly is going on here? thevault-diagnostics-20201209-0908.zip
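     One thing I still want to try, sketched below with example paths only ("isos" is just a placeholder share name): writing the same amount of data locally through the user share path and directly to the cache pool, to see whether the slowdown shows up without SMB involved at all or only through the /mnt/user layer.

        # ~8 GB written through the user share path (goes through Unraid's shfs layer)
        dd if=/dev/zero of=/mnt/user/isos/dd-test-user.bin bs=1M count=8192 conv=fsync status=progress

        # the same write directly on the cache pool, bypassing the user share layer
        dd if=/dev/zero of=/mnt/cache/dd-test-cache.bin bs=1M count=8192 conv=fsync status=progress

        # remove the test files afterwards
        rm /mnt/user/isos/dd-test-user.bin /mnt/cache/dd-test-cache.bin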
  12. So first of all, thanks for the help, guys. I also just found a way to tell which cable is which: there is a subtle difference in the number of contacts on the SAS side of the cable. The forward cable only has very few connections, while the reverse one has a lot of contacts so it can carry each independent SATA connection. The difference can be seen on these two cables here; I wish I had known that earlier. https://www.amazon.de/gp/product/B00FOR5M8O/ref=ppx_yo_dt_b_asin_title_o03_s02?ie=UTF8&psc=1 https://www.amazon.de/gp/product/B01F376530/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1
  13. Well, now that I know both versions exist, I finally found the correct one, and they do indeed look exactly the same. Let's see if this works once it arrives.
  14. So how do I know which one I need when they all look the same and don't even specify what they are exactly?
  15. I just completed this build, so I have never used this cable before; the package just says SFF-8643 to 4x SATA. It's a completely passive cable, so I'd be very surprised if it had any kind of direction it's supposed to be used in. Also, I couldn't find a single cable like this online that says anything about being directional.
  16. I just took basically everything apart and connected a drive via a standard SATA cable, and it was detected in the BIOS. So it's most likely the one cable going from the backplane to the SATA ports, or both backplanes are broken, but that seems very unlikely.
  17. Ah, just found something on Google. No, these are two IronWolf Pro 8TB drives and one WD Red 3TB drive.
  18. Just moved the NAS to hook up a monitor, and there is actually nothing being detected in the BIOS. That means either my brand new cable is broken or all of the motherboard's SATA ports are broken. I tried both of the drive cages to make sure it's not the backplane that is broken, and the drives aren't detected in either cage, so I suspect it's the cable. Should have ordered two cables just in case...
  19. Yeah, I know; that's why I'm not using the one SATA port that gets disabled. Four of the five other ports have a SATA cable connected and three HDDs are installed. I already double-checked my wiring, since I'm going from 4x SATA to one SFF-8643 backplane, but it's all good and the drives are also powered up.
  20. Here's the diagnostics file: thevault-diagnostics-20201207-1425.zip
  21. Hello people, I just finished building my NAS and installing some of my HDDs, and I'm still on the trial license for now. For some reason none of my HDDs are being detected at all, and there is no indication as to why. The system detects both of my NVMe cache drives, but nothing that is connected to the SATA ports.
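     In the meantime, this is roughly what I'm checking from the Unraid console to see whether the kernel sees the SATA drives at all, as opposed to Unraid just not listing them (a sketch, nothing specific to my diagnostics):

        # list every block device the kernel knows about, with transport type
        lsblk -o NAME,SIZE,MODEL,TRAN

        # look for SATA link / drive detection messages from boot
        dmesg | grep -iE 'ata[0-9]|sata link' | tail -n 40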
  22. How about an SSD cache, is there any good setup I could use to improve performance with that? And generally, will this hardware config be fine for Unraid, or are there any limitations I should expect?
  23. Hello people, I'm currently building my own NAS to replace my previous Synology NAS. I'm doing that both to upgrade to better hardware so I can run a VM or two (basically impossible on the Synology) and to upgrade to 10G Ethernet, where my Synology wasn't even able to deliver a steady 1 gigabit at times. You can see the hardware I'm going with below, and I'm wondering whether I'm better served with Unraid or TrueNAS. My research seems to indicate that Unraid would be much better for the VMs and Docker containers I want to run, but could have much slower transfer speeds overall.

      Now, I know I won't get a solid 10G transfer speed with my setup unless I run all HDDs in RAID0 or transfer exclusively from the SSDs, but I just want to get a feeling for what I can expect. These HDDs deliver about 220-250MB/s for sequential reads without a problem, and a bit less for writes; is it realistic to get that performance in the real world, or will it be significantly slower? Also, I'm planning to use one of the SSDs as a cache and the other to run the VM and Docker containers, but I could also add more SSDs later if that would help the performance of the array. Any recommendations as to what I should try out in terms of settings, or what kind of speeds I can expect?

      PCPartPicker Part List
      CPU: Intel Core i3-10100 3.6 GHz Quad-Core Processor (€107.90 @ Amazon Deutschland)
      Motherboard: Gigabyte H470M DS3H Micro ATX LGA1200 Motherboard (€98.99 @ Computeruniverse)
      Memory: Kingston 16 GB (2 x 8 GB) DDR4-2666 CL13 Memory (€66.90 @ Amazon Deutschland)
      Storage: Western Digital Black NVMe 250 GB M.2-2280 NVME Solid State Drive (Purchased For €0.00)
      Storage: Western Digital Black NVMe 250 GB M.2-2280 NVME Solid State Drive (Purchased For €0.00)
      Storage: Western Digital Red Pro 3 TB 3.5" 7200RPM Internal Hard Drive (Purchased For €0.00)
      Storage: Seagate IronWolf NAS 8 TB 3.5" 7200RPM Internal Hard Drive (Purchased For €0.00)
      Storage: Seagate IronWolf NAS 8 TB 3.5" 7200RPM Internal Hard Drive (Purchased For €0.00)
      Storage: Seagate IronWolf NAS 8 TB 3.5" 7200RPM Internal Hard Drive (Purchased For €0.00)
      Case: Silverstone CS381B MicroATX Desktop Case (€291.00)
      Power Supply: Silverstone SFX 450 W 80+ Gold Certified Fully Modular SFX Power Supply (€86.69 @ Alternate)
      Wired Network Adapter: Asus XG-C100C PCIe x4 10 Gbit/s Network Adapter (€73.90 @ Amazon Deutschland)
      Total: €725.38
      Prices include shipping, taxes, and discounts when available
      Generated by PCPartPicker 2020-11-30 11:59 CET+0100