fr05ty

Everything posted by fr05ty

  1. I think, from memory, it used to make unRAID freeze up and need a hard reset.
  2. Did you have to disable the global C-state in the BIOS? I think I had to do that with my 1700. I'm still waiting for my M/B to turn up; it's a bit depressing just looking at a CPU sitting alone.
  3. @johnnie.black do you know if the DS4243 will benefit from 2 cables from the HBA to the IOM6 controllers in bays 1 and 3 of the disk shelf, or are they just for redundancy? My google-fu isn't working out well. I have a 9207-8e, and all the disks I am going to populate it with are SATA3 drives.
  4. Thanks for that reply and the link. I thought there might have been protocol overheads, but I didn't know how much, so I was just using the max theoretical bandwidth to work out a few numbers and judging from the speeds reported in the unRAID GUI when doing a scrub. I also wasn't sure how much of a difference there is between a 9211-style card and a 9207, as I have the latter in my system.
  5. @BLKMGK sorry if this seems a little long, but hopefully it will clear a few things up for you. The IBM M1015 and Perc H310 are basically a 9211-8i from what I understand, so for one card the quick math is 8 drives x 600 MB/s = 4800 MB/s max throughput; but as it is a PCIe 2.0 x8 card its max speed is 4096 MB/s, so 8 drives could do up to 512 MB/s each. If you take both outputs from one card into an expander and hang 16 drives off the expander, you would have a max speed of 256 MB/s per drive. If you are only connecting spinning rust to them, you may only just hit the limits of the card when doing something like a scrub that reads all drives at once, and of course some protocol overhead may drop the speed a little. If you had a 9207-8i or a similar PCIe 3.0 x8 card, the PCIe 3.0 x8 max throughput is 7.88 GB/s, so 8 x 600 = 4800 MB/s and 4800 / 16 = 300 MB/s per drive, which is still plenty for HDDs. If you only had one M1015 or H310 with one cable to the expander and one to 4 drives: 4096 / 2 = 2048 MB/s, 2048 / 20 drives = ~102 MB/s, and 2048 / 4 drives = 512 MB/s. (There is a small worked sketch of this math at the end of this post.)

When a scrub is started on my server it tops out around 160 MB/s per drive across all 15 disks, which is 2400 MB/s in total; I only have one cable to the expander, which is 2400 MB/s max. As the scrub continues through the 4TB drives the speed drops to match the drives, because the further you get through a disk the slower the reads get, and once those finish and only the 8TB drives are left it picks up again. I have a 9207-8i in my server as I didn't want to fuss around with trying to flash a card; these ones just work out of the box in JBOD/IT mode. I just started a scrub to take some screenshots, which are attached below; all my drives are connected/detected as SATA3 6Gb/s disks, for a bit more info.

---------------------------------------------------------------------------------

@mrbilky a 1950X is not that easy or cheap to come by in my area; they are selling for about $1000 kangaroo dollars brand new, and a 2950X is about $200 more. I always have a look online for second-hand ones but never see them for sale, and the M/Bs are about $450 and up, but I never worried too much about price on motherboards; you pay for features. If these new Zen 2 CPUs are as good as all these reviews say, they draw about a third (3700X, 65 W) to about half (3900X, 105 W) of the power of a 1950X's 180 W, and these new boards have all the features that I want. Even if it doesn't cut the mustard for unRAID just yet, I'm sure it won't take too long before it catches up for what we want (and if it never does, then I have a super new gaming PC), and the 6700K still goes to dad, who is rocking an old i3-530. Either way I win. https://www.cpubenchmark.net/compare/AMD-Ryzen-Threadripper-1950X-vs-AMD-Ryzen-7-3700X-vs-AMD-Ryzen-9-3900X/3058vs3485vs3493
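Coming back to the bandwidth numbers in the first half of this post, here is the same back-of-the-envelope math as a small Python sketch. The figures are the rough theoretical ones quoted above (4096 MB/s for PCIe 2.0 x8, 7880 MB/s for PCIe 3.0 x8, 600 MB/s per SAS2 lane), not measured speeds, and real-world throughput will be a bit lower once protocol overhead is counted:

```python
# Back-of-the-envelope per-drive bandwidth for an 8-port SAS2 HBA, optionally
# feeding a SAS expander. Figures match the rough numbers used in the post above.

def per_drive_mbps(pcie_limit_mbps: float, sas_lanes: int, drives: int) -> float:
    """Per-drive ceiling: the lower of the PCIe slot limit and the SAS link limit,
    shared equally across all drives behind that link."""
    sas_limit = sas_lanes * 600          # SAS2 = 6 Gb/s, roughly 600 MB/s per lane
    return min(pcie_limit_mbps, sas_limit) / drives

# 9211-8i class (PCIe 2.0 x8), both ports into an expander, 16 drives:
print(per_drive_mbps(4096, 8, 16))       # 256.0 MB/s per drive
# 9207-8i class (PCIe 3.0 x8), both ports into an expander, 16 drives:
print(per_drive_mbps(7880, 8, 16))       # 300.0 MB/s (the SAS side is the bottleneck)
# One 4-lane cable to the expander (20 drives) and one to 4 direct drives,
# splitting the PCIe 2.0 x8 budget in half between them:
print(per_drive_mbps(4096 / 2, 4, 20))   # ~102.4 MB/s per drive on the expander side
print(per_drive_mbps(4096 / 2, 4, 4))    # 512.0 MB/s per drive on the direct side
```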
  6. @chris_netsmart ok, just going from what Wendell says in the Level1Linux video (at about 5:50 in the video below): the IOMMU doesn't really separate things out on the older X370/X470 boards with the ASMedia chipset, whereas the X570 separates things out a lot more cleanly. Looking at the ASRock board you mentioned, I have just pulled this info off the ASRock site (I don't know anything else about this board):

AMD Ryzen series CPUs (Summit Ridge and Pinnacle Ridge)
- 2 x PCI Express 3.0 x16 Slots (PCIE2: x16 mode; PCIE4: x4 mode)*
- 4 x PCI Express 2.0 x1 Slots
*Supports NVMe SSD as boot disks
*If M2_1 is occupied, PCIE4 will be disabled.

So if you wanted an NVMe drive in the M.2_1 slot, you would only have one x16 PCIe 3.0 slot and the 4 x1 PCIe 2.0 slots left to use. On a single PCIe 2.0 lane, the 6 Gb/s SATA spec allows for up to 750 MB/s of bandwidth, but the PCIe 2.0 x1 interface limits read/write speed to 500 MB/s. So passing things through to a VM might be a little trickier on the x1 slots, and drive scrubbing could take a bit longer if you connect an HBA card to one of those slots. If you fill the M.2_2 slot you lose 2 of the 6 onboard SATA ports but retain the use of the PCIe 3.0 x4 slot, and that NVMe slot runs at half the speed of slot 1 if you get/have one of the 3000 MB/s SSDs.

- 1 x Ultra M.2 Socket (M2_1), M.2 PCI Express module up to Gen3 x4 32Gb/s
- 1 x M.2 Socket (M2_2), M.2 SATA3 6.0 Gb/s and M.2 PCI Express up to Gen3 x2 16 Gb/s
*M2_2, SATA3_3 and SATA3_4 share lanes. If either one of them is in use, the others will be disabled.
**If M2_1 is occupied, PCIE4 will be disabled.

I hope this info helps you decide what you want to do. On my ASUS Prime X370-Pro, IOMMU group 15 is:

IOMMU group 15:
[1022:43b9] 02:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] X370 Series Chipset USB 3.1 xHCI Controller (rev 02)
[1022:43b5] 02:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] X370 Series Chipset SATA Controller (rev 02)
[1022:43b0] 02:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] X370 Series Chipset PCIe Upstream Port (rev 02)
[1022:43b4] 03:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1022:43b4] 03:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1022:43b4] 03:06.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1022:43b4] 03:07.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1000:0072] 04:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
[1b21:1343] 05:00.0 USB controller: ASMedia Technology Inc. ASM1143 USB 3.1 Host Controller
[8086:1539] 06:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
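For anyone wanting to check how their own board splits things up, here is a minimal sketch (assuming a Linux host such as the unRAID box, with sysfs mounted at /sys) that prints every IOMMU group and its PCI devices in roughly the same [vendor:device] format as the listing above:

```python
# Walk sysfs and list every IOMMU group with the PCI devices it contains.
import os

base = "/sys/kernel/iommu_groups"
for group in sorted(os.listdir(base), key=int):
    print(f"IOMMU group {group}:")
    devs_dir = os.path.join(base, group, "devices")
    for dev in sorted(os.listdir(devs_dir)):
        with open(os.path.join(devs_dir, dev, "vendor")) as f:
            vendor = f.read().strip()[2:]   # drop the "0x" prefix
        with open(os.path.join(devs_dir, dev, "device")) as f:
            device = f.read().strip()[2:]
        print(f"  [{vendor}:{device}] {dev}")
```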
  7. I ended up going with a Gigabyte Aorus Master, as I could not find any retail shops online with the ASUS WS Ace in stock, and a 3700X (for now; I can upgrade my 1700 afterwards) whilst I wait to see how the 3950X performs in September. I live in B.F. nowhere, Australia, so I didn't expect a local store to have any, or at a decent price, so I found a reputable shop on eBay that doesn't price-jack when there is a sale on. There was a 10% off eBay sale this week: the M/B was $629 down to $566 (~USD 392) and the CPU $519 down to $467 (~USD 323); all AUD prices include the 10% GST in the final price. They should be here by next Monday-ish. Here is a Passmark comparison of what I'm going from and to: https://www.cpubenchmark.net/compare/AMD-Ryzen-7-3700X-vs-Intel-Xeon-E5-2660-v4-vs-AMD-Ryzen-9-3900X/3485vs2881vs3493 I might have to buy a basic unRAID licence for testing the setup before I jump in fully and dismantle my main server; fun times ahead. For testing I might put in a GTX 1080 (Plex) and a 680 (VM); they will eventually be replaced with my 2080 Ti (VMs) and P2000 (Plex), plus a spare LSI HBA with an old drive or two (to test out my DS4243 for what will be in the new build). If I don't have too many dramas, it runs smoothly, and Plex works as intended, I might only upgrade the CPU to the 3900X to give the VM a few more cores. Future plans may include a 2x NVMe upgrade for the cache (maybe the Silicon Power 512 GB) and OS VM installs (1 TB with maybe 2 partitions, mostly for Windows and a small 60 GB OSX).
  8. @BLKMGK have you looked into getting an Intel RES2SV240? It's a SAS expander: you can plug 1 or 2 8087 cables from, say, a 9211-8i into the RES2SV240 and have 16-20 drives in your system (20 if only using 1 cable); I use one of these in my current unRAID box. You could then have a max of 24 drives connected through an 8i HBA card. The RES2SV240 can be powered from a 4-pin Molex or from a spare PCIe slot; it only needs the slot for power, so if you need all your slots it can be mounted somewhere else in the case. I'm currently looking at doing the same ASUS board with either a 3900X or 3950X, making it my gaming rig (2080 Ti), Plex (P2000) and storage (LSI 9207-8e) box. I just want to see a few people who jump in first give feedback, to see if it's worth it.
  9. Nothing was being used at the time; my game PC was turned off and only my phone and lounge PC (OSX) were on. I then stopped the container and changed my router's primary DNS to the Pi-hole IP, and the unRAID scrub returned to normal speeds; if I started SCB again, the scrub speed dropped again. I used "docker exec -it SteamCacheBundle tail -f /data/logs/access.log" to check the logs and there was no activity, and the memory usage was sitting at about 80 MB, not 1.8 GB like when I had SCB set as my primary DNS.
  10. I have noticed a problem with my SteamCacheBundle. On the first of every month I start a scrub on unRAID, and today it was running at 40-60 MB/s per drive when it's usually around 145 MB/s. I only found it was this docker by shutting them all down one by one; with them all running and just SCB shut down, everything ran fine. I changed the docker tab to advanced view and noticed that the memory usage was also high, and it kept growing by about a MB every second it was open; it had been running for maybe half an hour and was up near 2 GB.
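In case it helps anyone chasing the same thing, here is a minimal sketch of watching that growth from a script instead of the docker tab; it assumes the Docker SDK for Python is installed (pip install docker) and uses the container name from this post:

```python
# Sample a container's memory usage every few seconds to see whether it keeps climbing.
import time
import docker

client = docker.from_env()
container = client.containers.get("SteamCacheBundle")   # container name from this post

for _ in range(12):                          # 12 samples, 5 seconds apart (~1 minute)
    stats = container.stats(stream=False)    # one-shot stats snapshot
    used_mib = stats["memory_stats"]["usage"] / (1024 * 1024)
    print(f"{time.strftime('%H:%M:%S')}  {used_mib:.1f} MiB")
    time.sleep(5)
```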
  11. I was after some info on whether, if I pulled the USB stick out of the current setup, put it into the Dell 9020 and used my drives in the MD1000, it would boot just fine. I took a pic of the drives just so I knew which 2 are my parity drives in case it lost track of which drives they were. I then wanted to pull my current system down and rebuild it in a different case, rack mounted with the DS4243 attached for future HDD upgrades; at the moment I cannot pre-clear a drive in the system if I need to. I have another unRAID build that I use as a home media player / lounge gaming box and pre-clear HDDs in that. I was just guessing that before I move the drives I would have to clear my pinned CPUs for docker/VMs, and I would disable the VM (it only has all my Steam library installed on it) so as not to use core resources on the i5. I am running the 6.7.1 rc1 Nvidia version of unRAID with a P2000 for Plex h/w transcode, which won't fit in the SFF case of the Dell, so I thought Intel Quick Sync Video might be able to handle that for the time being, as I have up to 4 people streaming from Plex of a night. I hope that clears it up a bit, thanks.
  12. TL;DR: I want to swap hardware (using my current HDDs) temporarily whilst I rebuild into a rack setup, then swap back. I recently purchased a DS4243 with 24 SAS 600 GB drives and an MD1000 with caddies (I love eBay), a 12U open adjustable rack and a 4U Rosewill case. I currently have an X99 Xeon 14c/28t system with 3x ICY BOX IB-565SSK, 13 data and 2 parity drives plus 2x SSD cache, a P2000, and an LSI 9207-8i in a CM Storm Trooper case, and an old OptiPlex 9020 SFF (i5-4570, 16 GB RAM) just sitting around. What I want to do is rebuild the X99 box in the 4U rack case with my parity drives and SSDs in there and the data drives in the DS4243; whilst I'm doing all this I want to temporarily use the 9020 and the MD1000 as a makeshift home for my Plex server (I have a 9207-8e to connect to the DAS), so friends and family can still use Plex.

Option 1: Would I be better off starting a trial unRAID while I do this update/rebuild? Could I just copy my docker appdata folders off the cache and onto another SSD for the cache, use my data drives, and forgo the parity drives for now?

Option 2: Use all my drives (I have a pic of what order they are in), plug my current unRAID USB into the Dell, unpin any cores I have for the VM and docker containers, and use Intel QSV instead of the P2000 for Plex and hope for the best.

Is there anything I may have missed? All help appreciated.
  13. I am using the plexinc docker and had the latest script. It ran for a week with no problem, then it crashed so hard in the middle of a stream that it caused Plex to fail and unRAID to not even list my P2000 on the Nvidia plugin page; even a power cycle didn't help. I had to upgrade to the latest unRAID build to get it back; I was on the 6.7 build and now have 6.7.1 rc1 installed. I'm running an X99 board with a 14c/28t 2.5 GHz Xeon and 1 VM that is VNC only, with no card attached to it.
  14. No amount of restarts, reboots or disabling things could bring back my P2000, so my last-ditch effort was to upgrade to 6.7.1 rc1. I rebooted, went to the unRAID Nvidia plugin page, and my P2000 was listed there again. I will just have to wait until Plex decides it wants to implement HW decoding. Thanks for the help and great work on these builds.
  15. @CHBMB I had been testing the NVDEC script, as all of my family are outside my home network and mostly transcode. The server was up and working (after the 6.7 update) for just over a week with no drama before going belly up whilst transcoding an episode; now my P2000 does not show up on the unRAID Nvidia plugin page. I have disabled the VMs from unRAID settings, removed the script, and rebooted once and also powered off once, but under IOMMU I can still see it listed. Do you think trying a 6.7.1 rc1 install might be worth a shot?
  16. Tonight my Plex app crashed and caused my unRAID server to need a reboot; upon doing so I had a problem getting Plex and Emby back up. I looked in the logs to see:

kernel: nvidia-uvm: Loaded the UVM driver in 8 mode, major device number 245
kernel: NVRM: RmInitAdapter failed! (0x31:0xffff:834)
kernel: NVRM: rm_init_adapter failed for device bearing minor number 0
iceberg kernel: NVRM: RmInitAdapter failed! (0x31:0xffff:834)
kernel: NVRM: rm_init_adapter failed for device bearing minor number 0
rc.docker: PlexMediaServer: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"process_linux.go:407: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/local/bin/nvidia-container-cli --load-kmods --debug=/var/log/nvidia-container-runtime-hook.log --ldcache=/etc/ld.so.cache configure --ldconfig=@/sbin/ldconfig --device=GPU-cc63faa0-8033-52ac-dad4-79279b371033 --compute --compat32 --graphics --utility --video --display --pid=31670 /var/lib/docker/btrfs/subvolumes/4787b541f516ce4d01faa8f10f4dfed05c53589f0b299ae78d883bc14cdc346d]\\\\nnvidia-container-cli: device error: unknown device id: GPU-cc63faa0-8033-52ac-dad4-79279b371033\\\\n\\\"\"": unknown
rc.docker: Error: failed to start containers: PlexMediaServer
kernel: NVRM: RmInitAdapter failed! (0x31:0xffff:834)
kernel: NVRM: rm_init_adapter failed for device bearing minor number 0
kernel: NVRM: RmInitAdapter failed! (0x31:0xffff:834)
kernel: NVRM: rm_init_adapter failed for device bearing minor number 0
rc.docker: EmbyServer: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"process_linux.go:407: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/local/bin/nvidia-container-cli --load-kmods --debug=/var/log/nvidia-container-runtime-hook.log --ldcache=/etc/ld.so.cache configure --ldconfig=@/sbin/ldconfig --device=GPU-cc63faa0-8033-52ac-dad4-79279b371033 --compute --compat32 --graphics --utility --video --display --pid=32044 /var/lib/docker/btrfs/subvolumes/6ea3c2c0c5428be0afad1d23b1f459f821776f1dcf923641c55aae2b4acdd312]\\\\nnvidia-container-cli: device error: unknown device id: GPU-cc63faa0-8033-52ac-dad4-79279b371033\\\\n\\\"\"": unknown
rc.docker: Error: failed to start containers: EmbyServer

When I went into my Nvidia plugin my P2000 did not show up, but it is listed in IOMMU group 28 under system devices. I'm on ver 6.7.
  17. I went into the Nvidia plugin, selected rc8, waited for it to finish installing, and rebooted, and this is where I was at. But I think I have solved it accidentally: I went into my network settings, tried to change the DNS server setting and killed the connection, swapped the ethernet plug from port 1 to port 2 on the M/B (which I had already tried and couldn't connect to the server IP), got into the GUI, tried the Nvidia plugin and then started the dockers, and everything is running again. I have changed the DNS back to what it was originally and now have both ports working again.
  18. Hey all, I just updated from rc5 to rc8 and now none of my docker containers can reach the outside world. I was going to roll back to see if rc7 worked, but the Nvidia plugin can't connect to the net; I can still access my box from my home network.
  19. Sorry if this has been asked; I have just been looking through all 27 pages. When unRAID has a new build available, do we now just update it through the Nvidia plugin and not the tools page, or do we have to select the stock unRAID of our current build and then the Nvidia build we want?
  20. I was having an issue with the Code 43. I tried the hypervisor on/off and it didn't make a difference, but I did read in a Reddit post to check whether the unRAID OS boot USB was booting in UEFI mode in the motherboard BIOS, and if so to change it to legacy mode or the equivalent, and that fixed my problem. The card is all passed through now; I just had to do a driver reinstall. Edit: unRAID OS boot USB.
  21. I too was having a similar problem. I tried both of SI1's videos and couldn't get either to work: with the easy way (TechPowerUp ROM) I got a Code 43 and 800x600 resolution when I tried to pass through the Zotac 1060 6GB by itself. I have an old GT250 that I had to blow the cobwebs off and put in the first slot, got the Win10 VM to boot with the 1060 passed through with no ROM, and it worked, so I then followed the how-to-dump-a-ROM video. Everything worked as I went along, but when I pulled the GT250 out and started the VM, all I got was a black screen. After a lot of reading on the forum and Reddit I found this little gem (change the unRAID boot USB from UEFI to legacy mode), rebooted, started the Win10 VM, reinstalled the Nvidia drivers, and I was away; it's all working now.
  22. I have been having a bit of drama trying to work out why my emails haven't been sent; it turns out it has been mangling my mail password. Three characters in the password were being replaced, so every time I tried to make a change to a cron job or stats to test mail, it failed the email authentication. I went into advanced.yaml only to see an upside-down question mark in the 3 places in the password. Not sure if anyone else has had this problem or if it's just the combo of characters in my password that causes it; apart from this one hiccup, great piece of software.
  23. I have just set up a fresh Ryzen box and I am getting the same thing; I can't install any docker containers. Error: /Apps/AddContainer?xmlTemplate=default:/tmp/community.applications/tempFiles/templates-community-apps/EmbyRepository/EmbyServer.xml: missing csrf_token. Edit: I just rolled back to 6.5.0 stable and tried some other containers again (Netdata & Krusader) and they have installed fine now.