emsbas

Members · 31 posts

Everything posted by emsbas

  1. I am having an issue with pools. I set up an APP-Storage pool to be used for Docker containers and VMs. The pool consists of 3 x Samsung 970 Evo NVMe drives. I want this pool to be RAID 5 or RAID 0 (not sure yet), but that isn't the issue. The issue is that when I go to the App-Storage pool drive, open Balance (BTRFS), and set it to convert to RAID 0, nothing changes. Is there a way to do it via terminal/SSH or via a config file? Also, the Erase button in the pool doesn't work...? Not sure. Thank you.
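For what it's worth, btrfs profile conversion can also be kicked off from the shell with a balance filter. A minimal sketch, assuming the pool is mounted at /mnt/app-storage (substitute your pool's actual mount point):

```shell
# Convert the data profile to RAID0 and keep metadata mirrored as RAID1
# on a mounted btrfs pool. /mnt/app-storage is an assumed path.
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/app-storage

# Watch progress, then confirm the new profiles once the balance finishes.
btrfs balance status /mnt/app-storage
btrfs filesystem df /mnt/app-storage
```

Note that RAID0 here trades away redundancy on the data chunks, which is why metadata is often left on RAID1.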
  2. Original Post I tried to do as described here and it did not work. I was also unaware that I was not boosting, which I confirmed by checking the CPU info in the terminal. I tried running `modprobe powernow_k8` in the terminal and it failed to load. I know I am probably doing something wrong. My CPU is a Ryzen 9 3900X. Thank you.
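The powernow_k8 module only covers K8-era AMD CPUs, which is likely why it refuses to load; a Zen 2 chip like the 3900X is normally handled by acpi-cpufreq (or amd_pstate on newer kernels). A quick sketch for checking what is actually driving frequency scaling:

```shell
# Show which cpufreq driver and governor the kernel is actually using
# for core 0 (all cores normally report the same driver).
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Watch the live per-core clocks to see whether the cores boost under load.
grep "cpu MHz" /proc/cpuinfo
```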
  3. It is over the network, but I am running 10Gb LAN (SFP+). The write speed to the cache is understandable since I am transferring from a USB 2.0 device, but the cache-to-array speed is what doesn't make sense.
  4. I call it a real-world speed test, lol. I am transferring data from the cache drive to the array, and that is where I noticed the speed difference. The file sizes are all roughly the same, 1.5 to 5 GB.
  5. So short of that, it is what it is. Unless somehow I can find an 8TB SSD for parity, lol.
  6. I am wondering: the 8TB Reds are 5400 RPM. Would I see a gain if I switched my parity drive to an 8TB Barracuda at 7200 RPM? Or even an 8TB 7200 RPM SAS drive?
  7. We would guess it should be better, though, lol. If not, I guess it's eBay for a slightly used 8TB SSD, lol.
  8. What about something like a compute drive as my parity drive, or a WD Black, something with a bit more muscle, instead of a WD Red NAS drive? Maybe even a 10K or 15K SAS drive?
  9. Any other options, given that 8TB of SSDs won't be fun? Anything else anyone can think up?
  10. Hmm, so RAID 0 with two 8TB Reds? That should give me better performance on my parity, maybe?
  11. So by the same logic, if I had 2 parity drives it would be even slower. I wonder if I should just back up data to another Unraid machine and use the slow parity there, then run this machine with no parity?
  12. Or just RAID 0 two 4TB drives, or four smaller ones? Rust platters? But doesn't Unraid dislike that, lol?
  13. Yeah, filling up is counterintuitive, I know. Unraid does parity, but the protection parity gives you is the option to only lose one drive's worth of the data that is on the array.
  14. FYI, Turbo Write increased my performance by about 40%, which is great. I wonder if there is a way to optimize it further?
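Turbo Write (reconstruct write) can also be toggled from the console rather than the GUI. A sketch, assuming the stock mdcmd path and that the value 1 selects reconstruct write, as it did on Unraid releases of this era; check your version's documentation before relying on it:

```shell
# Switch the array write method to reconstruct ("turbo") write.
# Assumed semantics: 0 = read/modify/write (default), 1 = reconstruct write.
/usr/local/sbin/mdcmd set md_write_method 1

# Revert to the default read/modify/write method.
/usr/local/sbin/mdcmd set md_write_method 0
```

The trade-off: reconstruct write spins up and reads every data drive on each write, so it is faster but keeps the whole array awake.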
  15. I will give Turbo Write a try; I have not implemented it. My thought process was that if I used the Most Free allocation method, Unraid would multi-thread the writes across the drives, allowing all of the drives to be used simultaneously. This would mimic striping, not bit by bit but file by file.
  16. Yes. What I am wondering is whether adding more drives to the array would make that number increase or decrease. What is happening is that because of this slow rate, my cache is filling up and causing me issues.
  17. I was wondering: I see that my array is writing at 10-14 MB/s. Is this normal? I know that when I was building my parity I was getting around 180 MB/s on all drives. At the time this was taken I was running the Mover service, moving files from my cache pool to the array. I have Unraid configured with the share allocation method set to Most Free instead of High Water, because I wanted the data shared equally across all of my drives. If I were to add more drives, should I expect this speed to go up? How can I increase the transfer rate?
  18. https://www.nvidia.com/object/docker-container.html Maybe.... Maybe....
  19. Makes perfect sense. Thank you.
  20. But I would still like to keep the redundancy of having them configured. Also, how would I configure this if, say, I wanted to put one NIC on the web (via static IP) and one NIC on the LAN?
  21. Yes, the 10Gb card has internet access. I am currently using it; all I have done is disconnect the other two network ports.
  22. Hello Unraid team, I wanted to see if there is a way I can perform some sort of network prioritization. My goal: I have 3 network cards installed in my Unraid server, set up in this order:
     ETH0 - 1Gb Intel NIC
     ETH1 - 1Gb Intel NIC
     ETH2 - 10Gb Mellanox
     I would like to make my Mellanox card ETH0, i.e. the primary NIC, and then fail over to the other NICs. Any ideas?
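Outside of the Unraid GUI (which exposes this through its own bonding settings), the usual Linux answer is an active-backup bond with the 10Gb port marked as primary. A generic iproute2 sketch, assuming the interface names eth0/eth1/eth2 from the post:

```shell
# Create an active-backup bond: only one port carries traffic at a time,
# the others are hot standbys.
ip link add bond0 type bond mode active-backup

# Enslave all three ports (links must be down before enslaving).
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set eth2 down && ip link set eth2 master bond0

# Prefer the Mellanox port; the Intel ports take over only if eth2
# loses link, and eth2 reclaims the active slot when it returns.
echo eth2 > /sys/class/net/bond0/bonding/primary
ip link set bond0 up
```

On Unraid specifically, the supported route is to enable bonding in Settings > Network Settings and pick active-backup there, so treat the above as the underlying mechanism rather than the recommended workflow.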
  23. Umm, call me a noob, but I don't see your sig. Any way you can post it here? Thank you.
  24. Hey guys, I am trying to pass through my GPU, which is a GT 1030. This is the error I am getting:

     internal error: qemu unexpectedly closed the monitor: 2018-05-20T01:04:29.640529Z qemu-system-x86_64: -device vfio-pci,host=08:00.0,id=hostdev0,bus=pci.0,addr=0x6: vfio error: 0000:08:00.0: failed to setup container for group 19: failed to set iommu for container: Operation not permitted

     I have already confirmed the GPU is in its own IOMMU group along with its audio device. I have also added "vfio_iommu_type1.allow_unsafe_interrupts=1" to my boot config, and that didn't help either. Any ideas, or am I asking for too much here?

     IOMMU group 0:
       [8086:3406] 00:00.0 Host bridge: Intel Corporation 5520 I/O Hub to ESI Port (rev 13)
     IOMMU group 1:
       [8086:3408] 00:01.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 1 (rev 13)
     IOMMU group 2:
       [8086:3409] 00:02.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 2 (rev 13)
     IOMMU group 3:
       [8086:340a] 00:03.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 3 (rev 13)
     IOMMU group 4:
       [8086:340b] 00:04.0 PCI bridge: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 4 (rev 13)
     IOMMU group 5:
       [8086:340c] 00:05.0 PCI bridge: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 5 (rev 13)
     IOMMU group 6:
       [8086:340d] 00:06.0 PCI bridge: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 6 (rev 13)
     IOMMU group 7:
       [8086:340e] 00:07.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 7 (rev 13)
     IOMMU group 8:
       [8086:340f] 00:08.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 8 (rev 13)
     IOMMU group 9:
       [8086:3410] 00:09.0 PCI bridge: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI Express Root Port 9 (rev 13)
     IOMMU group 10:
       [8086:3411] 00:0a.0 PCI bridge: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI Express Root Port 10 (rev 13)
     IOMMU group 11:
       [8086:343a] 00:0d.0 Host bridge: Intel Corporation Device 343a (rev 13)
       [8086:343b] 00:0d.1 Host bridge: Intel Corporation Device 343b (rev 13)
       [8086:343c] 00:0d.2 Host bridge: Intel Corporation Device 343c (rev 13)
       [8086:343d] 00:0d.3 Host bridge: Intel Corporation Device 343d (rev 13)
       [8086:3418] 00:0d.4 Host bridge: Intel Corporation 7500/5520/5500/X58 Physical Layer Port 0 (rev 13)
       [8086:3419] 00:0d.5 Host bridge: Intel Corporation 7500/5520/5500 Physical Layer Port 1 (rev 13)
       [8086:341a] 00:0d.6 Host bridge: Intel Corporation Device 341a (rev 13)
     IOMMU group 12:
       [8086:341c] 00:0e.0 Host bridge: Intel Corporation Device 341c (rev 13)
       [8086:341d] 00:0e.1 Host bridge: Intel Corporation Device 341d (rev 13)
       [8086:341e] 00:0e.2 Host bridge: Intel Corporation Device 341e (rev 13)
       [8086:341f] 00:0e.3 Host bridge: Intel Corporation Device 341f (rev 13)
       [8086:3439] 00:0e.4 Host bridge: Intel Corporation Device 3439 (rev 13)
     IOMMU group 13:
       [8086:342e] 00:14.0 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub System Management Registers (rev 13)
       [8086:3422] 00:14.1 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub GPIO and Scratch Pad Registers (rev 13)
       [8086:3423] 00:14.2 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub Control Status and RAS Registers (rev 13)
     IOMMU group 14:
       [8086:3a40] 00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 1
       [8086:3a44] 00:1c.2 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 3
       [8086:3a48] 00:1c.4 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 5
       [103c:3306] 02:00.0 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Slave Instrumentation & System Support (rev 04)
       [103c:3307] 02:00.2 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Management Processor Support and Messaging (rev 04)
       [103c:3300] 02:00.4 USB controller: Hewlett-Packard Company Integrated Lights-Out Standard Virtual USB Controller (rev 01)
       [14e4:1639] 03:00.0 Ethernet controller: Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
       [14e4:1639] 03:00.1 Ethernet controller: Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
       [14e4:1639] 04:00.0 Ethernet controller: Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
       [14e4:1639] 04:00.1 Ethernet controller: Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
     IOMMU group 15:
       [8086:3a34] 00:1d.0 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #1
       [8086:3a35] 00:1d.1 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #2
       [8086:3a36] 00:1d.2 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #3
       [8086:3a39] 00:1d.3 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #6
       [8086:3a3a] 00:1d.7 USB controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1
     IOMMU group 16:
       [8086:244e] 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
       [1002:515e] 01:03.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] ES1000 (rev 02)
     IOMMU group 17:
       [8086:3a18] 00:1f.0 ISA bridge: Intel Corporation 82801JIB (ICH10) LPC Interface Controller
     IOMMU group 18:
       [103c:323a] 05:00.0 RAID bus controller: Hewlett-Packard Company Smart Array G6 controllers (rev 01)
     IOMMU group 19:
       [10de:1d01] 08:00.0 VGA compatible controller: NVIDIA Corporation GP108 [GeForce GT 1030] (rev a1)
       [10de:0fb8] 08:00.1 Audio device: NVIDIA Corporation GP108 High Definition Audio Controller (rev a1)
     IOMMU group 20:
       [1033:0194] 0b:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
     IOMMU group 21:
       [15b3:6750] 0e:00.0 Ethernet controller: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0)
     IOMMU group 22:
       [8086:2c70] 3e:00.0 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture Generic Non-core Registers (rev 02)
       [8086:2d81] 3e:00.1 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture System Address Decoder (rev 02)
     IOMMU group 23:
       [8086:2d90] 3e:02.0 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 0 (rev 02)
       [8086:2d91] 3e:02.1 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 0 (rev 02)
       [8086:2d92] 3e:02.2 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 0 (rev 02)
       [8086:2d93] 3e:02.3 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 1 (rev 02)
       [8086:2d94] 3e:02.4 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 1 (rev 02)
       [8086:2d95] 3e:02.5 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 1 (rev 02)
     IOMMU group 24:
       [8086:2d98] 3e:03.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Registers (rev 02)
       [8086:2d99] 3e:03.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Target Address Decoder (rev 02)
       [8086:2d9a] 3e:03.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller RAS Registers (rev 02)
       [8086:2d9c] 3e:03.4 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Test Registers (rev 02)
     IOMMU group 25:
       [8086:2da0] 3e:04.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Control (rev 02)
       [8086:2da1] 3e:04.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Address (rev 02)
       [8086:2da2] 3e:04.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Rank (rev 02)
       [8086:2da3] 3e:04.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Thermal Control (rev 02)
     IOMMU group 26:
       [8086:2da8] 3e:05.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Control (rev 02)
       [8086:2da9] 3e:05.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Address (rev 02)
       [8086:2daa] 3e:05.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Rank (rev 02)
       [8086:2dab] 3e:05.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Thermal Control (rev 02)
     IOMMU group 27:
       [8086:2db0] 3e:06.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Control (rev 02)
       [8086:2db1] 3e:06.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Address (rev 02)
       [8086:2db2] 3e:06.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Rank (rev 02)
       [8086:2db3] 3e:06.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Thermal Control (rev 02)
     IOMMU group 28:
       [8086:2c70] 3f:00.0 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture Generic Non-core Registers (rev 02)
       [8086:2d81] 3f:00.1 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture System Address Decoder (rev 02)
     IOMMU group 29:
       [8086:2d90] 3f:02.0 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 0 (rev 02)
       [8086:2d91] 3f:02.1 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 0 (rev 02)
       [8086:2d92] 3f:02.2 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 0 (rev 02)
       [8086:2d93] 3f:02.3 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 1 (rev 02)
       [8086:2d94] 3f:02.4 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 1 (rev 02)
       [8086:2d95] 3f:02.5 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 1 (rev 02)
     IOMMU group 30:
       [8086:2d98] 3f:03.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Registers (rev 02)
       [8086:2d99] 3f:03.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Target Address Decoder (rev 02)
       [8086:2d9a] 3f:03.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller RAS Registers (rev 02)
       [8086:2d9c] 3f:03.4 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Test Registers (rev 02)
     IOMMU group 31:
       [8086:2da0] 3f:04.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Control (rev 02)
       [8086:2da1] 3f:04.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Address (rev 02)
       [8086:2da2] 3f:04.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Rank (rev 02)
       [8086:2da3] 3f:04.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Thermal Control (rev 02)
     IOMMU group 32:
       [8086:2da8] 3f:05.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Control (rev 02)
       [8086:2da9] 3f:05.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Address (rev 02)
       [8086:2daa] 3f:05.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Rank (rev 02)
       [8086:2dab] 3f:05.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Thermal Control (rev 02)
     IOMMU group 33:
       [8086:2db0] 3f:06.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Control (rev 02)
       [8086:2db1] 3f:06.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Address (rev 02)
       [8086:2db2] 3f:06.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Rank (rev 02)
       [8086:2db3] 3f:06.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Thermal Control (rev 02)
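On older platforms like this X58-era HP server, "failed to set iommu for container: Operation not permitted" is commonly caused by missing interrupt remapping support, which is exactly what the allow_unsafe_interrupts override is meant to work around, so it is worth verifying the option actually took effect. A diagnostic sketch:

```shell
# Check whether the platform reports interrupt remapping; its absence is
# the usual cause of this particular "Operation not permitted" error.
dmesg | grep -i -e DMAR -e IOMMU -e remapping

# Verify the override is actually active; a typo in the boot line
# leaves this at N and the error persists.
cat /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts

# Alternative to the kernel command line: set it at module load time.
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/vfio.conf
```

These are generic Linux checks, not an Unraid-specific fix; on Unraid the kernel parameter belongs on the append line of syslinux.cfg on the boot flash drive.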