Jessie

Everything posted by Jessie

  1. I'll answer my own question in case someone else encounters the same problem. Recently I built a machine on a Gigabyte X570 Gaming X board with a Ryzen 7 3700X. On that board you can change a setting in BIOS and the IOMMU groups work well enough in Unraid for USB/GPU passthrough without needing the ACS override, and it worked flawlessly. Then I tried a Gigabyte B550 Gaming X motherboard with the same processor and tried to pass through a Gigabyte GT 1030 GPU (no fans) using a GPU BIOS dump from TechPowerUp. Black screen. I've built a few machines using this GPU and that BIOS dump without problems on 2nd-gen Ryzen and Intel boards, so I blamed the B550 board. Then I built a machine on an X570 board: black screen again. So I dumped the BIOS from the actual card as per SpaceInvader One's tutorial and it worked. I then added another GT 1030 card and was able to create a second Windows VM complete with keyboard/mouse and USB passthrough. I am assuming the B550 failed for the same reason; I will do the experiment in the near future. Not really sure why the downloaded dump didn't work on the 3rd-gen hardware but did on the 2nd-gen builds. Maybe the firmware changed in the GT 1030 cards.
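     For reference, dumping the ROM from the card itself boils down to reading it out of sysfs while nothing is using the GPU; the tutorial automates this, but a rough sketch of the idea is below. The PCI address 07:00.0 and the output path are only examples from one of my boxes, so check your own card's address first.
        # find the card's PCI address (mine showed up at 07:00.0, yours may differ)
        lspci | grep -i vga
        # expose the ROM for that device, copy it out, then hide it again
        cd /sys/bus/pci/devices/0000:07:00.0
        echo 1 > rom
        cat rom > /mnt/user/isos/vbios/gt1030-dump.rom
        echo 0 > rom
        # if the read fails, the card is probably still bound to a driver or is the boot GPU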
  2. Did you ever resolve this? The only thing I can think of is to go back to basics. Turn off all dockers and excess VMs and see if it comes good, then introduce the dockers/VMs one by one and observe which one makes it fall over.
  3. Are the graphics cards identical? In the old days the graphics cards had to be different.
  4. Has anyone built Unraid on an X570 platform with 2 GPUs, 2 VMs etc. running simultaneously?
  5. Has anyone built an Unraid system on this board? I can't pass a Windows VM through to a GT 1030 graphics card: black screen. I'll answer my own question, to a point. I couldn't get it to pass through using a ROM BIOS file, so I went back to old school and used a second graphics card. That worked OK. Not happy though: it means that if I wanted to pass through a second VM, there are no slots left. Unless they fix it in BIOS, I'd say this is my last B550 build. X570 works OK. So the outstanding question is, has anyone passed through to the GPU using a single card and a ROM BIOS file? I'm yet to do the experiment, but I just built a system on a Gigabyte X570 Gaming X board and also had trouble with a GT 1030 card. I have previously used the same BIOS dump on a 2nd-gen Ryzen build and it worked fine. So on the X570 I extracted the BIOS from the card and it worked. Possibly this is the same issue on the B550 board. Will record here if it works.
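     As a pointer for anyone following along: once you have a ROM dumped from the card, the VM has to be told to use it. On my builds that just means filling in the Graphics ROM BIOS field on the GPU in the Unraid VM edit form (the path below is only an example); under the covers it ends up as a rom line inside the GPU's hostdev section of the VM XML.
        # keep the dump somewhere the VM manager can see it
        ls /mnt/user/isos/vbios/
        # the VM XML (viewable via the form's XML view) then gains something like:
        #   <hostdev mode='subsystem' type='pci' managed='yes'>
        #     ...
        #     <rom file='/mnt/user/isos/vbios/gt1030-dump.rom'/>
        #   </hostdev>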
  6. I'll throw in my 2 bob. If you primarily want to store a lot of data, I think it's a good option for a small business. You can expand the array by simply adding a disk; on other (non-Unraid) disk arrays you can't expand without a complete rebuild, else the only other option is to replace the original disks with larger ones. You can run a second parity disk. I normally don't worry about this on systems with say 6-8 drives, but it would give protection during a rebuild in the event of a second disk failure. All of the data is intact on the same disk, so in the event of a catastrophic failure the data can be retrieved intact from an individual disk. By comparison, lose more than one disk on a striped array and you lose the lot. If you do lose a disk, like other RAID arrays, the system will carry on deriving the data from the failed drive courtesy of the parity drive. If the parity drive fails, all of your data is available intact on the original disks but not protected until parity is replaced (unless you run two parity disks). Unraid is hardware agnostic: if the box blows up, you can move the drives and the USB stick to another, different box and carry on. My experience with those NAS-type devices is that if they fail, you can't just pull the drives out and retrieve the data; high risk of losing the lot even if the drives are intact. Maintenance requirements are low. Unraid just works (been using it since version 4.0). Access control to shares is very easily maintained. Replication would be nice; it can be achieved with sync dockers. It's a good platform for a Nextcloud server, and Nextcloud supports versioning, so that's a good one to mix in with the system, easily maintained through modem-to-modem IPsec links or OpenVPN, but that is a separate topic. It is also a good platform for VMs. I'm looking at replacing SBS servers, and looking at Nethserver as a VM on Unraid to give Active Directory functionality to 8-10 workstations. Haven't rolled that out yet.
  7. Point taken, but at the end of the day it is really about getting Nextcloud working. The fault could be in either. My gut feeling is the problem lies in the port 443 to 1443 translation.
  8. Had a problem like that a long time ago. Open appdata/letsencrypt/nginx/proxy.conf with Notepad++. Near the top there should be a line like # client_max_body_size 2048m; If it doesn't read 2048m, make it 2048m, or, as above, comment the line out with a #. I've found the maximum file size is 2GB when you drag files onto the web interface, but unlimited if the client does the transfer.
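     If you'd rather do it from the Unraid terminal, something like the following works. The path assumes the usual appdata location for the letsencrypt docker, so adjust it to suit, and restart the letsencrypt docker afterwards so nginx rereads the file.
        # see what the upload limit is currently set to
        grep client_max_body_size /mnt/user/appdata/letsencrypt/nginx/proxy.conf
        # set it to 2048m
        sed -i 's/^.*client_max_body_size.*/client_max_body_size 2048m;/' /mnt/user/appdata/letsencrypt/nginx/proxy.conf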
  9. In Unraid I set Use SSL/TLS to No (in Settings/Management Access). This means I access Unraid via the IP address and it stays fully enclosed within my network; to get to it from outside I use IPsec tunnels. This frees up port 443 for letsencrypt, so I pass port 443 through to letsencrypt intact. It might be interesting to see if that works for you, as it would prove beyond doubt whether port 80 is blocked. I still redirect port 80 to another port, e.g. 180. If port 80 is blocked, it is possible you might be able to unblock it by logging into your ISP's user area; in Australia iiNet block it, but you can optionally turn the blocking off. If you use DNS verification, port 80 is irrelevant. If the log said server ready, it sounds like it generated the certificate. It is important not to miss the step in SpaceInvader One's tutorial about "proxynet": after you create it, you need to point MariaDB, Nextcloud, letsencrypt and OnlyOffice to it in your docker settings rather than "Bridge". The letsencrypt proxy will allow you to run multiple servers through the same ISP IP address, e.g. multiple Nextcloud instances, Collabora, OnlyOffice and any other docker or VM which requires port 443. Otherwise you would require multiple ISP addresses.
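     For completeness, the proxynet network itself is just a user-defined docker bridge, so it can be created and checked from the terminal. One command to create it, one to sanity-check which containers have joined it afterwards (container names are whatever yours happen to be called):
        # create the custom network the dockers will share
        docker network create proxynet
        # after switching each container's Network Type from Bridge to proxynet in its template,
        # confirm letsencrypt, nextcloud, mariadb etc. all appear in the Containers section here
        docker network inspect proxynet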
  10. If you use the HTTP validation method, letsencrypt requires port 80 when it generates the certificates. Port 443 is used for normal communication, but no port 80 = no certificate.
  11. To me this looks OK. I've never used DNS validation though; is there any reason you can't use HTTP? What do the letsencrypt logs look like? Were the certificates generated? If they were and it still doesn't work, then blow the letsencrypt docker away, remove its files and reinstall a fresh one. Don't know why, but that works for me sometimes. Don't forget to forward ports 443 and 80 in the router (443 ext to 1443 int, and 80 ext to 180 int).
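     Two quick checks I find useful before blowing the docker away, both assuming the container is literally named letsencrypt:
        # the end of the log should say "Server ready" once the certificates were issued
        docker logs letsencrypt 2>&1 | tail -n 20
        # confirm the port translation is what you think it is (443 -> 1443, 80 -> 180)
        docker port letsencrypt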
  12. For what it's worth, I have the two lines below underneath 'overwriteprotocol' => 'https', in config.php:
        'dbtype' => 'mysql',
        'version' => '19.0.1.1',
  13. I'm in the same boat looking for a solution. I've run SBS servers for a long time and they are now redundant. I can't find any dockers that I could get to work, so I'm looking at running Nethserver as a VM on Unraid. It has an IMAP mail server that looks promising. Any thoughts?
  14. Has anyone had success with the Talk app? If so, how? I can get it to run on the local network, or at least I could a couple of Nextcloud versions ago; I don't think I can get it to run at all now. Also, how do you successfully get Talk to work across a WAN connection?
  15. So on version 19, if you log in, then close the window, then log in again, you get this! Surely this is not by design. The only way to get back to normal is to click on Files and the normal screen comes up. If you log out properly, it behaves normally.
  16. I just reinstalled the Android app on my phone and now it won't download my folders. Anyone experienced this? It says "No files here. Upload some content or sync with your device." The server is running 18.06.
  17. You're welcome. I found this regarding connecting 10Gb via a bridge; not sure whether it's relevant to your situation.
  18. IDE only allowed 2 drives per cable, a master and a slave, so you would need a motherboard with 3 IDE controllers. That goes back to the XP era, or maybe early Vista. The processors would be 32-bit, so current Unraid wouldn't run; maybe Unraid 4 or lower might go. Personally, if you wanted to use those drives, try building a Windows 95 machine (for the museum). It's harder than you think. I built one up last year to get data from an old tape drive which would only work on Win95. Found an old 386 chassis, put in a gig of RAM and a 200GB hard disk. No go. Had to reduce the RAM to under 500MB and use a drive smaller than 32GB. Once I had the data there was no USB support, so I had to find a later XP machine that supported IDE and copy it off the hard disk from that. Your drives might work on Win 98 or Windows NT (remember NT?). Not sure about the 500; it might be too large.
  19. If you are relying on the parity drive to emulate the disk and you lose another disk, the data is gone. If you are using 2 parity disks, you can afford to lose 2 disks; after that, data loss. In the end, for system integrity you really want to repair the array, otherwise the disks are effectively running unprotected. To further explain: on a 1-parity-disk system you can only afford to lose one drive at a time. If parity fails, you replace it and the array rebuilds it. If one of the array drives fails, parity calculates the contents of that drive using the info stored on the parity disk plus the data on all the other drives. So if you lose 2 data drives, parity doesn't know how to calculate the contents of the first failed drive because it can't read the data from the second failed drive. There is a good article on this on the website. The plus side is that the data is wholly intact on the remaining unfailed drives, so you would only lose the data on the failed drives; by comparison, on RAID arrays which use striping, more than 1 failed drive and you lose the lot. (I keep adding to this.) One reason to have 2 parity disks is that on a system with a lot of disks, there is a possibility of a drive failure during the rebuild of a failed disk, mainly because the drives are going to be hammered for 6 hours or so during the rebuild. It is possible some of those disks spend most of their time powered down, so an impending failure might not be noticed immediately. Personally I don't worry about a second parity unless there are more than say 12 drives in the system. I usually schedule a parity check once a month.
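     If it helps to see the arithmetic, here's a toy sketch of the single-parity idea using three one-byte "disks". It's not how Unraid literally lays data out, just the XOR relationship that makes a single rebuild possible and a double failure unrecoverable:
        # three data "disks", one byte each
        d1=0xA5; d2=0x3C; d3=0x0F
        # parity is the XOR of all data disks
        parity=$(( d1 ^ d2 ^ d3 ))
        # if disk 2 dies, XORing parity with the survivors gives its contents back
        rebuilt=$(( parity ^ d1 ^ d3 ))
        printf 'lost 0x%02X, rebuilt 0x%02X\n' "$d2" "$rebuilt"
        # lose two data disks and there are two unknowns in one equation, so nothing can be rebuilt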
  20. Sounds like the 10g adapter is not releasing itself when you shut down the VM. What if you made the 10g adapter the Unraid adapter, then bridged that through to the VM? Here's a link to what might be a similar scenario.
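     A rough sketch of what I mean, assuming the 10g NIC shows up as eth1 and gets bridged (say as br1) under Settings > Network Settings; the names are examples only.
        # check which physical ports are members of which bridge on the unraid side
        brctl show
        # the VM then just uses the bridge instead of grabbing the card itself, i.e. its
        # network section in the XML ends up looking something like:
        #   <interface type='bridge'>
        #     <source bridge='br1'/>
        #     <model type='virtio'/>
        #   </interface>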
  21. On bootup, do a memory test. Any errors, change memory.
  22. If a drive fails, you lose your array protection. I reckon I would weed out the suspect drives, move the data onto the good drives and reconfigure the array so it is reliable. If the motherboard can handle IOMMU passthrough, put a Windows VM onto it and pass a GPU through to it; then you are running a system with your data protected. Windows will run faster on SSDs: a couple of SSDs for cache, and run Windows on that. There should be extra space on the cache for the rest of Unraid, e.g. use a pair of 500GB SSDs for the cache pool, use 200GB of that for a Windows VM, and use a share on the array for bulk data. The rest of the SSD will handle data transfer to the array. I don't think hard drive types matter that much, although standard drives might run hotter, so cooling matters.
  23. To answer my own question (and in case anyone wants to know): yes, it worked very well, with the exception of passing through the onboard sound controller. I googled that issue and it appears a future kernel update will rectify the problem. Someone rebuilt a 6.8.1 kernel and fixed it, but I won't be going down that track. The hardware was as mentioned above, plus 2 x 4TB Western Digital Reds for the array. I chose Gigabyte for the motherboard because they reckon the chipset fans use high quality bearings. Not keen on chipset fans, but they are compulsory on X570 chipsets; I'm hoping the fan will outlast the motherboard. I noticed the fan spends a lot of time "off", however it is winter here and temps will hit 46 degrees in summer, so I expect it will be busier then. I put it into an Antec P101 Silent case because it has 8 conventional hard drive bays plus 2 x 2.5" SSD bays, enough room to fully expand the system. Power supply is a Cooler Master MasterWatt 650. I have used Cooler Master Silencio 550 cases on other builds, but that case is discontinued, and I felt the new series of Cooler Master cases do not suit Unraid: not a lot of 3.5" slots, and they are spread randomly around the case (messy). I'm also not a fan of mounting mechanical drives vertically as the new Cooler Master cases require. So the system as it stands can handle 6 SATA drives = 20TB of protected storage on the array using 4TB drives, plus 2 x M.2 drives for the cache. Graphics is a Gigabyte GTX 1650 passed through to the Windows VM with the addition of the BIOS file. The motherboard has a feature in BIOS which optimises the IOMMU groups for VM work, which meant it was not necessary to turn on ACS override in Unraid. I pasted the IOMMU groupings below, before and after. I passed the GPU and its sound adapter through via the normal method, and passed most of the USB controllers (minus the one hosting the Unraid flash drive) using the VFIO-PCI Config plugin. This allowed hot plugging of all 4 USB jacks on the front of the case plus a couple on the back for Windows. It will initially act as a Windows workstation with protected storage and Nextcloud/letsencrypt/Collabora dockers running in the background. It might end up running a pfSense VM and OpenVPN (either from pfSense or maybe the docker) for remote access, and possibly a media server docker to stream home movies to TV devices. To automate things a bit, I set the BIOS to turn the machine on in the morning and the Dynamix S3 Sleep plugin to shut it down at night. This allows access to the cloud server during the day and saves a bit of power overnight.
     Gigabyte X570 Gaming X IOMMU groups, standard (before)
     PCI Devices and IOMMU Groups
     IOMMU group 0:
       [1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
       [1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
       [1022:57ad] 01:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57ad
       [1022:57a3] 02:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a3
       [1022:57a4] 02:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
       [1022:57a4] 02:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
       [1022:57a4] 02:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
       [10ec:8168] 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 16)
       [1022:1485] 04:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
       [1022:149c] 04:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
       [1022:149c] 04:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
       [1022:7901] 05:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
       [1022:7901] 06:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
     IOMMU group 1:
       [1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 2:
       [1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
       [1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
       [10de:1f82] 07:00.0 VGA compatible controller: NVIDIA Corporation TU117 [GeForce GTX 1650] (rev a1)
       [10de:10fa] 07:00.1 Audio device: NVIDIA Corporation Device 10fa (rev a1)
     IOMMU group 3:
       [1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 4:
       [1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 5:
       [1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 6:
       [1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
     IOMMU group 7:
       [1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 8:
       [1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
     IOMMU group 9:
       [1022:1484] 00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
     IOMMU group 10:
       [1022:1484] 00:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
     IOMMU group 11:
       [1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
       [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
     IOMMU group 12:
       [1022:1440] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0
       [1022:1441] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1
       [1022:1442] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2
       [1022:1443] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3
       [1022:1444] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4
       [1022:1445] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5
       [1022:1446] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6
       [1022:1447] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7
     IOMMU group 13:
       [1022:148a] 08:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
     IOMMU group 14:
       [1022:1485] 09:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
     IOMMU group 15:
       [1022:1486] 09:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
     IOMMU group 16:
       [1022:149c] 09:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
     IOMMU group 17:
       [1022:1487] 09:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller
     IOMMU group 18:
       [1022:7901] 0a:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
     IOMMU group 19:
       [1022:7901] 0b:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
     CPU Thread Pairings: Pair 1: cpu 0 / cpu 8, Pair 2: cpu 1 / cpu 9, Pair 3: cpu 2 / cpu 10, Pair 4: cpu 3 / cpu 11, Pair 5: cpu 4 / cpu 12, Pair 6: cpu 5 / cpu 13, Pair 7: cpu 6 / cpu 14, Pair 8: cpu 7 / cpu 15
     USB Devices:
       Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
       Bus 001 Device 002: ID 2516:0051
       Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
       Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
       Bus 003 Device 002: ID 046d:c52b Logitech, Inc. Unifying Receiver
       Bus 003 Device 003: ID 048d:8297 Integrated Technology Express, Inc.
       Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
       Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
       Bus 005 Device 002: ID 0781:5571 SanDisk Corp. Cruzer Fit
       Bus 005 Device 003: ID 0557:7000 ATEN International Co., Ltd Hub
       Bus 005 Device 004: ID 0557:2213 ATEN International Co., Ltd CS682 2-Port USB 2.0 DVI KVM Switch
       Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
     SCSI Devices:
       [0:0:0:0] disk SanDisk Cruzer Fit 1.00 /dev/sda 8.00GB
       [5:0:0:0] disk ATA Samsung SSD 860 3B6Q /dev/sdb 500GB
       [7:0:0:0] disk ATA WDC WD40EFAX-68J 0A82 /dev/sdc 4.00TB
       [11:0:0:0] disk ATA WDC WD40EFAX-68J 0A82 /dev/sdd 4.00TB
       [18:0:0:0] disk ATA Samsung SSD 860 3B6Q /dev/sde 500GB
     ACS enabled on motherboard (after)
     PCI Devices and IOMMU Groups
     IOMMU group 0:
       [1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 1:
       [1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
     IOMMU group 2:
       [1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 3:
       [1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 4:
       [1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
     IOMMU group 5:
       [1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 6:
       [1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 7:
       [1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 8:
       [1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
     IOMMU group 9:
       [1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 10:
       [1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
     IOMMU group 11:
       [1022:1484] 00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
     IOMMU group 12:
       [1022:1484] 00:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
     IOMMU group 13:
       [1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
       [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
     IOMMU group 14:
       [1022:1440] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0
       [1022:1441] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1
       [1022:1442] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2
       [1022:1443] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3
       [1022:1444] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4
       [1022:1445] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5
       [1022:1446] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6
       [1022:1447] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7
     IOMMU group 15:
       [1022:57ad] 01:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57ad
     IOMMU group 16:
       [1022:57a3] 02:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a3
     IOMMU group 17:
       [1022:57a4] 02:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
       [1022:1485] 04:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
       [1022:149c] 04:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
       [1022:149c] 04:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
     IOMMU group 18:
       [1022:57a4] 02:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
       [1022:7901] 05:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
     IOMMU group 19:
       [1022:57a4] 02:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
       [1022:7901] 06:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
     IOMMU group 20:
       [10ec:8168] 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 16)
     IOMMU group 21:
       [10de:1f82] 07:00.0 VGA compatible controller: NVIDIA Corporation TU117 [GeForce GTX 1650] (rev a1)
       [10de:10fa] 07:00.1 Audio device: NVIDIA Corporation Device 10fa (rev a1)
     IOMMU group 22:
       [1022:148a] 08:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
     IOMMU group 23:
       [1022:1485] 09:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
     IOMMU group 24:
       [1022:1486] 09:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
     IOMMU group 25:
       [1022:149c] 09:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
     IOMMU group 26:
       [1022:1487] 09:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller
     IOMMU group 27:
       [1022:7901] 0a:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
     IOMMU group 28:
       [1022:7901] 0b:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
     CPU Thread Pairings: Pair 1: cpu 0 / cpu 8, Pair 2: cpu 1 / cpu 9, Pair 3: cpu 2 / cpu 10, Pair 4: cpu 3 / cpu 11, Pair 5: cpu 4 / cpu 12, Pair 6: cpu 5 / cpu 13, Pair 7: cpu 6 / cpu 14, Pair 8: cpu 7 / cpu 15
     USB Devices:
       Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
       Bus 001 Device 002: ID 2516:0051
       Bus 001 Device 003: ID 046d:c52b Logitech, Inc. Unifying Receiver
       Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
       Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
       Bus 003 Device 002: ID 048d:8297 Integrated Technology Express, Inc.
       Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
       Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
       Bus 005 Device 002: ID 0781:5571 SanDisk Corp. Cruzer Fit
       Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
     SCSI Devices:
       [0:0:0:0] disk SanDisk Cruzer Fit 1.00 /dev/sda 8.00GB
       [5:0:0:0] disk ATA Samsung SSD 860 3B6Q /dev/sdb 500GB
       [7:0:0:0] disk ATA WDC WD40EFAX-68J 0A82 /dev/sdc 4.00TB
       [11:0:0:0] disk ATA WDC WD40EFAX-68J 0A82 /dev/sdd 4.00TB
       [18:0:0:0] disk ATA Samsung SSD 860 3B6Q /dev/sde 500GB
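     One check worth doing before binding USB controllers with the VFIO-PCI Config plugin is working out which controller the Unraid flash drive itself hangs off, so that one stays with the host. The bus number below is from my box; yours will differ.
        # the flash drive (Cruzer Fit) shows which USB bus it lives on
        lsusb | grep -i cruzer
        # resolving that bus number back through sysfs reveals the PCI address of its controller;
        # leave that controller alone and bind the others
        readlink -f /sys/bus/usb/devices/usb5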
  24. Thanks for that suggestion. I just checked the manual again: it will support M.2 SATA and PCIe 4.0 x4/x2. There is no mention of disabled SATA ports, so I presume they operate independently. I ordered the parts yesterday, so I will find out the hard way if I'm wrong. The other thing I am a little concerned about is hardware passthrough to the graphics card. This will be my first 3rd-gen Ryzen build. I have built other machines on 2nd-gen Ryzen platforms using X370 motherboards, and passthrough would fail if a BIOS supporting 3rd-gen Ryzen was loaded; otherwise they were very stable.
  25. I am planning a system around a Ryzen 3700X processor, a Gigabyte X570 Gaming X motherboard and 32GB of RAM. The question is, can I use the 2 x M.2 slots with Samsung 860 EVO 500GB M.2 SSDs for the cache? Or should I play it safe and use conventional SSDs on the SATA ports? The board only has 6 SATA connectors and I would like to reserve them for the array.