Everything posted by Decto

  1. CSM - Compatibility Support Module (BIOS setting) - when enabled, this may force the board to boot from a chipset GPU. The x4 slot is connected to the PCH (chipset). PCI-E 2.0 x4 is enough for 8 HDDs, and PCI-E 3.0 x4 is enough for 16 HDDs with no loss in speed. Could you use an x4 to x8/x16 riser for the ATA card, with the GT710 in the second slot, assuming your ATA card is half height?
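     Rough arithmetic behind that bandwidth claim, assuming ~250 MB/s sustained per modern HDD and the usual per-lane rates (~500 MB/s per PCI-E 2.0 lane, ~985 MB/s per PCI-E 3.0 lane):
     8 HDD x ~250 MB/s = ~2,000 MB/s ≈ 4 x 500 MB/s (PCI-E 2.0 x4)
     16 HDD x ~250 MB/s = ~4,000 MB/s ≈ 4 x 985 MB/s (PCI-E 3.0 x4)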
  2. Reading this thread, I don't think you have yet established how this was caused. SAS data cable voltages are ~1V, so it would be very surprising if that caused damage during connection. In my experience, the most probable cause is that the polarity on the 12V PCI-E cable was reversed. Until you identify how this was caused, you are at risk of repeating the problem. One suspect is the 8-pin motherboard (CPU) cable being used in error rather than the PCI-E cable, since these are wired with opposite polarity. The keying should prevent a wrong connection, but cheap mouldings may allow a plug to be force connected.
     With the PCI-E cable disconnected from the cages and the PSU powered on (PC running):
     Set the multimeter to DC voltage.
     Black probe to a black cable.
     Red probe (from the meter) to a yellow cable.
     You should get a reading of +12V; check it is not -12V.
     As the drives do not seem to have a short circuit on the input voltage (from the meter readings)... I see you confirmed that 1TB drives spin up and that you have used the cages before, but I didn't see you confirm that the 'damaged' drives won't spin up when connected to Molex (e.g. with that cable). The reason I check this is that the backplane would not supply +3.3V, so you would have no issues using it, whereas a SATA power cable from your PSU will carry the 3.3V and stop the drive from powering up when connected directly. The cable above looks odd as I can't see the pins for the red and black on the front edge, but you should definitely check by powering up the disks via Molex before anything more destructive. For now just see if they spin up, no data connection. Good luck.
  3. How are the fans connected? Your motherboard supports PWM fan control if enabled in the BIOS and the fans are connected to the fan headers. There is only 1 header for the CPU and 1 header for case fans. If the fans are connected directly to the PSU, you either need to buy a fan controller, a low noise adapter (basically a resistor that connects in line with the cable), a fan with a lower speed, or switch to PWM (4-pin) fans. I've used ARCTIC P14 PWM PST (140mm) and ARCTIC P12 PWM PST (120mm) fans as these allow you to piggyback a connector onto a single header.
  4. Hi, performance will be good for Plex/Emby, though unless size is really an issue I'd go for at least an mATX build as you will have more expansion options later, e.g. the Node 804... other brand cases are available. Space disappears quickly when you start adding media. Also, for HDDs, the parity drive size sets the maximum for the other drives. With only 4 SATA connectors I'd be looking at at least a 12TB parity drive and a second drive to start, then add the extra drives as required. The cost per TB is fairly similar, you just have to soak up the cost of the larger parity drive first off. While you can add a SATA card into the PCI-E slot, going with larger drives initially pushes that need out quite a way. If you did go for a build with more options (mATX), I'd still start with at least 8TB drives. There is no throughput advantage to more drives and you can easily add additional drives to the array, unlike classic RAID systems.
     For power consumption, the Intel CPU will idle at a few watts; with drives spun down and a decent quality, not oversized PSU (as you have) I'd be expecting less than 30W idle for the full system. Low power quickly becomes a zero-sum game where the power savings don't cover the additional hardware cost over its lifetime.
  5. Hi and welcome. Some thoughts. You seem set on a new motherboard, however a 2-port ASMedia SATA card is only ~$10, while a used SAS controller with cables for 8 additional drives would be around $50 and work fine in an x4 slot. That would get you going at minimal cost, and what you learn from the setup will help you pick an upgrade. Due to how Unraid works, you basically swap the drives and the USB and you're up and running again, so hardware changes aren't usually a problem as long as the new hardware is supported. The Asus WS Pro X570 may be an option as it has 2 x PCI-E 4.0 x8 and 1 x PCI-E 3.0 x8, which gives more expansion.
     If a main use is Plex then you ideally need an Nvidia GPU to transcode as needed with your AMD CPU. You can use the CPU cores, but if it comes to 4K transcoding you will soon eat into them. Even an X570 board can be quite limited in expansion if you want multiple GPUs and a SAS controller. Often an Intel CPU with a built-in iGPU is a good option as it can do all the transcodes, which leaves plenty of free expansion options. A B365 board with a quad core or better chip would be a decent starter system for Unraid and a good number of Dockers etc., though you need to avoid the 'F' SKUs with no iGPU. What would you use the Ryzen 9 for? More cores is nice, but you soon run out of expansion slots for VMs etc., in which case different hardware might be more suited.
  6. Guess which one is the air drive, that's correct... the EDAZ. The second drive down is right next to it so runs a little hotter. The coolest drive is in a separate enclosure with better cooling. Currently rebuilding (replacing) the bottom drive, so all drives have been active for hours.
  7. The Marvell controllers are known to have some issues so are best avoided for the array. You don't want the cache dropping offline either, though it may be OK to use Marvell for unassigned devices when you are copying data, as dropping a disk won't put the array offline, trigger rebuilds etc.
     Transfer into the array is mostly limited to the speed of the slowest disk, and there is also a parity calculation overhead. Transfer speed isn't usually more than Gbit speeds even if you add the disk as unassigned, though it will be as fast as possible and you aren't tying up your local machine for 100 hours! You can always set up without the 10G cards, load the data via unassigned devices, and then if you find network speed is an issue, add in the 10G card later. A supported card will just start working on boot if it has the network cable in. One common tip is not to enable parity until you have copied all the initial data across. As it's a copy, you have a backup. This will give you full disk speed early on and the slower speed later.
     It depends what you are doing with a VM. You don't need a graphics card at all if you plan to access it via RDP or VNC etc.; a display will be emulated. This falls down if you try to play video in the VM, but for configuration, web browsing etc. it's fine, especially over a LAN. If you want to use the VM as a second PC with a monitor, keyboard and mouse, you then need a discrete GPU, though you may be able to use the iGPU. The hybrid is 'streaming': this uses clients such as Moonlight, Parsec or Steam to encode the desktop view as a 60fps+ video stream while also connecting your keyboard and mouse back to the machine. This would allow you to play a game, watch video etc. on the VM remotely. The GTX 6xx cards are the earliest GPUs supported for this, however later GPUs have better encoders so will give you better stream quality etc. You can game to an extent on the Quadro, and the P2000 is probably similar to the GTX 680 as it's based on the GTX 1060, however if you have linked it to the Nvidia Plex build, then it isn't available for passthrough to the VM, hence the need for the second GPU.
     If your new PC is already running, jump in and give it a go. The 30 day trial can be extended twice by 15 days, and if you are still experimenting, you just need another $5 flash drive to start over; all the files on the array will still be there and the array can be mounted on a new install, so the time consuming copy isn't wasted. Good luck.
  8. You have an impressive collection of random flash drives. A quick Google of your board and USB boot indicates it was a bit of a challenge. A few of the things I read were around plugging a drive in part way through the boot, e.g. just after the PCI device detection, flashing an earlier BIOS version, or even a mod to the EFI files. As you had an extremely slow boot on one drive and another drive that hangs at GRUB, it's likely a board/BIOS issue rather than an Unraid issue. Note you can enable EFI in the Unraid flash tool by selecting Customise. Almost the same board: https://forums.tomshardware.com/threads/ga-ep45-ud3r-bootable-usb-hangs-pc-booting.2450967/ and https://www.ixsystems.com/community/threads/install-freenas-9-3-in-usb-drive-that-wouldnt-boot.30203/ Good luck on your quest.
  9. A couple of thoughts. 6 of the 8 SATA ports on the motherboard are good; I wouldn't use the 2 Marvell-based ones. With the 8 ports on the SAS card you should be good for hard drives for a while, at which point you may be thinking of hardware upgrades. ASMedia 2-port controllers also work fine in a PCI-E x1 slot, so you have 14-16 drive connections to get you started. The SAS card would be fine with 8 drives in a PCI-E 2.0 x4 slot.
     10G isn't that useful in a streaming / low user count setting other than for quick flash to flash transfers. With how Unraid is designed, the read/write speed is limited to that of a single disk, slower for writes due to parity calcs. Typically you're capped around Gbit speeds during writes anyway because of the parity calcs. Teamed Gbit is usually fine for reads, which run at the normal disk speed. A dual Intel LAN card in a PCI-E x1 slot would be an option. If slots are at a premium, I'd compromise on network transfer rate since most of the traffic/downloads etc. are internal to the server. You can always mount drives using 'unassigned devices' outside the array via SATA or USB for much quicker transfers.
     The encoder in the iGPU of old chips isn't great as it doesn't support many of the modern formats. If you pass through the P2000 to Plex, it can serve double duty as the Unraid display and Plex encode/decode (if you have Plex Pass). Same issue with reverting to the GTX680: you will need to be picky about formats or do quite a bit of offline CPU transcoding for your library. You don't need a second GPU for a VM unless you either need a display output or intend to 'stream' a game from the server. With the older, core-limited CPU, streaming options will be fairly limited.
     With the board you have, IO options are limited. Either now or in the future you could sell off the Quadro and CPU/mainboard/memory and replace them with a B365 board and a quad core or 6 core CPU... or whatever the modern equivalent is. The modern iGPU would be fine for Plex etc. and you would have reasonable IO for expansion; the B365 boards expose more PCI-E lanes. Probably only a bit of beer money in it. Good luck.
  10. If that is sufficient performance for the VM, then yes, use the one CCX, otherwise use 2 or 3 complete CCXs. Ideally you want to minimise the number of CCXs for the number of cores you want to pass. However, the performance hit shouldn't be as significant on Zen 2 as on the early Threadrippers due to the design changes, where all CCXs now connect equally to the IO die. Using cores across CCXs shouldn't cause any crashes but may result in stutter or reduced FPS etc.
  11. While the 3900X has 2 dies, it has 4 CCXs, and each CCX communicates with the IO die separately, so ideally you want to avoid spreading a VM across lots of CCXs. I assume you would be able to confirm this in Ryzen Master. From the image above, it appears you have 4 CCXs of 3C/6T:
     Die 1, CCX 1: 0-2, 12-14
     Die 1, CCX 2: 3-5, 15-17
     Die 2, CCX 1: 6-8, 18-20
     Die 2, CCX 2: 9-11, 21-23
     Also make sure to install the Tips and Tweaks plugin and set performance mode for gaming.
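     For illustration only, assuming you wanted to pin Die 1, CCX 1 (cores 0-2 plus their HT siblings 12-14) to a 6-thread VM, the CPU pinning section of the VM XML would look something like this (the vcpu count and core numbers are examples, not a recommendation):
     <vcpu placement='static'>6</vcpu>
     <cputune>
       <!-- pair each vcpu with a physical core and its HT sibling, all inside one CCX -->
       <vcpupin vcpu='0' cpuset='0'/>
       <vcpupin vcpu='1' cpuset='12'/>
       <vcpupin vcpu='2' cpuset='1'/>
       <vcpupin vcpu='3' cpuset='13'/>
       <vcpupin vcpu='4' cpuset='2'/>
       <vcpupin vcpu='5' cpuset='14'/>
     </cputune>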
  12. Intel controllers are often grouped as you see. Group 18 is the xHCI controller (USB 3.0); this is backwards compatible with USB 2.0 and will be linked to the 4+ ports. Groups 21 and 26 are EHCI controllers (USB 2.0), usually 2 ports per controller. Usually the EHCI and xHCI controllers are all connected to the same group of ports. If you don't need USB 3.0, disabling it (if possible) may allow you to pass through the two USB 2.0 controllers individually. You can experiment with enabling/disabling xHCI and EHCI hand-off in the BIOS; motherboards behave differently, so there is no magic preset solution. That may remove the requirement for one of the cards. It looks like the PCI card is driven from the chipset by a PCI-E to PCI bridge; the card then reverses that with a PCI to PCI-E bridge, which then interfaces with a PCI-E USB 3.0 controller. An old native PCI to USB 2.0 controller may be a better bet if you can find one, to avoid the additional bus conversions.
  13. See the Unraid Nvidia config here. 1 GPU should be fine to boot Unraid and service transcoding in Plex, with a separate GPU for the VM. For USB passthrough you may need to get a PCI-E card based on the Fresco Logic FL1100 chipset, depending on how the integrated USB is configured on the board. My Plex just sits on my cache drive, a standard SSD, and I haven't noticed any performance issues. You should be able to pass through an NVMe drive to Windows for best performance. Water cooling seems like an unusual choice; none of the parts require significant cooling, and you will need good airflow through the case to keep the hard drives cool anyway, so there will be plenty of air to support an air cooler on the CPU and GPUs.
  14. If you take a backup of your flash drive, you can always restore it using the USB creation tool and the backup, so changes can be undone easily.
  15. USB passthrough is very fickle. As above, are the other devices in IOMMU group 32 part of the card? If so, pass them all through. If not, it may be worth moving the Unraid USB drive to a port in IOMMU group 32 and then passing through group 33 to the VM as it seems to be on its own. Unfortunately this generation of chipset doesn't implement IOMMU very well, so results can vary and the IOMMU separation shown may not be reality. I have a pair of ASMedia USB controllers with separate physical chips on the mainboard connected by PCI-E, however I can only pass them through together, and even then they are not reliable, working fine with HDDs but not a keyboard or mouse. The Intel integrated controller is fine to pass through.
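     For reference, a minimal sketch of what passing that group 33 controller through by PCI address looks like in the VM XML; the bus/slot address below is made up, so substitute the one your System Devices listing shows for that group:
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <!-- hypothetical PCI address of the USB controller in group 33; replace with your own -->
         <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
       </source>
     </hostdev>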
  16. For the array, SSDs are not recommended. TRIM is not supported, and the main risk is that the drives do some 'data management' which invalidates the parity. Ideally the array will be spinning disks; this is usually fast enough to max out a gigabit connection, so not an issue. If you use the latest beta you can create a second pool of SSDs in addition to the cache, so that may meet your needs. Unraid is not backup... it is for availability and convenience. You need a backup for anything you can't replace.
  17. Hi, there is little to go on in your post, as the XML doesn't actually contain much in the way of useful data. It may be that you don't have VT-d (or the AMD equivalent) enabled to properly pass through hardware, or your IOMMU groups may not be configured correctly. I'd start with those.
  18. A UPS is recommended for Unraid so that in the event of an outage you get a clean shutdown. My APC brand UPS was ~£60 / $75. You only need a few minutes of run time for an automatic clean shutdown. Most will give you a power draw reading; just make sure it supports the APC plugin (via USB). Alternatively you can pick up a power meter that plugs into the wall fairly cheaply (£10 / $12), which can be useful to see what uses the most power. Off topic, but my old fridge freezer is a power hog, using nearly 3x the combined power of a later fridge and freezer with more space. It's on the replacement list now! APC plugin in Unraid: note this is power at the wall, so at 90% efficiency it means real consumption of 90W.
     As for your power supply, the calculator recommends 590W, which has a fair overhead to keep you in the efficiency sweet spot. Realistically your PC will be peaking around 350-400W. I'm only at ~400W with a 3700X and Vega 64, so I'd leave it as is for now. If you add another GPU then I'd look at an upgrade.
  19. Hi and welcome.
     1) To an extent, 8 drives is marketing. If you have enough drives spinning with the same workload you could get a resonance which impacts performance / error rate, which enterprise drives can better mitigate, however it really depends how they are installed (often with damping) and accessed etc. Personally I ignore any such 'advice', and with Unraid you can have as many devices connected as you like up to your licence limit. You don't need any RAID-specific features with Unraid; desktop HDDs are fine. It is common to shuck (remove) drives from external enclosures as this can be quite a bit cheaper... though it may impact warranty depending on region.
     2) See 1.
     3) Seems all larger WD drives are actually 7200rpm despite claiming 5400rpm-class performance. Note that all larger drives tend to run warm and need decent cooling. You need airflow across the drives, so make sure your case is suitable. WD > 8TB, HGST > 8TB, Seagate IronWolf, IronWolf Pro, Toshiba.
     4) Onboard SATA is fine. An SSD cache drive is recommended, so that leaves 4 ports for storage after parity. An ASMedia 2-port PCI-E x1 card is fine; if you have an x4 slot then a card based on the JMB585 chip will give you 5 ports but is expensive, so a SAS controller flashed to IT mode would be better for 8 ports. Early on though, I'd just get the ASMedia for $10 and replace some of the smaller drives with something equal to the parity size. Use them for backup, cache or unassigned devices if you want to.
     5) Generally Unraid is hardware agnostic; at worst, if the SATA controller impacts drive naming, you would need to do a new config but you retain all the data. With single parity you just need to be really sure which the parity drive is. A post in the general forum would get you guidance for this; where data is concerned it's better to ask. Given your existing disks, 8TB seems like a good parity size, and there is a parity swap upgrade procedure you can run later if you outgrow 8TB drives. If you are putting critical files on the server make sure you have a backup strategy. RAID / Unraid is not backup... it is availability and convenience.
  20. Hi, the manual suggests you should be able to select USB-HDD on the advanced tab. My Gigabyte Z68 has been retired for a while, so I don't remember if there was an option to select a specific USB device to boot from, or if it was just USB-HDD and then let it boot. It may be that you can select USB-HDD and then pick the drive in Hard Disk Boot Priority. I'd stick to the back plate ports for now, disable the full screen logo and the second and third boot devices, then see what is reported at POST. Gigabyte also used to use CTRL+F1 for some more advanced settings, so there could be something hidden. USB boot on older devices can be fickle; not all USB flash drives support boot, and larger drives are often not properly supported, especially on older hardware. E.g. 16GB USB drives boot fine in all my systems (back to Z68), but 32GB drives from Samsung, Kingston and SanDisk all fail to boot regardless of OS. On a much later board, I can only get 32GB drives to boot in EFI; they won't boot in classic BIOS, while 16GB versions of the same drives boot fine as EFI or classic. Ideally test with a 1-16GB USB 2.0 drive from Kingston or SanDisk.
  21. Why not create a second VM with the alternate hardware that points to the same disk/vdisk? As long as you shut down one before starting the other, it would work and Windows would just reconfigure itself. I don't know of a way to hot swap PCI-E devices; you wouldn't do that with a bare metal install either.
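     A minimal sketch of the disk section the two VM definitions would share (the vdisk path here is hypothetical, use whatever your existing VM points at); everything else about the two VMs can differ:
     <disk type='file' device='disk'>
       <driver name='qemu' type='raw' cache='writeback'/>
       <!-- hypothetical path; both VMs reference the same vdisk, so only ever run one at a time -->
       <source file='/mnt/user/domains/Windows10/vdisk1.img'/>
       <target dev='hdc' bus='virtio'/>
     </disk>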
  22. Some motherboards will not boot without a GPU. You can use any PCI-E GPU; I have a few GT710s about that I picked up for less than $15 on eBay, and a GT210 or similar can be had for $10 or less. Any cheap fanless card is fine, with idle power consumption around 5W, and you never know when you need to access the BIOS or see a boot screen error. If the board will boot without a GPU then you don't need one at all, unless you need to get into the BIOS. Just use the Customise and Static IP options in the USB creation tool to assign a fixed IP.
  23. Hi, note that VMs are not interchangeable across BIOS types; you may get away with a machine switch from i440fx to Q35. For this type of troubleshooting, it is often best to create a new dummy VM of each type, with no drives, and just see if you can pass through to a BIOS-type screen. Swapping between types just creates more likelihood of a fail. SeaBIOS should take you to a classic BIOS-type screen with a 'no boot device' message, whereas OVMF (EFI) should drop you into an EFI shell. If you don't get that far then something is not configured. Check that IOMMU and HVM are enabled. There can be clashes between the EFI BIOS on the motherboard and on the GPU, so it can be worth flipping CSM (Compatibility Support Module) in the BIOS, if available, to test both settings.
     Looks to me like 2 of the 4 GPU devices are on the wrong bus... though it may not be the cause. This assumes it's a 1660 or 2xxx card with the built-in USB. This is the GPU section, basically 4 source devices that are passed through:
     <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x42' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev0'/>
       <rom file='/mnt/cache/isos/RTX2070Super.rom'/>
       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x42' slot='0x00' function='0x1'/>
       </source>
       <alias name='hostdev1'/>
       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x42' slot='0x00' function='0x2'/>
       </source>
       <alias name='hostdev2'/>
       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x42' slot='0x00' function='0x3'/>
       </source>
       <alias name='hostdev3'/>
       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
     </hostdev>
     Each source device is on bus 0x42 and has the function 0x0, 0x1, 0x2 or 0x3, e.g.
     <source>
       <address domain='0x0000' bus='0x42' slot='0x00' function='0x3'/>
     </source>
     For the passed-through devices, bus 0x05 functions 0x0 and 0x1 look OK, however the additional devices are passed through to a separate bus:
     <alias name='hostdev2'/>
     <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
     and
     <alias name='hostdev3'/>
     <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
     when they should be
     <alias name='hostdev2'/>
     <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x2'/>
     and
     <alias name='hostdev3'/>
     <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x3'/>
     Note that due to the way VM editing works, every time you make a VM change in the graphical editor you will need to reapply these changes.
  24. Have a look here; the poster seems to have the same board working, and it may help with the USB. Integrated USB can be a challenge to pass through fully, so often people fall back to an add-in card. Edit: there also seems to be an AMD bug with passthrough.
  25. I'm not sure why I read this as AMD earlier. Anyhow, I note you say that both GPUs and some PCI-E bus slots are in IOMMU group 1. Devices really need to be separated into different IOMMU groups to pass through, e.g. from my server. Have you tried the PCIe ACS override in Settings > VM Manager? This may break the devices into more manageable groups, although it is a bit of a hack which tells the OS to assume it can talk to devices independently when they aren't actually reporting that as possible, so your mileage may vary. You may need to check the BIOS for any other functions in addition to VT-d which support virtualisation, as some BIOSes need multiple settings to fully support VMs. 'IOAPIC 24-119 Entries' - 'Enable' may help as it uses an alternative to IRQ for multifunction devices... but that's a long shot; I suspect it is more linked to the cards sharing the same IOMMU group.