Everything posted by Decto

  1. May be worth trying 2GB RAM, as XP can be fickle with its 4GB address space since it is shared with the GPU and system devices. I'd also set up a dummy VM with the GPU passed through and SeaBIOS; it doesn't need any disks. When started, this should initialise the GPU and give you a 'No bootable device' message on the screen, just like a normal PC. That should give an indication of whether it's a Windows issue or a passthrough issue.
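A quick way to sanity-check the passthrough side from the Unraid console is to see which kernel driver currently owns the card. A minimal sketch, assuming the GPU sits at PCI address 0000:01:00.0 (substitute your own address from the System Devices page):

    # Show which kernel driver currently claims the GPU (expect 'vfio-pci' when it
    # is reserved for passthrough). The PCI address below is only an example.
    import os

    gpu = "/sys/bus/pci/devices/0000:01:00.0/driver"
    if os.path.islink(gpu):
        print("GPU driver:", os.path.basename(os.readlink(gpu)))
    else:
        print("No driver bound to the GPU at the moment")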
  2. You can use the Titan; anything with a monitor output is sufficient. I have a few GT710s from $12-15 off eBay to fall back on. They use a lot less power than other cards if you only need a screen.
  3. $175 on Amazon now. Nothing wrong with it, it's Toshiba's NAS drive.
  4. Hi, I've been having USB passthrough adventures myself this week. My board has a pair of ASMedia controllers that I can't pass through reliably; they always work with pen drives, but frequently won't recognise keyboards/mice. If I move the Unraid stick to the ASMedia ports, I can pass through the full Intel controller (3 devices) and that works reliably. I also have an Inateck KT4001 4-port PCI-E x1 card with the Fresco Logic FL1100 controller which also works fine, though you have to connect SATA/Molex power to the card unless you're only connecting to a powered hub, as it doesn't take port power from the PCI-E slot. Lots of these about cheaply under different brands and it seems to work fine with Unraid 6.8.3; just look for one with the FL1100 chip. Note it really is only a 1-port controller with a 4-port hub, but it works with many devices, both USB 2 and 3. I've had it copying files from a HDD to a pen drive with two separate keyboard and mouse sets plugged in and I didn't manage to break it.
Getting stuck at the TianoCore circle is unusual; I've only seen this when I've misconfigured the BIOS (SeaBIOS vs OVMF etc.) or done something else to break the XML. If it gets funky I just delete the VM (not the vdisk) and set up a new one, as it only takes a couple of minutes. If you remove the USB passthrough, does the second VM boot? Have you confirmed it really is identically configured (except the USB)? I've seen things like leaving virtual CD drives mounted cause issues on reboot, also using the VirtIO hard disk driver when I'd used SATA in setup, etc. Easy to miss the details.
I'd start off with another copy VM, make it identical to your good VM, using the same disk. Confirm this boots OK without the USB passthrough, then try passing through the card. If that doesn't work then it may be worth checking your native IOMMU groups (without ACS override), as in the sketch below. If there are a lot of grouped devices with the card then try different PCI-E slots if you have them, to see if the others are more favourable. Other than that, the logs may show where it gets stuck.
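For the IOMMU group check mentioned above, here's a minimal sketch that lists which PCI devices share each group, assuming the standard Linux sysfs layout that Unraid exposes:

    # List each IOMMU group and the PCI devices inside it.
    import os

    base = "/sys/kernel/iommu_groups"
    for group in sorted(os.listdir(base), key=int):
        devices = sorted(os.listdir(os.path.join(base, group, "devices")))
        print(f"IOMMU group {group}: {', '.join(devices)}")

If the USB card turns up in a group full of other devices, that's down to the slot/chipset wiring, and moving it to a CPU-connected slot (or using ACS override as a last resort) is usually the way around it.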
  5. Might be worth a look here, as your syslog is filled with thousands of lines of: Sep 20 14:24:23 Unraid kernel: vfio-pci 0000:01:00.0: BAR 1: can't reserve [mem 0xc0000000-0xcfffffff 64bit pref]
  6. Over in the UK, I can get the 8TB WD external drives for ~£120-£130 on sale or from the EU, you just have to wait for the very common sales. The cheapest bare drives are £180 for the Seagate SMR, while IronWolf are £200 and WD Red are £220. I've shucked ~20 drives in the last 12 years and only had one fail in use, which was a 500GB Seagate 7200.10. Even then, it didn't fail outright, but it developed a hum and has since been used as a scratch drive on the bench. I think it didn't like mounting on its narrow edge as it's quite flat. I'll take the chance on warranty rather than paying top price to get back a refurb warranty drive I wouldn't really trust.
For the WD shucks I was getting WD80EMAZ or WD80EZAZ, which both have the helium counter in SMART, so they are likely the HGST helium drives. I did have one very recent drive which is a WD80EDAZ; Google suggests this is the 5-platter air drive. I can confirm it has no helium counter in SMART and runs ~5-6C warmer than other drives in the same enclosure, and actually increased the temp of the drive next to it by 2-3C. Most likely this is the newly launched WD Ultrastar DC HC320 8TB SATA Enterprise HDD, 7200rpm, HUS728T8TALE6L4. I had to place a fan on the enclosure during a pre-shuck test as the drive hit 60C in the plastic shroud without one. In my enclosure it's below 40C in a parity check, so not really an issue in a case with reasonable cooling.
IronWolf and IronWolf Pro are stated as CMR; the HGST He8 is also CMR. PMR is Perpendicular Magnetic Recording, effectively a way to make the read/write head smaller by using a vertical alignment. To my knowledge you can use PMR heads to write either CMR (conventional) or SMR (shingled) tracks to the disk.
Unraid doesn't need matching drives, so to avoid a bad batch you can always buy a mix of drives. My 8TB drives were bought over ~12 months as I upgraded from 3/4TB.
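If you want to check whether a shucked drive reports a helium counter, a rough sketch along these lines works, assuming smartmontools is installed and the drive exposes a helium-related attribute (as the HGST-based shucks do); /dev/sdX is a placeholder:

    # Look for a helium-related attribute in a drive's SMART table. Run as root.
    import subprocess

    out = subprocess.run(["smartctl", "-A", "/dev/sdX"], capture_output=True, text=True).stdout
    helium = [line for line in out.splitlines() if "helium" in line.lower()]
    print("\n".join(helium) if helium else "No helium attribute reported")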
  7. AFAIK that's an SMR drive so I wouldn't use it for parity; I've used SMR for data and they were OK. Personally I've been shucking 8TB WD MyBook or Elements drives, which had Hitachi He8 drives in them, however these are now being replaced by a 5-platter drive which uses air rather than helium and runs 5-6C hotter. The warranty will be void, but the retail drive has a 5yr warranty so it's likely to be reasonably robust in Unraid.
  8. Do you have a monitor or HDMI dummy plug connected to both GPUs? My Nvidia card doesn't fire up for streaming without a 'screen'.
  9. Hi, Pending sectors are considered 'suspect' by the drive. If the drive subsequently reads OK from them, the pending count will reduce; effectively they will disappear. If the drive continues to be unable to read from them, they will be remapped. In that case the pending counter is reset to zero and the remapped counter is incremented. It looks like your drive is starting to fail, but not consistently enough for the drive to remap the sectors as yet. https://kb.acronis.com/content/9133
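If you want to keep an eye on it, something like this pulls the two counters out of smartctl; a rough sketch, assuming the usual attribute IDs 197 (Current_Pending_Sector) and 5 (Reallocated_Sector_Ct), with /dev/sdX as a placeholder:

    # Print the pending and reallocated sector counts for a drive. Run as root.
    import subprocess

    out = subprocess.run(["smartctl", "-A", "/dev/sdX"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # SMART attribute rows start with the numeric ID; the raw value is the last column
        if fields and fields[0] in ("5", "197"):
            print(f"Attribute {fields[0]} ({fields[1]}): raw value {fields[-1]}")

Watching whether 197 drains back to zero or 5 starts climbing tells you which way the drive is heading.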
  10. If you want to move Docker and VMs to cache then you need to disable those services first, check the share is set to cache 'prefer' (move from array to cache), then run the mover. If the services are running, the mover won't touch their files.
  11. The soon-to-be-released high end RTX 3080 / 3090 have TDPs of 320W/350W respectively. The still very powerful RTX 3070 (supposedly 2080 Ti performance) is a more reasonable 220W. AIB cards can be higher than that; some of the current Radeon 5700 XT are hitting 300W, and it is likely similar for the OC versions of the current Nvidia cards. The current high end Intel CPUs are hitting 200W+ for the turbo period, or continuously if the turbo timer is deactivated in the BIOS. Same with AMD, where the 125W CPUs hit ~170W in the short-term boost. You mostly won't see this when gaming as it tends to be GPU bound, but there are likely to be peaks. I'd expect this is only an issue if you are gaming while you have all 15 drives spinning, e.g. during a parity check or rebuild. A parity check with a mix of 4 and 8TB drives takes me around 17 hours. 850W is a good balance; I was planning for 850W, but due to Covid-related supply issues and pricing, a 1000W unit from a high street retailer was cheaper than any of the 750W-850W models I was looking at on any of the common online retailers, and it was platinum rather than gold rated. Only downside is it's 200mm long. Pricing and supply look to be getting back to normal, so it may not be such an issue now.
  12. Hi, The power supply looks fine, it has 4 outlets for peripherals. I'd aim for 4 to 6 drives per outlet. 6 drives @ 15W is 90W on a single conductor, whereas the 3-conductor PCI-E cable is rated at 75-150W, so the load adds up. In reality WD specs the 12TB drives at an average of 6W read/write, but others are closer to 8W; it's just the start-up current that can be high. If you think you may add a mid-range GPU at some point, it would be worth looking at ~850W. I have a semi-modular Seasonic Core Gold in a desktop PC; I wouldn't recommend that for your use as it only has 2 peripheral outputs, so the load would be too high.
  13. A few observations. The CPU / board combination would benefit from quad channel RAM, so 4 sticks and 32GB as a minimum. The RTX 20xx range is being replaced right now by the RTX 30xx range, where the RTX 3070 offers ~RTX 2080 Ti performance at 2070 prices, so I wouldn't buy that generation unless it is very well discounted. For Plex transcoding, I don't think the CPU has an iGPU so you are working with 1 GPU: if you pass through the GPU to Plex, it's not available to VMs, and if you pass it through to the VMs, then Plex can't use it. The favoured card for heavy Plex usage is the Quadro P2000 as it has unlimited streams, whereas all other cards are limited in software to 2/3. How are you planning to use the VMs? Will these be used concurrently by multiple users etc., or are you just launching different VMs for different activities? This is quite a high end setup for light gaming and a couple of work/browsing VMs. What is it that your current setup can't do?
  14. Both parity drives need to be the same size as or larger than any other drive in the array. If you want an 8TB array drive, they both need to be at least 8TB.
  15. Many people use Unraid on their main PC and may have multiple VMs to use, e.g. with the same hardware you can use OSX, different Windows versions, Linux etc. and hop between them quickly. With many-core processors becoming common, there's plenty of headroom to run Unraid and all the dockers in the background, then game, work, etc. on VMs which are not cluttered by 100 apps.
My server is remote; I use Parsec (a game streaming app) to stream games to a low powered laptop for the kids. Another VM streams my games to the shed or wherever I am with some low powered hardware... a Celeron J1900 in a NUC type device works fine as a client. I also have a finances VM which I use for my banking etc. and it never does anything else. I also have a sacrificial VM or two which I'll use to open and scan files from torrents, or when trying to find files, drivers etc. from those sites that are deliberately obtuse. Then there are multiple VMs for distros I've tested etc.
While all these could be on my local machine, having them on the server means I can access them from any PC in the house, or even remote in via VPN when I'm away from home. I agree that Docker eliminates a lot of VMs, but VMs are what made me upgrade from a more basic file server to something with more power.
  16. Hi, If you have all but one disk and the parity is good, the array should be recoverable. Best advice: leave it well alone until one of the experts drops in with some guidance, or you are very likely to lose some data. It can take a day or so to get support depending on time zones etc. Good luck.
  17. I have two LSI HBAs, one internal, one external. It makes no difference if drives are moved between these or even to SATA, as they are both flashed to IT mode. Take a screenshot of your array before you start, ideally with the array stopped; then if there is any translation issue, you have the IDs and slots for all drives.
  18. As I understand it, you get 2 x8-lane PCI-E 4.0 slots to the CPU and the chipset gets 4 lanes of PCI-E 4.0. The chipset then provides an x8 link to the 3rd GPU. The chipset connection is x4 PCI-E 4.0, which is the same bandwidth as x8 PCI-E 3.0. This third slot's bandwidth is shared with other devices, but it is currently the best option for 3 GPUs, as you get up to PCI-E gen 3 x8 bandwidth depending on contention from other devices, whereas other boards give you at best an x4 electrical connection, so you are limited with a PCI-E 3.0 GPU. Theoretically, if your VM uses the 3rd slot along with SATA/NVME also on the chipset, direct memory transfers reduce the requirements on the CPU link. Bifurcation would give you x4 links at PCI-E 3.0, as the GTX 16x0 cards are PCI-E 3.0.
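Rough numbers behind the 'x4 gen 4 equals x8 gen 3' point, as a quick sketch using the usual ~985 MB/s per PCI-E 3.0 lane and ~1969 MB/s per 4.0 lane after 128b/130b encoding:

    # Compare usable bandwidth of an x4 PCIe 4.0 chipset uplink vs an x8 PCIe 3.0 slot.
    PCIE3_PER_LANE_MB = 985    # ~MB/s per lane, PCIe 3.0 (8 GT/s, 128b/130b)
    PCIE4_PER_LANE_MB = 1969   # ~MB/s per lane, PCIe 4.0 (16 GT/s, 128b/130b)

    x4_gen4 = 4 * PCIE4_PER_LANE_MB
    x8_gen3 = 8 * PCIE3_PER_LANE_MB
    print(f"x4 PCIe 4.0 ~ {x4_gen4} MB/s, x8 PCIe 3.0 ~ {x8_gen3} MB/s")
    # Both land around 7.9 GB/s, which is why the chipset slot can still feed
    # a PCIe 3.0 GPU at full x8 speed if nothing else on the chipset is busy.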
  19. Bifurcation, while possible, is a bit of a lash-up. Better to buy a board designed for 3x PCI-E x8 electrical, with the benefit of PCI-E gen 4 bandwidth, e.g. the Asus Pro WS X570-ACE.
  20. I'm having a similar experience with USB drives. My current server has been running happily on a 16GB USB 3.0 SanDisk Ultra Fit for around 3 years. I'm in the process of upgrading to a new server, so I'm running a trial with the intent of converting this to a Basic licence and reusing an old HP N40L as parity protected cold storage. I like the small drives as they are difficult to break, though I could use the internal USB port as discussed here. Anyhow, the 16GB USB 3.0 SanDisk Ultra Fit wasn't available, so then the fun started.
16GB SanDisk Ultra Fit USB 3.1 (all black) - no GUID.
32GB SanDisk Ultra Fit USB 3.0 (old one with metal insert) - won't boot in any device; these look identical to the 16GB that works in any device.
32GB Samsung Fit Plus USB 3.1 - won't boot in the N40L at all, and will only boot in the SuperMicro X10 if I enable 'allow UEFI' in customise during creation. The 16GB Ultra Fit boots without.
A 16GB USB 2.0 Cruzer Fit arrived today, so I'll see if that still works, and maybe stock up on a few spares for the future as they are very cheap in this size. I think larger is better to a point as it helps the wear levelling, even though the number of writes is small. I'll also take a look at the Kingston SE09; I have at least one genuine USB 2.0 version, the other looks fake as there is only a logo on one side, and I had a couple stop working in general use, but those could have been fake. I also have a couple of later USB 3.0 versions. The triple pack could be useful if the GUID is common to all sticks and compatible with Unraid. Could these be a good backup option? If a stick went bad, you could just copy the latest backup to a replacement stick without needing to go through licence replacement, which could be handy if you're running pfSense or something on the server.
  21. I am just setting up something similar, though the lightweight gaming will be streamed on the local network using Parsec, often for Roblox type games where they have some sort of clicker to keep scoring points while AFK. You can use almost anything as a client, including a Raspberry Pi 4, though I have yet to try one. I'm using a much older Xeon E5-2660 10C/20T with 64GB quad channel DDR, as it gave me 4 x PCI-E x8 electrical slots and 10 native SATA ports.
1) Not likely; as I understand it, it's all up to the BIOS. You can often use 'ACS override' to further split them out.
2) I tend to avoid bleeding edge hardware. The kernel Unraid is built on lags a few versions behind, so some features may not be supported immediately. Intel/Nvidia generally has fewer issues than AMD at this time.
3) I've had up to 4 GPUs in the system: 2 x gaming, 1 x Linux VM, 1 x passed through to Plex. I had both gaming VMs and the Linux VM running together, but haven't yet moved my Plex over as I am busy consolidating drives for a switch from an old Dell T20.
4) x8 PCI-E is fine for mid to current high end GPUs. A board set for x8/x8 SLI / Crossfire should be fine. Some more basic boards do x8/x4 and will have a performance hit on the second GPU.
5) The system doesn't care if the hardware is the same or not, it will just be a device referenced on a bus. Saying that, when setting up or changing a VM it is much easier to select between 'Gigabyte 1650' and 'EVGA 1650' than 'EVGA 1650' and 'EVGA 1650', as there will be a video and an audio device for each card. There are bus numbers that help give a unique ID, but a brand is simpler.
6) You will get sound over HDMI from the GPU if selected, as the GPU has audio built in; the mainboard will have one audio device which cannot be shared with more than one VM. You can usually split out some USB in the IOMMU groups and pass it through; I haven't tried this as yet since I am streaming from another room. Keep in mind you may want to add a separate USB card for some extra USB passthrough. You will only have an x1 slot and the bottom x16 (but not x16 electrical) slot. With 2 x VM drives, 1 x cache drive for the host (recommended) and 1 x parity, you only have 2 spaces for data drives. You can soon run out of expansion slots for extra cards etc. Likely you could buy a good 1TB NVME drive, set it as cache and then put both the VMs on it. If you buy a mini PCI-E x2/x4 NVME with high IOPS it will likely outperform the SATA SSDs anyhow, since they are limited by the SATA bus. You need to check if using some of the NVME slots disables SATA ports, as they often share lanes.
7) No, but as above, 1 fast NVME drive 'may' be better and keep more slots free. Alternatively you could use NVME drives and pass them through for closer-to-metal performance. Personal choice really; you can likely get it to work fine, however you could probably build 2 lightweight gaming PCs with similar performance plus a NAS in the same budget.
9) Overclocking is not recommended. Intel even consider XMP for memory as overclocking and will void the warranty if you tell them you did it on a non 'K' SKU. In a storage server, stability is king. The OC versions of the GPUs are fine though.
10) Don't underestimate how quickly your Plex requirements can grow when you enable automatic media downloading through Sonarr / Radarr etc. As a precaution I'd plan for up to 12 drives as a mix of cache, VM etc. You don't necessarily need to buy the 12 drive licence yet, but think about power, drive bay space, SATA ports etc. and plan for the future. Nothing worse than ripping it all out to start again in 12 months.
Another tip: use the trial, and extend it if needed. You can try before you buy, so if it's not for you, nothing is lost and you can use the hardware elsewhere. You can even hold off on the second GPU as it's easy to add in later. Get Unraid running, set up your VMs, then expand. It's fairly easy to add a second GPU and then swap the assignment in the VM template. Good luck
  22. Reading with interest. If I understand correctly, you have two Molex cables from the PSU: one powers 3x4 drives and one powers 2x4 drives. As each drive can take at least 1A, possibly more, you have 12A+ on the 3x4 Molex cable, which has 1 active conductor. PCI-E cables are only spec'd at 12.5A (150W) with 3 active conductors. I would run a third Molex cable as a precaution so you are powering 2,2,1. There is a possibility the issue with the Molex power expander is sensitivity to minor differential voltages, due to it presenting a load at the end of a power cable while you have drives at the end of another heavily loaded power cable. Good luck
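As a back-of-envelope check on the numbers above (a sketch only, using the 1A-per-drive running figure; 12V spin-up current is typically higher):

    # Compare the current drawn on one cable of daisy-chained drives against its rating.
    DRIVES_ON_CABLE = 12       # 3 x 4-way splitters on a single Molex run
    AMPS_PER_DRIVE = 1.0       # rough running draw on 12V; spin-up can be roughly double
    CABLE_RATING_A = 12.5      # the 150W / 12.5A PCI-E figure quoted above

    load = DRIVES_ON_CABLE * AMPS_PER_DRIVE
    print(f"Estimated load {load:.1f}A vs cable rating {CABLE_RATING_A}A")
    # 12A on one run is already at the limit; spreading the splitters 2,2,1 across
    # three cables keeps each run comfortably below it.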
  23. A bit of a long shot: if you have enough PCI-E power plugs, can you try the AMD card in x16 slot 1 and the Nvidia card in x16 slot 2? Unraid should take the AMD card, then you can pass through the Nvidia card without a VGA BIOS. It may also be worth trying CSM (Compatibility Support Module), which can be enabled in the BIOS. A GT710 or even an elderly GT210 can be picked up for £10 / $15; I have a few as they make a good primary card since they are powered from the slot. You do need a dummy dongle (~$5) to fake a monitor and make sure it powers up, though.
  24. Hi, still learning my way in VMs but a couple of suggestions.
1) The XML suggests you are using cores 1,2,3,4 and the HT siblings 17,18,19,20. If I understand correctly this would cross CCXs on Threadripper or Ryzen, which can cause extra latency. Core 0,16 should always be left to Unraid. Try 4,5,6,7 with 20,21,22,23.
2) Your GPU passthrough splits the GPU from a multifunction device into 2 devices in separate virtual PCI-E slots. This is my config with manual editing. You can see the addition of " multifunction='on' " in the first passthrough (video). The next passthrough (audio) has function='0x1', but has the same source and destination slots as above; it just uses the second function. The source is the physical hardware, the address is the virtual hardware presented. In your case the audio is presented as "function=0x0" on a separate slot.

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
    </hostdev>

video here:
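To see which core/HT pairs belong together on your own CPU before pinning, a minimal sketch reading the standard Linux topology files; it just prints the sibling pairs so you can pin matching pairs rather than splitting them:

    # Print each CPU's hyperthread sibling list from sysfs, e.g. 'cpu4 -> 4,20'.
    import glob

    paths = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list")
    for path in sorted(paths, key=lambda p: int(p.split("/")[5][3:])):
        with open(path) as f:
            print(path.split("/")[5], "->", f.read().strip())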
  25. Hi, Some shucked drives don't power up if 3.3V is present on the supply. Have a read here. Other options for testing include using a Molex to SATA power adapter, as these only provide 5V and 12V.