SteelCityColt

Members
  • Content Count

    19
  • Joined

  • Last visited

Community Reputation

0 Neutral

About SteelCityColt

  • Rank
    Newbie

  • Gender
    Undisclosed

Recent Posts

  1. I've been using the latest RC, 6.9.35, but want to return to stable. However, when I select the option from the dropdown, the page refreshes without showing the stable version. At most I can roll back to 6.9.30. Any ideas? (See the manual rollback sketch after this list.)
  2. Good news. I narrowed it down to a 12V fuse that seems to have blown on every PCB. So far I've managed to remove one and solder on a replacement, and the drive is alive and seen again in unRAID. I think I'm back in business! Lesson learned about PSUs and modular cables.
  3. I now know not to do this! Learning the hard way. Hoping it's just blown fuses and that I have the skills to swap them out. I'll post the results of the attempt in a few days.
  4. Thanks for the reply. Those were the connectors I was already using to power the drives. Further investigation after taking 2 of the PCBs off the backs shows that the 12V fuses look to have blown. Going to have a go at soldering on replacements to see what that does. The only thing I can think of then is that either the PSU is faulty, or that when I first plugged the drives in I used the cables from the previous modular PSU. Apparently modular cable pinouts aren't standardised between PSU models, so mixing cables can cause exactly this sort of damage.
  5. So I'm not really sure what has happened, or why. I needed to change my hardware, and everything was going swimmingly until no hard drives showed up when I fired up the new mobo/CPU. unRAID booted fine, but nothing was showing up barring the 2 cache drives (NVMes). Bit of head scratching till I realised that having an M.2 drive in its slot shuts down the PCIe slot I was using for my HBA. Cool! Except it now seems 80% of my drives are dead as a dodo. Out of 4 x WD 8TB, 2 x WD 10TB, 1 x WD 6TB and a 240GB SSD, I can only get the 10TBs to power up. Everything else is dead. I've
  6. Due to issues with my AMD mobo (ASRock TRX40 Creator) not playing nicely with USB passthrough, I've only been able to pass through one USB-C port on the mobo itself, plus a standalone PCIe USB controller. This all works, but I've seen some very strange behaviour with USB devices: two separate USB hubs, one plugged into the USB-C port and one into the USB 3.0 card, have both died within days of being plugged in. Could be coincidence, but it seems strange. Trying to plug in an external DAC and use it in 5.1 mode throws up a "not enough USB controller resources" error (see the endpoint-counting sketch after this list). That's even
  7. After a bit of a journey I've finally got my first W10 VM up and running, but I have one slight issue. Every time I do anything that pauses the machine, when I go to "resume" I get: "internal error: unable to execute QEMU command 'cont': Resetting the Virtual Machine is required". I've tried recreating it from scratch to see if it was just a quirk of the VM, but got the same thing (a virsh sketch for poking at this follows the list). Aside from that, everything else is working well. Any ideas? VM:
     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm'>
       <name>windybeaver</name>
       <uuid>d5
  8. @bastl Sorry, I didn't twig straight away that you're on a different mobo. I don't have the equivalent controller to pass through, sadly. Experimentation has shown that the 6.9 beta solves all the issues, but I'm a bit twitchy about running a beta on my main server. I've got a PCIe USB card landing today that I'll try, and maybe use as a temporary solution until 6.9 stable is released (see the vfio-pci binding sketch after this list).
  9. Apologies for jumping in on this topic. I'm slowly losing my mind with my TRX40 board (ASRock TRX40 Creator) and 3960X combo. I want to be able to pass through a GPU (the only one in the system), a USB controller, an NVMe drive, and the onboard sound (the IOMMU-listing sketch after this list is how I've been checking the groupings). Every time I tried to start the VM it killed unRAID to the point of needing a hard reboot. It's also killed the flash drive twice, requiring a rebuild. Things I've worked out so far by trial and error and lots of VM config tests: 1) The original GPU (Vega 56) just wasn't having it; swapping to an RX 580 works. 2) Passing through the NV
  10. I'm taking my first tentative steps towards a one-box solution, but I'm struggling with my first attempt at a Windows 10 VM. I have watched a few Spaceinvader One videos, but every time I try to start the VM... it breaks unRAID. Not a hard crash, but the UI gets much slower and the VM/VM Manager pages refuse to load. If I go to the dashboard, the VM in question shows as paused. I'm starting to suspect the issue is trying to pass through the sole GPU (Vega 56). I have tried switching my mobo (ASRock TRX40 Creator) into legacy boot mode, and passing through with a VBIOS (a ROM-dumping sketch follows this list), but no dice. Before I
  11. I'm currently running unRAID on an HP Microserver, which has served me well, but I'm considering migrating into a bigger chassis as the 4 drive bays are a bit limiting. Current specs: Intel E3-1265 V2 (I jury-rigged some active cooling, but it still runs a bit warm because the chassis is so condensed and full of gear), 16GB RAM, 3 x 8TB WD Reds, 1 x 6TB WD Red (16TB used of 22TB, but my hoarding is increasing exponentially), and an HP Smart Array P222 off which I run 4 x SSDs, 2 for cache in a pool and 2 for VMs. Aside from unRAID it's also currently used for 1 x Ubuntu VM but I wo
  12. I am indeed an idiot. Thank you, back into the WebGUI now! However, as soon as I set one NIC to a static IP again, it crashed the WebGUI and won't let me back in. Running ifconfig, I noticed there are some legacy interfaces still set up. How do I remove these for good? (A config-reset sketch follows this list.)
  13. Logged in via the console on the server to delete said files... Guessing that's not right?! Or am I being dumb?
  14. Without wishing to necro this... the problem has got worse. Messing around, I set each NIC in turn to automatic. When I got to eth2, the web GUI hung and I wasn't able to get back in. Reset the bare-metal server, and still no dice. Going in via the iLO console, I can see unRAID boot up as normal, aside from "device br1 not found" (paraphrasing, as I'm not in front of the machine), and it tells me the server is on 192.168.0.21. But I can't ping/SSH that IP, or any of the other NICs, even though I can see them asking for, and receiving, leases from the DHCP server (the console-diagnostics sketch after this list is what I've been running). Better off to b
  15. Not sure why this has suddenly gone cockeyed, but I normally set my NICs up as static from the unRAID side. When I do this, though, the routing seems to go a bit weird. When set to static, with the router as gateway and DNS, I get this in the routing table, and the interface shows as down with a "check cable" message. If I instead change eth0 to automatic (and use the router to set the static IP by MAC reservation), the routing updates to this and I can break out correctly via the router. Here's the example network setting for one of the NICs (compare the static network.cfg sketch after this list):
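
Manual rollback sketch (re: post 1). unRAID boots from the bz* files on the flash drive, so when the Update OS dropdown won't offer stable, one manual fallback is to drop the stable release's files onto the flash by hand. A minimal sketch, assuming the stock flash layout and a hypothetical release zip name; back the flash up first:

      # Run on the server console, with the stable release zip copied to the flash.
      cd /boot
      mkdir -p previous && cp bz* previous/            # keep the current build as a fallback
      unzip -o unRAIDServer-6.9.x.zip 'bz*' -d /boot   # hypothetical zip name; overwrites the boot files
      reboot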
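
Endpoint-counting sketch (re: post 6). The "not enough USB controller resources" error is usually the xHCI controller running out of endpoints rather than ports, so a first step is seeing what hangs off each controller and how many endpoints each device claims. A rough sketch, assuming usbutils is available and using hypothetical bus/device numbers:

      lsusb -t                                     # USB topology: devices per controller, speed, bound driver
      lsusb -v -s 1:4 | grep -c bEndpointAddress   # rough endpoint count for one device (bus 1, device 4)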
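
virsh sketch (re: post 7). When the GUI's resume fails with the 'cont' error, virsh from the console sometimes prints a fuller error, and can reset just the guest instead of forcing a host reboot. A sketch, using the VM name from the posted XML:

      virsh list --all           # confirm the domain shows as "paused"
      virsh resume windybeaver   # same operation as the GUI resume; may give a fuller error
      virsh reset windybeaver    # last resort: hard-reset the guest without touching the host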
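
vfio-pci binding sketch (re: post 8). On pre-6.9 builds, the usual way to reserve an add-in USB card for a VM is to bind it to vfio-pci at boot via the syslinux append line. A sketch with a hypothetical vendor:device ID:

      lspci -nn | grep -i usb   # note the [vvvv:dddd] ID of the new card
      # Then edit /boot/syslinux/syslinux.cfg and extend the append line, e.g.:
      #   append vfio-pci.ids=1912:0014 initrd=/bzroot
      # (hypothetical Renesas controller ID shown), then reboot.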
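
IOMMU-listing sketch (re: post 9). Whether the GPU, USB controller, NVMe and onboard audio can be split cleanly between host and guest comes down to IOMMU grouping, which can be listed straight from sysfs:

      # List every IOMMU group and the devices inside it.
      for g in /sys/kernel/iommu_groups/*; do
        echo "Group ${g##*/}:"
        for d in "$g"/devices/*; do
          lspci -nns "${d##*/}"   # numeric IDs plus a readable device name
        done
      done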
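
ROM-dumping sketch (re: post 10). A VBIOS can be dumped from the card itself via sysfs, though with a single GPU this generally has to happen before the host driver claims it (or on another machine), and some cards' dumps need their header trimmed before use. A sketch with a hypothetical PCI address:

      cd /sys/bus/pci/devices/0000:0a:00.0   # hypothetical GPU address; find yours with lspci
      echo 1 > rom                           # unlock the ROM interface
      cat rom > /boot/vbios.rom              # dump the video BIOS to the flash drive
      echo 0 > rom                           # lock it again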
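
Config-reset sketch (re: post 12). unRAID keeps its interface assignments in config files on the flash drive, so the usual way to clear stale entries for good is to remove those files and let them regenerate on reboot. A sketch, assuming the stock paths:

      cp /boot/config/network.cfg /boot/config/network.cfg.bak   # keep a backup
      rm /boot/config/network.cfg
      rm -f /boot/config/network-rules.cfg   # per-MAC interface naming rules, if present
      reboot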
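
Console-diagnostics sketch (re: post 14). From the iLO console you can check what the box actually thinks its addresses and routes are before blaming a bridge. A sketch, with a hypothetical gateway address:

      ip -br addr                    # one line per interface: state and assigned IPs
      ip route                       # is there a default route, and via which interface?
      ping -c 3 192.168.0.1          # hypothetical gateway; can the host reach it at all?
      cat /boot/config/network.cfg   # what unRAID will apply at next boot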
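
Static network.cfg sketch (re: post 15). For comparison, a static single-NIC stanza in /boot/config/network.cfg normally looks something like the below; the file is sourced as shell variables, the addresses here are made up, and multi-NIC builds index the keys:

      # /boot/config/network.cfg (excerpt, hypothetical addresses)
      USE_DHCP="no"
      IPADDR="192.168.0.21"
      NETMASK="255.255.255.0"
      GATEWAY="192.168.0.1"
      DNS_SERVER1="192.168.0.1"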