Everything posted by xylcro

  1. Update: plugged the card in with two NVMe drives and it just worked... link speed downgraded to x4 for both SSDs, as it should, and the other PCIe slot on the same IOU bridge still gets its full x8. So yeah, I was stressing out over nothing, pretty much. Thanks for the help!
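     For anyone who finds this later: one way to double-check the negotiated link width from inside the OS is Linux sysfs. A rough Python sketch (not necessarily what I ran, and the device names on your box may differ):

        # Sketch: print the negotiated PCIe link width/speed for each NVMe drive.
        # Walks up from the block device to the PCI endpoint that exposes the
        # current_link_width / current_link_speed sysfs attributes.
        import glob
        import os

        for blk in sorted(glob.glob("/sys/block/nvme*")):
            dev = os.path.realpath(os.path.join(blk, "device"))
            while dev != "/" and not os.path.exists(os.path.join(dev, "current_link_width")):
                dev = os.path.dirname(dev)  # climb toward the PCI device
            if dev == "/":
                continue  # no PCIe link attributes found (unlikely for NVMe)
            width = open(os.path.join(dev, "current_link_width")).read().strip()
            speed = open(os.path.join(dev, "current_link_speed")).read().strip()
            print(f"{os.path.basename(blk)}: x{width} @ {speed}")

     Both drives report x4 here, which matches what the BIOS says.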
  2. Alright, so I have to choose the correct IOU bridge, and it's not labeled anywhere which bridge goes to which slot. Looks like it's going to be trial and error... Will update once the carrier card gets here.
  3. Ah. I don't think that's possible: I can't set bifurcation per slot, I have to select an IOU, and I think that's a group of slots rather than just one. I'll look in the BIOS again once Unraid is done moving stuff.
  4. Maybe "PEG1" is IOU0 and so on... that's the only hint I could find in the manual. If it isn't that, I'll try them one by one. One thing I don't understand: what do you mean by "should be per slot"?
  5. I know it's supported; I'm just unsure about two things: 1. which IOU I need to select (see screenshot), and 2. what happens if I choose 4x4x4x4, for example: will the rest of the slots in that IOU also drop to x4 mode? I'd rather not do that, since it would mean slower speeds for the GPU/10G NIC/HBA.
  6. Hey everyone, hope y'all are doing fine today. I'm in need of some help with my new Unraid build.

     My new build has an X11SRL-F motherboard paired with a Xeon W-2255 CPU and 128GB of memory. I'm reusing the Broadcom 9305-16i from my old server since it still works and there's no need to replace it. I'm also using my Quadro P2200 for transcoding (which is probably overkill, but hey). I decided to purchase an Intel X520-DA2 for the 10G uplink. For cache I currently have 1 x 1TB SSD for downloading stuff; the mover moves everything over to the array daily. Also in there is 1 x 500GB NVMe drive that I use for Docker, VMs and Plex metadata.

     However, I want to move both SSDs to RAID 1. With the regular SSDs this isn't a problem; with the NVMe it is, since I only have one M.2 slot on my motherboard. I already have an AOC-SLG3-2M2 M.2 carrier card on the way, but I don't really know how the bifurcation would work since I've never done it. As I understand it, it splits the PCIe slot in two. Which slot do I need to put the card into, though? My x16 is occupied by the GPU; can I just plug it into an empty x8 slot and bifurcate that slot? The manual of the X11SRL-F isn't all too clear on how this works... Both NVMe drives would go into the carrier card, so the M.2 slot on the motherboard would stay empty. Thanks for the help.
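     (A note for anyone searching later: once the slot is bifurcated, each M.2 on the carrier card should enumerate as its own PCIe device. Here's a rough Python sketch, assuming the standard Linux sysfs layout, to see which PCI address each NVMe drive landed on; two distinct addresses means the split worked:)

        # Sketch: map each NVMe block device to its PCIe address via sysfs.
        # After a successful x4x4 bifurcation, the two drives on the carrier
        # card should appear at two different PCI addresses.
        import glob
        import os
        import re

        for blk in sorted(glob.glob("/sys/block/nvme*")):
            path = os.path.realpath(blk)  # full sysfs path, contains PCI addresses
            addrs = re.findall(r"[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]", path)
            # The last address in the resolved path is the NVMe endpoint itself
            print(os.path.basename(blk), "->", addrs[-1] if addrs else "unknown")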
  7. For the last couple of days I've been researching a new Unraid server build to replace my current Dell R510; the R510 is going to become my backup server. I've been looking at a ton of motherboards but just couldn't really figure out the 'best' choice. What I mainly want from this server is storage: I have my eye on this InterTech 16-bay chassis, and now I just need some hardware to go with it. Since the chassis is a 16-bay, the motherboard needs to be able to handle 16 drives, plus 2 for cache.

     I already have some hardware that may or may not be sufficient, but I honestly don't know: a Quadro P2200 (for HW transcoding, so the new server doesn't need to be Intel), an ASRock Z97 Extreme4/3.1 (my old PC), an Intel Core i7-4790K, and 32GB of RAM. I wanted to use this hardware, but I've read on the ASRock website that when all PCIe slots are populated, they run at x8/x4/x4.

     The chassis has 16 drive bays, with each of the 4 backplanes having its own SFF-8087 connector, plus the 2 cache drives. That means I can plug the first backplane into the first 4 SATA ports with a cross-over cable and the 2 cache drives into the remaining 2 SATA ports, which leaves the other 3 backplanes. I can either buy a SAS expander (I also have a Dell PERC H310 flashed to IT mode somewhere), though I don't really know how that works since I want to use the last PCIe slot for a 10Gb NIC; or I can buy a 16-port LSI HBA and still use the last PCIe slot for the 10Gb NIC. I don't know whether that would be bottlenecked by the x4 speeds, though; I don't know a lot about that (rough math below). If the stuff I have won't work, any recommendations for new hardware? Thanks.
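     Edit: for the bandwidth question, here's the back-of-the-envelope math I'd do, where the numbers are assumptions (PCIe 2.0 at roughly 500 MB/s usable per lane, roughly 200 MB/s per spinning drive):

        # Rough check: can a PCIe 2.0 x4 link feed 16 spinning drives at once?
        LANES = 4
        MB_PER_LANE = 500      # PCIe 2.0, approx. usable after 8b/10b encoding
        DRIVES = 16
        MB_PER_DRIVE = 200     # generous sequential speed for a 7200rpm HDD

        link = LANES * MB_PER_LANE      # ~2000 MB/s through the slot
        demand = DRIVES * MB_PER_DRIVE  # ~3200 MB/s worst case, all drives at once
        print(f"link: {link} MB/s, worst-case demand: {demand} MB/s")

     So in the worst case (something like a parity check hitting every drive at once) an x4 slot would probably be the bottleneck; with only a few drives active at a time it likely wouldn't matter.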
  8. I just ordered my Ellipse Pro. Did you get it working with Unraid?
  9. I tried a couple of USB 3.0 drives and got an error on every one of them; finally I tried a random USB 2.0 drive I had lying around and it worked the first time. So if you're using USB 3.0, maybe switch to USB 2.0.
  10. I was finally able to use another USB drive, USB 2.0 this time. No errors!
  11. Hello all, I just upgraded my Unraid server from version 6.9.0 to 6.9.2. After a reboot I was greeted with a kernel panic: "kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)". After flashing and testing another flash drive, I got the same message. I then flashed version 6.9.0 back onto the previous flash drive and everything is working again. I have yet to run memtest; I'll start it when I go to bed in a couple of hours and leave it running overnight. But I do have ECC memory, so isn't that supposed to correct errors on its own? As I don't really have the money to buy a new server, I want to keep this *** running for as long as I can, so if that means not upgrading, so be it. Attached is a screenshot of the kernel panic.
  12. Wow, I've never seen that on a new drive. I've asked the vendor for an RMA and am currently copying everything over to another drive. Thanks so much!
  13. So I recently bought a Dell PowerEdge R510 with 12 drive bays. The controller is a PERC H310 flashed to IT mode. I plan to populate all 12 bays with 6TB SATA drives. Right now there is one (new) 6TB drive already in there (the parity drive is on its way), but there are already disk errors present. When I run a SMART self-test, short or extended, it fails after 10% with the error "Errors occurred - Check SMART report". I'm still learning all this and I don't really understand what the report is saying. I've already replaced the H310 with another one I bought (also in IT mode), but no dice. Is it really the drive? cerberus-smart-20201004-1456.zip
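      In case it's useful, the report in the zip is the kind of output smartmontools produces. A minimal Python sketch for pulling just the self-test log from a script, where /dev/sdb is a placeholder device path, not necessarily this drive:

         # Sketch: dump a drive's SMART self-test log via smartctl.
         # Requires smartmontools to be installed; /dev/sdb is a placeholder.
         import subprocess

         result = subprocess.run(
             ["smartctl", "-l", "selftest", "/dev/sdb"],
             capture_output=True,
             text=True,
         )
         print(result.stdout)
         # A failed test is listed with a status such as "Completed: read failure"
         # and the LBA of the first error, which points at the failing area.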