
About fluisterben


  1. http://xfs.org/index.php/FITRIM/discard
  2. It does seem fairly trivial to also support it for array devices: take the disk offline, trim it, put it back online. Samsung's and WD's own SSD tools do this, keeping the contents alive and kicking.
  3. This must have been asked before, but I could not find it. I currently have quite a few SSDs used with xfs in the Unraid array. Is it still not recommended to do so, and if so, why exactly? There are multi-terabyte NVMe M.2 drives now, as well as hybrid HDD/SSD media. Either way, so far I have several types of SSDs running just fine in the array, apart from the warnings.
  4. That would be the case if it were a logical division (x8 and x8 of bandwidth divided over the x16 + x8 slots), but I don't think it is. I'll let y'all know what it ends up doing.
  5. I don't think so; the PCI-E x16 slot is already set to x8/x4/x4 mode in the BIOS. Check this mail correspondence I had with ASRock Rack about it:

     > -----Original Message-----
     > Subject: RE: #ULTRA QUAD M.2 CARD# compatibility motherboard
     >
     > Hello,
     >
     >> My colleague from ASRock Rack borrowed me an E3C246D4U.
     >> The bifurcation settings that are available for the PCIe 3.0 x16 slot are:
     >> x16
     >> x8/x8
     >> x8/x4/x4
     >>
     >> As far as I can tell, the C246 chipset does not support x4/x4/x4/x4.
     >
     > Is this not fixable by a workaround in BIOS?
     > Or is it truly a strict limitation of the hardware for C246?
     >
     >> I tested with an Ultra Quad M.2 Card on this motherboard, filled with
     >> 4 NVME M.2 SSDs.
     >>
     >> Set PCIE6 Link Width to x8/x4/x4.
     >>
     >> As expected, 3 out of 4 SSDs were detected (sockets M2_1, M2_3 and
     >> M2_4 on the Ultra Quad M.2 Card).
     >
     > OK, so I can run 3 of the 4 slots. At least that's something.
     > I was not aware of such strange limitations for a very current Intel chipset.
     > I truly don't understand these manufacturers of chipsets.
     > Why would you create support for x8/x4/x4, but not x4/x4/x4/x4?
     > Seems to me that's just some strange management decision, not an
     > actual hardware limitation.
     >
     > But hey, thanks a lot for this valuable info!
     >
     > Regards,

     Hello,

     I would think it is an artificial limitation built into the chipset/CPU, to differentiate between different price ranges/classes. To be sure I have asked ASRock Rack R&D to check. When I get a reply I will let you know :)

     Kind regards,
     ASRock Support
  6. Ah, that's actually a valid reason I would not immediately have thought of. I'll do some switching around here between that older LSI Logic MegaRAID SAS 8308ELP and an LSI SAS 9207-8i HBA (currently in my main desktop with Covecube's StableBit DrivePool). The only issue is that the latter card is PCIe x8, which means I have to put it in the x8 slot on this ASRock E3C246D4U server board, where it is shared with the PCI-E x16 slot through a "PCI-E switch" straight to/from the Xeon. The x16 slot currently holds a card with 3 M.2 NVMe SSDs, and the manual says the x16 slot will auto-switch to x8 when that other x8 slot is occupied. I'm not sure whether the SSD card is still getting the full x16 bandwidth; I'll have to check.
  7. I can set it to RAID-0 in the LSI BIOS. I used to have an older HP P222 RAID controller card that I simply set to RAID-0 for each individual drive, which worked for other Linux software, so I'm hoping this will work too.
  8. Do you think an LSI Logic MegaRAID SAS 8308ELP card will work with the current kernel in unraid? It's an older-type SAS card with PCIe x4 (not x8) pins, which is exactly what I need, since that x4 port is what is still free on my mainboard. It's unclear to me which chipset is used on this card (the chip info is obscured with a marker).
  9. I've just checked, because I had already decided to move the card out of the unraid server, and indeed it does work under that Debian install, but only because that machine has virtualization disabled entirely. I wasn't even aware of that (here it needs to be switched on!).
  10. That's not true; the problem is probably somewhere in Slackware, because I had it working just fine in Debian 9. And the workaround I already mentioned didn't make it work, unfortunately.
  11. Could you explain to me why one would want to 'preclear' the other disks, given that they are written to and accessed by exactly the same controller logic as the parity disk? The controller doesn't do anything different in either case: it asks for data and knows where it sits on the storage medium, just like with any disk in the array.
  12. I asked because I have one of those controllers, a Delock hybrid 4x internal SATA 6 Gb/s card with the Marvell 88SE9230 chipset, not because I want to replace it with an even more expensive one. I thought this one was expensive enough. I need virtualization in unraid, and I'm just very surprised people settle for switching it off because of this bug.

      The card shows all drives in the BIOS and in other OSes, yet I don't see them in unraid's Slackware. I tried patching the firmware, to no avail; I tried patching syslinux.cfg, to no avail. I've searched all over the internet for days, and nothing I've found so far solves it.

      This is really seriously bad, if I may say so. When a card works fine everywhere else, we're not inclined to expect it to simply fail in unraid, and so we don't return the card(s) in time for a refund. This calls for a hardware incompatibility list in unraid's wiki.
  13. But why would anyone want to disable virtualization for the drives? This means they will not work in a VM, right?
  14. Similar thing: I added a disk to my unraid server, it appeared as an unassigned drive, I stopped the array, and replaced the Parity 2 disk with it. There was no prompt to have it formatted or cleared; it was simply added when I assigned it as parity disk 2 and started the array back up.

      The reason I didn't do a preclear is that this disk came from an HP server, which had already tagged the disk as 100% OK for XFS. Also, since my unraid uses xfs too, it only needs a 'quick format': there's journalling and a TOC, XFS doesn't care whether there are 0s or 1s on the surface, and unraid doesn't care whether it was a zero or a one either; it will just use its own roadmap for the new parity disk anyway.

      I really don't see the advantage of zeroing an entire 6TB drive 'to prevent errors' while adding another bunch of useless write/read cycles that shorten the disk's lifetime. Preclearing, in my opinion, is highly overrated on these forums. This is XFS we're talking about, and these are fairly new drives, with fairly new controllers and new hardware accessing them. Nobody at Western Digital/HGST, Seagate or Samsung will ever tell you to 'preclear' a disk for use in an XFS array.
  15. Not the right person to ask; I'm very much biased towards VMs over docker instances. For the work I make money with, I have to deal a lot with docker containers and dev-ops tooling, but I assure you: a VM is easier to maintain in the long run. I always let out a sigh of relief when an issue turns up in a VM rather than in a docker container; containers are a PITA to troubleshoot.

      Half the containers out there are missing access to parts of their own setup. Half the people using containers don't know which parts mapped outside of the container will or will not be erased when an update of the container occurs. People who prefer dockers usually end up with a spaghetti setup of dependencies and forget to take notes, up to the point where they don't remember which container is the 'live' one. So many points of failure, so many ways of not being able to change the config.

      Dockers are good for those who run software while developing what's inside that docker, not for production-level services or set-it-and-forget-it apps like here with unraid.
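The "take the disk offline, trim it, put it back online" idea in the FITRIM posts above boils down to running `fstrim` against a mounted XFS filesystem. A minimal, guarded sketch follows; `/mnt/disk1` is a placeholder mount point, not something the posts specify, and the script skips cleanly where the mount or `fstrim` is absent.

```shell
#!/bin/sh
# Placeholder mount point -- substitute one of your array disks.
MOUNTPOINT="${1:-/mnt/disk1}"

if grep -qs " $MOUNTPOINT " /proc/mounts && command -v fstrim >/dev/null 2>&1; then
  # fstrim issues the FITRIM ioctl: the filesystem reports its free
  # blocks to the SSD so the controller can reclaim them.
  fstrim -v "$MOUNTPOINT"
  trim_status=trimmed
else
  echo "skip: $MOUNTPOINT not mounted or fstrim unavailable"
  trim_status=skipped
fi
```

On a live system, `fstrim --all --verbose` trims every mounted filesystem that supports discard in one go.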
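The bifurcation posts above end with wanting to check whether the x16 slot has auto-switched down to x8. One way to check the negotiated link width is the `LnkSta:` line of `lspci -vv`. The sketch below parses a canned sample line so it is self-contained; the sample speed/width values are made up, and the real command is shown in a comment.

```shell
#!/bin/sh
# parse_link_status pulls the negotiated speed/width out of an lspci LnkSta line.
parse_link_status() {
  grep -o 'LnkSta:.*' | sed 's/LnkSta:[[:space:]]*//'
}

# Canned sample line; on the real machine run (as root):
#   lspci -vv | grep -E 'LnkCap:|LnkSta:'
# and compare LnkCap (what the card supports) against LnkSta (what it got).
sample='        LnkSta: Speed 8GT/s (ok), Width x8 (downgraded)'
echo "$sample" | parse_link_status
```

A `Width x8 (downgraded)` on a physically x16 card would confirm the manual's auto-switch behaviour.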
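On whether the MegaRAID SAS 8308ELP will work with the current kernel: a quick check is whether the kernel ships one of the LSI megaraid driver modules. Which of the two drivers that particular card needs is an assumption here (the post itself says the chip is obscured), so the sketch probes both and degrades gracefully where `modinfo` is unavailable.

```shell
#!/bin/sh
# Probe for the LSI megaraid driver modules the 8308ELP family generally
# uses (an assumption -- confirm the actual chip via lspci).
result=""
for mod in megaraid_sas megaraid_mbox; do
  if command -v modinfo >/dev/null 2>&1 && modinfo "$mod" >/dev/null 2>&1; then
    result="$result $mod:available"
  else
    result="$result $mod:not-found"
  fi
done
echo "checked:$result"
# Identify the card itself with: lspci -nn | grep -iE 'LSI|MegaRAID'
```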
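The Marvell 88SE9230 posts describe drives visible in the BIOS and other OSes but not in unraid unless virtualization is switched off; that pattern is commonly tied to the IOMMU (Intel VT-d) state. A small sketch to see whether an IOMMU is active on the running kernel; the `iommu=pt` workaround in the comment is one commonly reported for this chipset, not something verified here.

```shell
#!/bin/sh
# An active IOMMU is what reportedly trips up the 88SE9230's DMA handling.
if [ -d /sys/class/iommu ] && [ -n "$(ls -A /sys/class/iommu 2>/dev/null)" ]; then
  iommu_state="active"
else
  iommu_state="inactive or not present"
fi
echo "IOMMU: $iommu_state"
# A commonly reported workaround (verify against your kernel docs) is to
# append iommu=pt to the kernel line in syslinux.cfg, keeping VT-d usable
# for VMs while putting host devices in passthrough mode.
```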
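The preclear post above leans on the drive already being "tagged as 100% OK". That kind of health check can be done explicitly with smartmontools before assigning a disk as parity; a guarded sketch, where `/dev/sdX` is a placeholder device and the flags are standard `smartctl` options.

```shell
#!/bin/sh
# Placeholder device -- substitute the disk you are about to assign.
DEV="${1:-/dev/sdX}"

if command -v smartctl >/dev/null 2>&1 && [ -b "$DEV" ]; then
  smartctl -H -A "$DEV"     # overall health verdict plus raw SMART attributes
  smartctl -t short "$DEV"  # queue a short self-test; read results with -l selftest
  smart_status=checked
else
  echo "skip: smartctl unavailable or $DEV is not a block device"
  smart_status=skipped
fi
```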
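The point in the last post about which mapped parts survive a container update can be demonstrated directly, assuming `docker` and the `alpine` image are available; `/tmp/appdata` is an arbitrary example path. Anything written through the bind mount outlives the container, anything written elsewhere in the container filesystem does not.

```shell
#!/bin/sh
# Guarded so the sketch only runs where docker is actually installed.
if command -v docker >/dev/null 2>&1; then
  # Write through a bind mount, let the container be destroyed (--rm),
  # then read the file back from a brand-new container.
  docker run --rm -v /tmp/appdata:/config alpine sh -c 'echo keep > /config/note'
  docker run --rm -v /tmp/appdata:/config alpine cat /config/note
  outcome="demonstrated"
else
  echo "skip: docker not installed"
  outcome="skipped"
fi
```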