fluisterben
Members · 111 posts

Everything posted by fluisterben

  1. Why is it bad to wipe the new devices? There's nothing on them. Here's what I've done so far:
     - I (successfully) converted the 5-SSD btrfs cache from raid10 to raid6.
     - Took out 2 of the 5 SSDs and connected the 2 new SSDs.
     - Fired up unRAID again.
     The array started, but I can't do anything regarding disks, because it says "Disabled -- BTRFS operation is running", so I cannot stop the array and/or format the new SSDs. Under the Cache it says "Cache not installed" and then shows the Cache2, Cache3 and Cache4 SSDs as normal (because they *are* installed). Is there a way to see the BTRFS operation's status? It shouldn't take too long since they're fast SSDs, so they should be able to rebuild their raid6 with the 2 SSDs missing, right?
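     A minimal sketch of how one might check this from the shell, assuming the pool is mounted at /mnt/cache (adjust the path if yours differs):

        # Show progress of a running balance (e.g. the raid10 -> raid6 conversion)
        btrfs balance status /mnt/cache
        # If a device replace is running instead, this shows its progress
        btrfs replace status /mnt/cache
        # Overview of how data/metadata are laid out across profiles and devices
        btrfs filesystem usage /mnt/cache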
  2. Yes, I did try the Unbalance plugin, but it keeps telling me about permissions and errors which simply aren't valid (I've checked thoroughly), and then it doesn't allow me to unbalance a drive out, so to say. Still, having the array rewrite every sector of a new drive seems horribly overkill. I'm more for the way StableBit DrivePool does it, where it basically lets you say which dirs need to have which number of copies in the pool, and each drive's content is accessible separately. In fact, when I first started with unRAID, I thought it was more similar to CoveCube's DrivePool; turns out it isn't, it's just another RAID array, and frankly even the name 'unRAID' isn't really appropriate. All it is, is a GUI for 2 raid arrays (the cache and the parity-controlled array). Here's what I think is really missing in unRAID: I get notices that a drive has a growing number of bad sectors and errors, and then there's nothing that tells me how to save the files on that drive that aren't corrupted yet; it just leaves me with "bad drive, bad drive, red alert!" Honestly, that's just not the way to go. It should have a button to safely decommission the drive and safeguard its content. Instead the GUI suddenly isn't friendly anymore, and we need to go to a shell and dd or ddrescue and such. It's such a Linux disease: pretending to offer a GUI and user-friendly everything, and when push comes to shove we all need to be sysadmins and go shell-scripting again. Don't get me wrong, I like being on a shell with ssh, but that's not unRAID's intended use-case.
  3. This is really a missing feature! Knowing that the array has more than enough free space to entirely decommission a drive, it would be best to be able to invoke moving its data off before taking out the bad drive. This would also greatly speed up data restoration when a new drive is put in, since there's less R/W to be done.
  4. OK, so I can rely on unRAID knowing which copies of the files on the array are intact? Some will presumably be corrupt because this disk5 is quickly dying on me. I will take it out, assuming the parity knows.
  5. In Covecube StableBit DrivePool I can then decide to remove a drive, with these options (see attached). Is there something similarly easy in unRAID to kick a bad disk out? I tried using the Unbalance plugin for that, but it gives me so many errors I can't even solve (they're probably not even correct, since they're not possible) that I'm not sure it's the right tool for this. Sure, I can copy or move data from /mnt/disk5 to /mnt/cache or somewhere else on the command line, but that too doesn't seem the way to go, since I'm not sure how unRAID then knows what happened.
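     For reference, a hedged sketch of how one might pull the still-readable data off the failing disk by hand (disk5 as the source and disk3 as a destination with enough free space are just examples, not Unraid's official procedure):

        # Copy everything still readable from the failing disk to another array disk,
        # preserving permissions, timestamps and extended attributes.
        rsync -avX --progress /mnt/disk5/ /mnt/disk3/
        # Re-running the same command afterwards shows what could not be copied:
        # anything rsync still wants to transfer failed to read the first time.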
  6. Can I use ddrescue just for cloning (cache) disks as well, i.e. without having to recover anything? I need to move data from 2 old SSDs to 2 new SSDs (where the 2 old SSDs are part of a 5-disk SSD RAID10 array).
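     For what it's worth, a sketch of a plain clone with ddrescue (device names are examples; with a healthy source it behaves like a resumable dd):

        # Clone the old SSD (sdX) onto the new SSD (sdY); the mapfile lets the copy
        # resume and records any sectors that could not be read.
        ddrescue -f /dev/sdX /dev/sdY /root/sdX-clone.map

     One caveat: a raw clone of a btrfs pool member copies the filesystem UUID and device id too, so having both the original and the clone connected at the same time can confuse btrfs.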
  7. So, I currently have 5 SSDs in a cache btrfs raid10 array, superfast, very happy with it. But I need to remove a controller card, which currently hosts 3 of the 5 SSDs. Those 3 are M.2 NVMe, and only one of them can remain, by putting it in a slot on the mainboard, so I'll at least try to move that one. This leaves me with 2 of the 5 cache drives suddenly missing. What is the best procedure here? (Besides of course copying the data off the cache array first so that I have a backup.) Should I first convert the cache array to, say, raid0, so as not to lose anything when removing drives? And yes, I'll be adding 2 new SSDs, so I can recreate the cache with 5 devices again. I'm just not sure about the order of doing all this. Anyone with experience doing this?
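     Purely as a sketch of what the underlying btrfs steps could look like (assuming the pool stays mounted at /mnt/cache, a backup exists, and the device paths below are examples; Unraid's own pool handling may do parts of this for you). Raid10 needs at least 4 devices, so the usual approach is to convert to a profile that tolerates fewer devices before removing any:

        # Convert data and metadata to raid1, which only needs 2 devices
        btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
        # Remove the two devices that have to go
        btrfs device remove /dev/nvme1n1p1 /dev/nvme2n1p1 /mnt/cache
        # Later, after connecting the new SSDs, add them and convert back to raid10
        btrfs device add /dev/sdX1 /dev/sdY1 /mnt/cache
        btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache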
  8. Thus far no issues or errors, everything in the green, and the VMs are very responsive. I didn't run proper performance tests beforehand, but the VMs were definitely slower and lagging before I pinned the cores, and that's all gone now.
  9. Just wanted to add how I expanded on this a little. Here's my /boot/syslinux/syslinux.cfg:

        default menu.c32
        menu title Lime Technology, Inc.
        prompt 0
        timeout 30
        label Unraid OS
          menu default
          kernel /bzimage
          append pcie_acs_override=downstream,multifunction isolcpus=2,8,4,10 nohz_full=2,8,4,10 rcu_nocbs=2,8,4,10 initrd=/bzroot
        label Unraid OS GUI Mode
          kernel /bzimage
          append pcie_acs_override=downstream,multifunction initrd=/bzroot,/bzroot-gui
        label Unraid OS Safe Mode (no plugins, no GUI)
          kernel /bzimage
          append initrd=/bzroot unraidsafemode
        label Unraid OS GUI Safe Mode (no plugins)
          kernel /bzimage
          append initrd=/bzroot,/bzroot-gui unraidsafemode
        label Memtest86+
          kernel /memtest

     So, as you can see, I added isolcpus=2,8,4,10 nohz_full=2,8,4,10 rcu_nocbs=2,8,4,10 in order to have VMs use two hyperthreaded pairs, 2/8 and 4/10 (of the 12 threads, numbered 0-11, on the Xeon I use). The first boot after I'd set this showed me a nice GUI dialog I didn't know existed in unRAID, warning me that a docker was pinned to one of the cores in use by a VM.
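     After rebooting, a quick sanity check that the isolation actually took effect (standard kernel interfaces, nothing Unraid-specific):

        # The kernel command line should show the three parameters
        cat /proc/cmdline
        # Should list the isolated CPUs, e.g. 2,4,8,10
        cat /sys/devices/system/cpu/isolated
        cat /sys/devices/system/cpu/nohz_full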
  10. What do you mean by that? The domain share is set to "Prefer cache"; I'm not sure what that means for a VM image of, say, 32 GB. Will it then always reside on the cache SSD/NVMe? And again: isn't each write within the VM routed to the cache either way? Why would it perform better for the entire image to be stored there? Only at start/boot and at shutdown, and that would save you a second or something, no?
  11. A gzip tar is corrupted. I tried finding out which one it is, and tried downloading a fresh copy of the flash unRAID*.zip from your link on AWS, but the download keeps failing, stalling at around 127 MB. Any insight on this error?
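     In case it helps, a sketch of how one might find which archive is the broken one (the /boot glob is an assumption; adjust it to wherever the archives actually live):

        # Test every gzip'd archive and report the ones that fail integrity checking
        for f in /boot/*.tgz /boot/*.tar.gz; do
            [ -e "$f" ] || continue
            gzip -t "$f" 2>/dev/null || echo "corrupt: $f"
        done

     Comparing the downloaded zip's md5sum against the checksum published for the release would also tell you whether the stalled download itself is the problem.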
  12. Somewhere in the forums I read that I 'should' use /mnt/cache/domains/debian/vdisk1.img rather than /mnt/user/domains/debian/vdisk1.img, but why? The total cache storage is filling up. If I set it to read from cache, what exactly does it do differently? Whenever something gets written from within the VM to its disk, doesn't it use the cache either way?

        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/domains/debian/vdisk1.img'/>
          <backingStore/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <alias name='virtio-disk2'/>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </disk>
  13. Same here, UPS and a very reliable 220V mains network too; barely ever any outages that weren't planned or announced beforehand. There are things you can set in /etc/sysctl.conf to make more use of RAM, but I did that already, and even then it still uses only around 12 GB max. This is a Linux issue in general, I've noticed. The distros all claim they cache and use tmpfs, while in reality you barely see the RAM getting much I/O over time. As soon as you manually create and mount a RAM disk, it suddenly changes. So I'm guessing that's also the case with Slackware and unRAID: you have to create a ramdisk with a static size and name, then have unRAID use that as its preferred cache storage medium.
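     A minimal sketch of the kind of thing I mean (the values are examples, not recommendations; the sysctl knobs are standard Linux VM settings and the tmpfs mount is the "static ramdisk" approach):

        # /etc/sysctl.conf: let more dirty data sit in RAM before it gets flushed
        vm.dirty_ratio = 40
        vm.dirty_background_ratio = 10
        vm.vfs_cache_pressure = 50

        # Manually create and mount a fixed-size RAM disk
        mkdir -p /mnt/ramdisk
        mount -t tmpfs -o size=20G tmpfs /mnt/ramdisk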
  14. My home network mostly runs on 10Gbit Ethernet, and the unRAID server is even connected over 2 NICs using link aggregation. The server has 32GB of DDR4 in it, of which it barely ever uses more than a third. That surprised me, but it's another topic. I'd prefer my RAM to always be used to its max, and therefore I'd like to either have a RAM disk set to (for example) 20GB of the RAM left free at all times, or have the OS max it out automatically for me. I'd say having RAM used for the I/O of data streams is highly preferable to using the SSD storage, yet unRAID's Slackware does not seem to do much caching in RAM. I've been monitoring the server's RAM usage with netdata for over a month.
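     For anyone watching the same thing, this is roughly how I check it from the shell (standard tools; the "buff/cache" column is the page cache the kernel keeps on top of application memory):

        # Snapshot of used memory vs. page cache
        free -h
        # Running view of memory, swap and block I/O every 5 seconds
        vmstat 5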
  15. You need to grow up and stop yelling like a toddler. And mind your own business. From http://xfs.org/index.php/FITRIM/discard:
     > "The kernel must include TRIM support and XFS must include FITRIM support (this has been true for Linux since v2.6.38, Jan 18 2011)"
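     For context, FITRIM is the ioctl the fstrim tool uses, so a manual trim of a mounted XFS disk looks roughly like this (the mount point is an example):

        # Trim free space on a mounted XFS filesystem and report how much was trimmed
        fstrim -v /mnt/disk1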
  16. Would any of you care to make a functional, stable plugin out of this script and this other script below?
  17. "All that"? Looks to me like a little config editing and you're done. Seriously, it's 2019 and Unraid does not support TRIM in its array? This should at the very least be mentioned before trying to sell it.
  18. Isn't it about time this gets fixed then? And by fixed I mean making sure SSDs can be fluently added to and maintained in an unraid array. By now this is really becoming an unwelcome feature of Unraid. Funny enough, I totally did not expect this issue to be here; since it is called Unraid, I did not expect it to have the exact same issues as any other RAID array. (Noteworthy here is CoveCube's StableBit DrivePool, which of course has no issues with SSDs whatsoever, and I would hope Unraid goes their route, at least optionally.) I think this should be mentioned loud and clear on the front page when you buy a license: SSDs are not supported as part of an unraid array.
  19. This must have been asked before, but I could not find it. I currently have quite a few SSDs formatted with xfs in the Unraid array. Is it still not recommended to do so, and if so, why exactly? Because there are terabyte-class NVMe M.2 drives now, and the hybrid HDD/SSD media. Either way, thus far I have several types of SSDs running just fine in the array, except for the warnings.
  20. That would be the case if it was logical division (x8 and x8 bandwidth divided over the x16 + x8 slots), but I don't think it is. I'll let y'all know what it ends up doing.
  21. I don't think so, the PCI-E x16 slot is already set to x8/x4/x4 mode in BIOS. Check this mail correspondence I had with ASRock Rack about it:

     > -----Original Message-----
     > Subject: RE: #ULTRA QUAD M.2 CARD# compatibility motherboard
     >
     > Hello,
     >
     >> My colleague from ASRock Rack borrowed me an E3C246D4U.
     >> The bifurcation settings that are available for the PCIe 3.0 x16 slot are:
     >> X16
     >> X8/x8
     >> X8/x4/x4
     >>
     >> As far as I can tell, the C246 chipset does not support x4/x4/x4/x4.
     >
     > Is this not fixable by a workaround in BIOS?
     > Or truly a strict limitation of the hardware for C246?
     >
     >> I tested with an Ultra Quad M.2 Card on this motherboard, filled with 4 NVME M.2 SSDs.
     >>
     >> Set PCIE6 Link Width to x8/x4/x4.
     >>
     >> As expected, 3 out of 4 SSDs were detected (sockets M2_1, M2_3 and M2_4 on the Ultra Quad M.2 Card).
     >
     > OK, so I can run 3 of the 4 slots. At least that's something.
     > I was not aware of such strange limitations for a very current Intel chipset.
     > I truly don't understand these manufacturers of chipsets.
     > Why would you create support for x8/x4/x4, but not x4/x4/x4/x4?
     > Seems to me that's just some strange management decision, not an actual hardware limitation.
     >
     > But hey, thanks a lot for this valuable info!
     >
     > Regards,

     Hello,

     I would think it is an artificial limitation built into the chipset/CPU, to differentiate between different price ranges/classes. To be sure I have asked ASRock Rack R&D to check. When I get a reply I will let you know :)

     Kind regards,
     ASRock Support
  22. Ah, that's actually a really valid reason I wouldn't immediately think of. I'll do some switching around here between that older LSI Logic MegaRAID SAS 8308ELP and an LSI SAS 9207-8i HBA (which I now have in my main desktop with Covecube's StableBit DrivePool). The only issue is that this last card is PCIe x8, which means I have to put it in the x8 slot on this ASRock E3C246D4U server board, where it is shared with the PCI-E x16 slot through a "PCI-E switch" straight to/from the Xeon. The x16 slot now has a card with 3 M.2 NVMe SSDs, and the manual says the x16 slot will auto-switch to x8 when that (other) x8 slot is occupied. Not sure if the SSD card is using the full x16 bandwidth. I'll have to check.
  23. I can set it to RAID-0 in the LSI BIOS. I used to have an older HP P222 RAID controller card that I simply set to RAID-0 for each individual drive, which worked for other Linux software, so I'm hoping this will work too.
  24. Do you think an LSI Logic MegaRAID SAS 8308ELP card will work with the current kernel in unraid? It's an older-type SAS card with PCIe x4 (not x8) pins, which is also exactly what I need, since that x4 slot is what is still free on my mainboard. It's unclear to me which chipset is used on this card (the chip markings are obscured with a marker).
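     If the card can be put into any running Linux box, a quick way to see the actual chipset and whether a driver claims it (standard pciutils commands, nothing unraid-specific):

        # List LSI devices with their PCI vendor:device IDs
        lspci -nn | grep -i lsi
        # Show which kernel module, if any, has bound to the card
        lspci -k | grep -iA3 lsi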