fluisterben

Members
  • Content Count: 86
  • Joined
  • Last visited

Community Reputation: 3 Neutral

About fluisterben

  • Rank: Advanced Member

Converted

  • Gender: Male


  1. OK, SSDs added to the cache pool, and I ran:

         ~# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache -v
         Dumping filters: flags 0x7, state 0x0, force is off
         DATA (flags 0x100): converting, target=64, soft is off
         METADATA (flags 0x100): converting, target=64, soft is off
         SYSTEM (flags 0x100): converting, target=64, soft is off

     I'll have to wait and see if it works, but it looks good thus far.

         ~# btrfs fi show
         Label: none  uuid: f18f37c9-5244-4567-b88f-0bdcaa32e693
                 Total devices 7  FS bytes used 937.73GiB
                 devid    2 size 894.25GiB used 893.54GiB path /dev/nvme0n1p1
                 devid    3 size 894.25GiB used 894.25GiB path /dev/sdp1
                 devid    4 size 894.25GiB used 894.25GiB path /dev/sdn1
                 devid    6 size 953.87GiB used 781.50MiB path /dev/sdj1
                 devid    7 size 953.87GiB used 781.50MiB path /dev/sdl1
                 *** Some devices missing

         Label: none  uuid: dfa50f2a-9787-4d7a-88a5-7760f6b2e8a6
                 Total devices 1  FS bytes used 1.62GiB
                 devid    1 size 20.00GiB used 5.02GiB path /dev/loop2

         Label: none  uuid: df5fea13-a625-4b37-b7c2-7fcc3328bc65
                 Total devices 1  FS bytes used 604.00KiB
                 devid    1 size 1.00GiB used 398.38MiB path /dev/loop3

     I still need to do a new config to get rid of the ghost devices 1 and 5, I guess, but there's no hurry for that, is there?
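     In case it's useful, a minimal way to follow the conversion while it runs, assuming the pool stays mounted at /mnt/cache (a sketch, not anything built into the unRAID GUI):

         # Shows the progress of the running balance; re-run it (or wrap it in `watch`) until it reports no balance found.
         btrfs balance status -v /mnt/cache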
  2. Why is it bad to wipe the new devices? There's nothing on them. So, here's what I've done so far:
     - I (successfully) converted the 5-SSD btrfs cache to raid6 (coming from raid10).
     - Took out 2 of the 5 SSDs and connected the 2 new SSDs.
     - Fired up unRAID again.
     The array started, but I can't do anything regarding disks, because it says "Disabled -- BTRFS operation is running", so I cannot stop the array and/or format the new SSDs. Under the Cache it says "Cache not installed" and then shows the Cache 2, Cache 3 and Cache 4 SSDs as normal (because they *are* installed). Is there a way to see the BTRFS operation's status? It shouldn't take too long since they're fast SSDs, so they should be able to rebuild their raid6 with the 2 SSDs missing, shouldn't they?
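     A hedged sketch of how the pool's state can be inspected from a shell, assuming the cache is still mounted at /mnt/cache:

         btrfs filesystem show /mnt/cache   # lists member devices and flags any that are missing
         btrfs balance status /mnt/cache    # reports whether a balance/conversion is still running
         btrfs device stats /mnt/cache      # per-device error counters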
  3. Yes, I did try the unBALANCE plugin, but it keeps telling me about permissions and errors, which simply aren't valid (I've thoroughly checked), and then it doesn't allow me to, so to say, unbalance a drive out. Still, the array rewriting every sector of a new drive seems horribly overkill. I'm more for the way StableBit DrivePool does it, where it basically lets you say which dirs need to have which number of copies in the pool, and each drive's content is accessible separately. In fact, when I first started with unRAID, I thought it was more similar to CoveCube's DrivePool; turns out it isn't, it's just another RAID array, and frankly, even the name 'unRAID' isn't really appropriate. All it is, is a GUI for 2 RAID arrays (the cache and the parity-controlled array).
     Here's what I think is really missing in unRAID: I get notices that a drive has a growing number of bad sectors and errors, and then there's nothing that tells me how to save the files that aren't corrupted yet on that drive; it just leaves me with "bad drive bad drive red alert!". Honestly, that's just not the way to go. It should have a button to safely decommission the drive and safeguard its content. Instead, suddenly the GUI isn't friendly anymore, and we need to go to a shell and dd or ddrescue and such. It's such a Linux disease, pretending to offer a GUI and user-friendly everything, and when push comes to shove we all need to be sysadmins and go shell-scripting again. Don't get me wrong, I like being on a shell with ssh, but it's not unRAID's intended use-case.
  4. This really is a missing feature! When the array has more than enough free storage space to entirely decommission a drive, it would be best to be able to trigger moving its data off before taking out the bad drive. This would also greatly speed up the data restoration when a new drive is put in, since there's less reading and writing to be done.
  5. OK, so I can rely on unRAID knowing which copies of the files on the array are intact? Some will presumably be corrupt, because this disk5 is quickly dying on me. I will take it out, assuming the parity knows.
  6. In CoveCube's StableBit DrivePool I can then decide to remove a drive, with these options (see attached). Is there something similarly easy in unRAID to kick a bad disk out? I tried using the unBALANCE plugin for that, but it gives me so many errors I can't even solve (they're probably not even correct, since they're not possible) that I'm not sure it's the right tool for this. Sure, I can copy or move data from /mnt/disk5 to /mnt/cache or something on the command line, but that too seems not the way to go, since I'm not sure how unRAID then knows what happened.
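     If it does come down to the command line, a hedged sketch of pulling what's still readable off the failing disk (the destination disk is a placeholder, and this is plain file copying, not an unRAID-aware operation):

         # Copy whatever is still readable from the failing disk to another data disk;
         # --ignore-errors keeps rsync going past files with read errors.
         rsync -avh --progress --ignore-errors /mnt/disk5/ /mnt/disk3/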
  7. Can I use ddrescue just for cloning (cache) disks as well, i.e. without having to recover anything? I need to move data from 2 old SSDs to 2 new SSDs (where the 2 old SSDs are part of a 5-disk SSD RAID10 array).
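     A hedged sketch of a plain clone with ddrescue (device names are placeholders; note that cloning a btrfs pool member also duplicates its filesystem UUID, which can confuse btrfs if the old and new drives are connected at the same time):

         # Straight device-to-device copy with a mapfile so an interrupted run can be resumed.
         ddrescue -f /dev/sdX /dev/sdY /root/sdX-clone.map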
  8. So, I currently have 5 SSDs in a cache btrfs raid10 array, superfast, very happy with it. But I need to remove a controller card, which currently powers 3 of the 5 SSDs. Those 3 are M.2 NVMe, and only one of them can remain, by putting it in a slot on the mainboard. So I'll at least try and move that one. This leaves me with 2 of the 5 cache drives suddenly missing. What is the best procedure here? (Besides, of course, making sure I copy the data from the cache array so that I have a backup.) Should I first convert the cache array to, say, raid0, so as not to lose anything when removing drives? And yes, I'll be adding 2 new SSDs, so I can recreate the cache with 5 devices again. I'm just not sure about the order of doing all this. Anyone with experience doing this?
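     For what it's worth, a hedged sketch of shrinking the pool in place before physically pulling drives (device names are placeholders; btrfs raid10 needs at least 4 members, so the profile has to be converted down before the pool drops below that):

         btrfs device remove /dev/nvme1n1p1 /mnt/cache                     # 5 -> 4 devices, still raid10
         btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache    # raid1 only needs 2 devices
         btrfs device remove /dev/nvme2n1p1 /mnt/cache                     # 4 -> 3 devices
         btrfs filesystem show /mnt/cache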
  9. Thus far no issues or errors, everything in the green, and the VMs are very responsive. I didn't run proper performance tests beforehand, but the VMs were definitely slower and lagging before I pinned the cores; that's all gone now.
  10. Just wanted to add how I expanded on this a little. Here's my /boot/syslinux/syslinux.cfg:

          default menu.c32
          menu title Lime Technology, Inc.
          prompt 0
          timeout 30
          label Unraid OS
            menu default
            kernel /bzimage
            append pcie_acs_override=downstream,multifunction isolcpus=2,8,4,10 nohz_full=2,8,4,10 rcu_nocbs=2,8,4,10 initrd=/bzroot
          label Unraid OS GUI Mode
            kernel /bzimage
            append pcie_acs_override=downstream,multifunction initrd=/bzroot,/bzroot-gui
          label Unraid OS Safe Mode (no plugins, no GUI)
            kernel /bzimage
            append initrd=/bzroot unraidsafemode
          label Unraid OS GUI Safe Mode (no plugins)
            kernel /bzimage
            append initrd=/bzroot,/bzroot-gui unraidsafemode
          label Memtest86+
            kernel /memtest

      So, as you can see, I added

          isolcpus=2,8,4,10 nohz_full=2,8,4,10 rcu_nocbs=2,8,4,10

      in order to have VMs use two hyperthreaded pairs, 2,8 and 4,10 (of the 12 logical cores, 0-11, in the Xeon I use). The first boot after I'd set this showed me a nice GUI dialog I didn't know existed in unRAID, warning me that some docker was pinned to one of the cores that was in use by a VM.
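      A quick way to verify the isolation actually took effect after a reboot (a sketch; these are standard procfs/sysfs paths, nothing unRAID-specific):

          cat /proc/cmdline                       # should contain the isolcpus/nohz_full/rcu_nocbs parameters
          cat /sys/devices/system/cpu/isolated    # should list 2,4,8,10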
  11. What do you mean by that? The domain share is set to "Prefer cache"; I'm not sure what that means for a VM image of, say, 32 GB. Will it then always reside on the cache SSD/NVMe? And again: isn't each write within the VM routed to the cache either way? Why would it perform better for the entire image to be stored there? Only at start/boot and at shutdown would that save you a second or something, no?
  12. A gzip tar is corrupted. I tried to find out which one it is, and tried downloading a fresh copy of the flash unRAID*.zip from your link on AWS, but the download keeps failing, stalling at around 127 MB. Any insight into this error?
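      In case it helps, a hedged sketch of resuming the stalled download and testing the archive (the URL and file name are placeholders for the actual link and release):

          wget -c "https://example.com/unRAIDServer.zip"   # -c resumes a partially downloaded file
          unzip -t unRAIDServer.zip                        # tests every member of the zip for corruption
          md5sum unRAIDServer.zip                          # compare against the published checksum, if one is provided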
  13. Somewhere in the forums I read that I 'should' use /mnt/cache/domains/debian/vdisk1.img rather than /mnt/user/domains/debian/vdisk1.img, but why? The total cache storage size is filling up. If I set it to read from cache, what exactly does it do differently? Whenever something gets written from within the VM to its disk, doesn't it use the cache either way?

          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/user/domains/debian/vdisk1.img'/>
            <backingStore/>
            <target dev='hdc' bus='virtio'/>
            <boot order='1'/>
            <alias name='virtio-disk2'/>
            <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
          </disk>
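      A quick sanity check of where that image physically lives (a sketch assuming unRAID's usual layout, where /mnt/user is a union of the cache pool and the data disks):

          ls -lh /mnt/cache/domains/debian/vdisk1.img 2>/dev/null
          ls -lh /mnt/disk*/domains/debian/vdisk1.img 2>/dev/null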
  14. Same here, UPS and a very reliable 220V mains network too. Barely ever any outages that were not planned or announced beforehand. There are things you can set in /etc/sysctl.conf to make more use of RAM, but I did that already, and even then it still uses only around 12 GB max. This is a Linux issue in general, I've noticed: the distros all claim they cache and use tmpfs, while in reality you barely see the RAM having much I/O over time. As soon as you manually create and mount a RAM disk, it suddenly changes. So I'm guessing that's also the case with Slackware and unRAID: you have to create a RAM disk with a static size and name, then have unRAID use that as its preferred cache storage medium.
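      For reference, a hedged sketch of the kind of /etc/sysctl.conf-style tuning meant above; the values are illustrative only, not a recommendation:

          sysctl -w vm.dirty_ratio=40               # allow up to 40% of RAM to hold dirty pages before forcing writeback
          sysctl -w vm.dirty_background_ratio=10    # start background writeback at 10%
          sysctl -w vm.vfs_cache_pressure=50        # keep dentry/inode caches in RAM longer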
  15. My home network mostly runs on 10Gbit ethernet, and the unRAID server is even connected through 2 NICs using link aggregation. The server has 32GB of DDR4 in it, of which it barely ever uses more than a third. That surprised me, but that's another topic. So: I'd prefer my RAM to always be used to its max, and therefore I'd like to either have a RAM disk set to (for example) 20GB of the RAM that's always left free, or have the OS arrange that maxing-out automatically for me. I'd say having RAM used for the I/O of data streams is highly preferable to using the SSD storage, while unRAID's Slackware does not seem to do much caching in RAM. I've been monitoring the server's RAM usage with netdata for over a month.
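      A hedged sketch of the fixed-size RAM disk idea (the mount point is a placeholder, and nothing here makes unRAID use it as a cache by itself):

          mkdir -p /mnt/ramdisk
          mount -t tmpfs -o size=20G tmpfs /mnt/ramdisk   # 20 GB tmpfs backed by RAM/swap
          df -h /mnt/ramdisk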