Everything posted by -Daedalus

  1. I just picked up a 2U case for a second server the other day, and I was looking to make a custom image for it. It's very simple: It's a mix of the 2u-hdds and 2u-vents. I went looking for the images so I could copypasta something together, but... I couldn't find them. Where do they actually live?
  2. One interesting quirk I've noticed here (also running a BTRFS cache pool, encrypted, on 6.8.3): I leave "iotop -ao" open, and after a minute or so I have maybe 30MB written. I stop a Docker container, and I have 120-150MB written. I start it, and it jumps another 100-150MB. I start and stop a VM, and this doesn't happen. I've no idea if this is expected behaviour, if it means anything, or if it helps at all, but I thought I'd mention it. server-diagnostics-20200416-1134.zip
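     For anyone who wants to reproduce this, the steps are just the following (the container name is a placeholder; any container should do):

         # Terminal 1: show accumulated I/O totals per process
         iotop -ao
         # Terminal 2: stop and start any container, watching the written total jump
         docker stop my-container
         docker start my-container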
  3. Now that I understand what's being asked, I'd also like it. It makes a lot of sense. +1
  4. Good shout; I meant to look at this a while back and forgot to. I'll absolutely be using this. I cannot believe I did not think of this. That's enough of a solve for me personally to be happy, but I stand by the feature request for something native in the UI. Thanks to you both!
  5. Just for giggles: Is it worth making a disk share, and backing up to that, just to see if there's a difference?
  6. Something that occurred to me the other day: if an array with n parity drives loses n+1 drives before it can be rebuilt, you've lost at least one drive's worth of data. At the moment we're trusting unRAID with where to put each file, and that's great, but it could make things very tricky in the event of disaster recovery. Some of my shares are split logically - seasons of a TV show, for example - but even then, that's looking through hundreds of TV shows to see what seasons, if any, are missing. In the case of shares where the split is handled entirely by unRAID, any random assortment of files could be gone. I imagine it wouldn't be too high on the list of priorities, but if you ever do need it, it would be a life-saver. So, tl;dr - I'd like to request some sort of index of each file, stored per disk (cache, array, and UD). The bare minimum could be a text file per disk. Ideally it would be tree-based with some sort of simple UI, and unRAID would check against the files actually on the disks and highlight missing items in the event of data loss.
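     A minimal sketch of the text-file-per-disk version, assuming the usual /mnt/disk1..N array mounts, /mnt/cache, and Unassigned Devices mounts under /mnt/disks, with the indexes kept on the flash drive (all paths are illustrative):

         #!/bin/bash
         # Write a plain-text listing of every file on each disk, so the
         # lists survive the loss of any one data disk.
         mkdir -p /boot/disk-indexes
         for d in /mnt/disk[0-9]* /mnt/cache /mnt/disks/*; do
             [ -d "$d" ] || continue
             find "$d" -type f > "/boot/disk-indexes/$(basename "$d").txt"
         done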
  7. In theory, absolutely. But I've found that USB sticks, when left in all the time, run extremely hot. I use USB sticks at work for document backup, writing maybe 20-50MB a day (so significantly more than unRAID, yes), and I've gone through three in the past year. Every time I take one out of the dock, it's too hot to hold for the first couple of seconds. I know that's more of a USB 3.0 issue, but it's very difficult to find high-quality 2.0 drives these days. A relatively small concern, I suppose, but it would be a nice option to have nonetheless.
  8. Perfect! If needed I can raise a specific feature request for it, but I think it would be something useful to have in the help text under vdisk when creating a VM.
  9. Thanks very much for the explanation. I didn't realise unRAID was smart enough to de-reference files like that. I actually just changed all my vdisk paths to /mnt/cache rather than /mnt/user because of some reports of FUSE slowing certain things down. So to be clear: when creating vdisks in a cache-only directory, we can use /mnt/user, and the vdisks will be mounted under /mnt/cache?
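     A quick way to sanity-check this, assuming a cache-only 'domains' share (the share and file names here are made up): the same file should be visible via both paths, with the /mnt/cache one being the real on-disk copy.

         # Same file, two views: the FUSE user-share path and the disk path
         ls -lh /mnt/user/domains/win10/vdisk1.img
         ls -lh /mnt/cache/domains/win10/vdisk1.img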
  10. Updated to 6.8.1 from 6.7.2 with no problems that I can tell.
  11. Fair enough. I never explicitly type 'exit' anyway; just pointing it out in case I was an edge case or something.
  12. If I'm remembering correctly, the individual drives on FlexRAID still have a filesystem on them. Can data be accessed on individual drives? If yes, then you can use the Unassigned Devices plugin(1) to mount one drive at a time in unRAID, and transfer the data across that way. If you have enough physical space for the drives, another option might be to install unRAID, assign your new disks to the array, then load your existing machine as a VM. You could pass all your FlexRAID disks to it, get it started, then transfer the data over the network. I think virtio is smart enough to keep the transfer internal to the box, too. You absolutely can preclear more than one drive at once. Depending on the number, you might be bottlenecked by your controller/chipset, but so far as I know, there isn't a limit. (1) It's kinda what it sounds like; it lets you make non-unRAID drives accessible on your unRAID box. Very handy if you need to copy data from an NTFS disk from a Windows machine, for example.
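     For the mount-one-at-a-time route, the copy itself could be as simple as this, assuming Unassigned Devices mounts the drive under /mnt/disks (the mount point and share names are made up):

         # Copy everything from the mounted FlexRAID disk into a user share
         rsync -avh --progress /mnt/disks/flexraid_disk1/ /mnt/user/media/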
  13. I actually came from FlexRAID myself, several years ago at this point. (And am very happy with the decision. unRAID isn't perfect - very little is - but it's vastly superior to FlexRAID.) I can't remember off the top of my head, but I'm pretty sure it allows for drive removal? I know you could expand a drive at a time, so it makes sense you could shrink it a drive at a time as well. If you have enough spare slots in your Shuttle/NetApp box, then you can copy data to unRAID with a starter disk, take that drive out of Flex, add it to unRAID, copy the next, remove, add, copy, and so on. If you want to be secure about it, add the parity at the start; if you want it to be faster (it'll be very slow if it has to calculate parity during the move), you can leave adding the parity disk until the end.
  14. I've seen the same thing on 6.7.2; I was hoping it got fixed in 6.8. In my case, I have VMs with vdisks on user shares and on an unassigned drive. I have the vdisks manually mapped, and each time I edit a VM from the GUI, the vdisk location is back on 'auto'. Once I select 'manual' the path populates correctly, but it's a little tiresome having to do it each time.
  15. As long as there's none anywhere near pins, you're likely fine. So, if the CPU socket (the inner portion; stuff on the retention mechanism is fine) and the inside of the PCIe slots are clear, then I'd say you're OK.
  16. I'm assuming this is a "No" as you would have looked at the time, but there's not a GPL version anywhere that does similar things? If this is something on the roadmap, it seems like it would be a necessity anyway.
  17. This is something I'd love to see as well. +1 I'm thinking, as you say, similar to ESXi. Graphed history for storage/network/disk/ram usage, per container/VM. Not a small ask, I know, but something unRAID is missing at the moment. Even the System Stats plug-in, being relatively basic, isn't bundled with unRAID stock.
  18. Holy thread necro, Batman! It's been in for a while:
  19. I get the feeling you might be ditching 6.7.2, and moving straight to 6.8, given all the changes you've talked about are for the latter and not the former. Probably not something you like answering, but are we days/weeks/months away from RCs starting for 6.8?
  20. unRAID's best feature (big or small)? The area where unRAID could improve the most? And it goes without saying, thank you for your truly wonderful contributions to the community.
  21. Correct. Basically, never mix user shares and disks. /mnt/cache -> /mnt/disk[x] is fine. /mnt/user/dir1 -> /mnt/user/dir2 is fine. "user" is an abstraction layer unRAID puts over the disks; it includes both the cache and the array disks.
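     To make the rule concrete (share and file names are made up):

         # Fine: disk to disk, both paths bypass the user-share layer
         mv /mnt/cache/media/file.mkv /mnt/disk1/media/
         # Fine: staying entirely within user shares
         mv /mnt/user/media/file.mkv /mnt/user/archive/
         # NOT fine: mixing /mnt/user and /mnt/diskX in one command risks
         # copying a file onto itself and truncating it to nothing
         mv /mnt/user/media/file.mkv /mnt/disk1/media/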