Everything posted by -Daedalus

  1. Just an FYI: The plugin "Server layout" does much of what you want (though I agree, something like this would be great to have natively). You can pick in what orientation your drives show up (columns and rows), then assign a drive to each slot. It will give you all the information on the drive (serial number, letter, firmware version, etc.), and there's also a "Notes" section for anything custom you might want to include.
  2. Ok, since this hasn't really gotten any feedback, let me distil it down to my most important question at the moment: Can I change my passphrase/keyfile (or switch from one to the other) later on, without having to reformat any disks?
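     For what it's worth: standard LUKS keeps several independent keyslots, so a passphrase can normally be added or removed without reformatting anything. A minimal sketch, assuming Unraid's encryption is plain LUKS underneath and wrapping the stock cryptsetup CLI (/dev/sdX1 is a placeholder):
     ```python
     # Sketch only: LUKS keyslots mean a new passphrase can be added and the
     # old one dropped in place; no data is re-encrypted. /dev/sdX1 is a
     # placeholder for the encrypted partition.
     import subprocess

     def add_key(device: str) -> None:
         # Prompts for an existing passphrase, then for the new one to add.
         subprocess.run(["cryptsetup", "luksAddKey", device], check=True)

     def remove_key(device: str) -> None:
         # Prompts for the passphrase to remove; only do this after
         # confirming the new key unlocks the device.
         subprocess.run(["cryptsetup", "luksRemoveKey", device], check=True)

     add_key("/dev/sdX1")     # add the new passphrase first
     remove_key("/dev/sdX1")  # then retire the old one
     ```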
  3. It's been a while since I've tested this, so by all means wait for someone else to chime in, but from what I recall, unRAID won't auto-add a drive to the array, even if it's the same one that was just removed. So to resolve this, you can stop the array, assign the drive back to the array, and let it rebuild. Then you can stop the array again, increase the number of array drive slots by one, and add your new drive.
  4. So, I recently picked up a few 10TB drives, so I'm expanding my array quite significantly for the first time in a while. This seems like a good time to begin adding encrypted drives to my array, if I decide to do such a thing. I'd love to hear feedback from people who have used it. Are there any gotchas? Is there any reason not to do this? I'm not sure exactly what I'll use as a key - whether it's a file on my local network, or just a passphrase (can this be changed after the fact without having to wipe drives?) - but I'd like to hear some general opinions really. Does it present any problems for accessing a single drive physically connected to a Windows machine, for example? Is there a performance penalty? Should this be done to the cache pool as well? Any other pain points that people didn't expect? Thanks in advance!
  5. Why was this, out of interest? I'd ideally like to put output from this on /system/logs, but instead the logs will be on the root of the /system share, which isn't wonderful from an organisational standpoint. I imagine I'm not the only one who would do this. It also seems a little much to have to create a logs share solely for this. Nit-picking, I'll grant you, but I figure this is the time for it, seeing as it's newly-added.
  6. I'm currently playing around with HA clusters. I also have some VMs set up for game servers that haven't been used in a month or more. The end result is I have a bunch of VMs that don't get touched too often, and remain offline. How about the ability to move a VM's VDs to array storage (some new share, or user0/Domains; I don't know if that would cause conflicts/issues under the hood)? The VM would show as offline on the Dashboard, with an indicator of some sort to say its VDs are no longer on the cache. On startup, if there is enough space on the cache, the VDs are moved back to the cache, and the VM is started. If there isn't enough space, a warning is presented, and the user is asked if they wish to start it anyway. This way users get more space for regular share transfers to be accelerated (my 1TB pool has about 150GB of seldom-used VDs that could be reclaimed). Automation of this process (if a VM isn't powered on for 'x' days, archive it) would be cool, but I'd be perfectly happy with a manual 'Archive' button on the Dashboard/VM Manager; rough sketch of that step below.
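     To make the idea concrete, here's a minimal sketch of what the manual 'Archive' step might look like, assuming libvirt's virsh is available and the default domains share layout (the VM name and paths are examples only):
     ```python
     # Sketch of the proposed "Archive" action: if the VM is shut off, move
     # its vdisk directory from the cache pool to the array. Paths assume
     # Unraid's default "domains" share; adjust for your layout.
     import shutil
     import subprocess
     from pathlib import Path

     CACHE = Path("/mnt/cache/domains")  # fast pool copy
     ARRAY = Path("/mnt/user0/domains")  # array copy; frees pool space

     def is_shut_off(vm: str) -> bool:
         out = subprocess.run(["virsh", "domstate", vm],
                              capture_output=True, text=True, check=True)
         return out.stdout.strip() == "shut off"

     def archive(vm: str) -> None:
         if not is_shut_off(vm):
             raise RuntimeError(f"{vm} is running; shut it down first")
         dst = ARRAY / vm
         dst.parent.mkdir(parents=True, exist_ok=True)
         shutil.move(str(CACHE / vm), str(dst))  # vdisk dir moves to array

     archive("game-server-1")  # hypothetical VM name
     ```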
  7. Hi all, Would be lovely to have settings to configure access to shares on an individual user's page as well. Depending on the use-case, it's easier to configure things on a per-share basis, or a per-user basis. Would be nice to have the option, see wonderfully artistic rendering below:
  8. I was just thinking of this the other day! Would be very handy to have alright. +1
  9. Agreed. Thanks very much for this one bonienl. Much appreciated.
  10. +1 from me. How many people have we had reporting lock-ups, restarts, crashes, etc., who then can't grab diags? I'm surprised this hasn't been implemented before now, for your own sanity, to be honest.
  11. (I'm not trying to be awkward here, I swear, but:) wouldn't it make more sense, given the new theme, to change the colour to orange, or similar? Seems that's the accent colour you're going for. I think the complaint wasn't that it wasn't blue, but simply that it wasn't a different colour to the "off" state.
  12. Would a kind soul care to post a screenshot of the new dashboard for those of us not running RC versions? Edit: Never mind, someone on Reddit posted some. Looks really nice! Off the bat, the only thing that sticks out at me is that the tile on the top left - server description - seems to take up a lot of space, and the information is largely duplicated on the right side of the banner. Might be worth thinking about condensing/removing some of this to free up space for the main interest items.
  13. +1 I'd love something like this as well. To expand on it: it would be great to be able to set up sync options too, something like the following (rough sketch after the list):
      • Pick your share (specific, all)
      • Pick your sync type (one-way backup, bidirectional sync)
      • Pick your schedule (day, month, etc.)
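      A rough sketch of what the one-way flavour might do under the hood, assuming rsync as the engine (the share name and destination are examples):
      ```python
      # Sketch: mirror one share to a backup destination with rsync.
      # "-a" preserves attributes; "--delete" prunes files removed from
      # the source, making it a true one-way mirror.
      import subprocess

      def backup_share(share: str, dest: str) -> None:
          src = f"/mnt/user/{share}/"  # trailing slash: copy contents
          subprocess.run(["rsync", "-a", "--delete", src, dest], check=True)

      backup_share("Photos", "/mnt/remotes/backup-nas/Photos/")
      ```
      Scheduling would then just be a matter of running this daily/monthly via cron or the User Scripts plugin.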
  14. Very nice! Off-topic, but is there a possibility of getting those pop-ups to match dark/light themes? Always seemed a bit jarring to me to have blazing white pop-up boxes on a black theme. I don't suppose there are global pop-up-bg/pop-up-text colour variables you can change?
  15. Before it gets buried too much: I saw in the 6.6.0 thread, some people needed to delete dynamix.plg to get the webUI showing properly. Is there anything specific that's been noted previously we should watch out for with this update?
  16. Can I ask what controller you're using? 300GB shouldn't take days. It should be done in about an hour and a half even at only 60MB/s (300GB ÷ 60MB/s ≈ 5,000 seconds), and you should be able to far exceed that. Also, if you have a bunch of other stuff writing directly to the array at the same time, this can cause disk thrashing on the parity drive(s), which can slow things down quite a bit.
  17. Valid point. I didn't think about the parity disk here. If you end up in a situation where you're writing to more than one disk, the parity disk will be having a hard time of it. Wouldn't be so much concerned with multiple things happening on the cache, given it'll outpace the disks by miles, but the parity disk thrashing is a valid point, and probably renders it moot, unless you were to limit the move to sequentially moving files as they come in.
  18. I'll be completely honest, and say that I don't know the low-level stuff with the mover; I don't know what functions it calls, how much overhead is involved, etc. I was only asking to do it on a per-file basis, as soon as a file is copied, because I imagined that would be less expensive than having the mover continually called just in case something new was added to the cache.
  19. I'm aware; however, it's not quite the same thing. You could set the mover to run at 10%+ utilisation, and if your cache drive is always above this, it'll do what I want, but I would imagine it's a more elegant solution to check if new files are added, and move only them.
  20. I know some of this can be handled with user scripts, but I'd like to see more native settings for the Mover. The main one I'm thinking of is treating the cache drive more like the cache on a RAID controller: once a file gets written to the cache, it is immediately moved to the array (rough sketch below). Useful for those who have small SSDs, for those only running a single SSD rather than a pool and who don't want to leave important files on the cache for hours unprotected, or for those who simply want the write performance increase and aren't worried about power consumption/noise from the array disks not spinning down. (Ideally, I'd love to have the cache act like this on an unassigned device, and leave VMs and Docker on the pool.)
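      Rough sketch of the move-on-write behaviour, using the third-party watchdog package (pip install watchdog); the share paths are examples, and a production version would need to handle files still being written:
      ```python
      # Sketch: watch the cache copy of a share and move each file to the
      # array as soon as its writer closes it. on_closed is Linux/inotify
      # specific (watchdog >= 1.0).
      import shutil
      import time
      from pathlib import Path

      from watchdog.events import FileSystemEventHandler
      from watchdog.observers import Observer

      CACHE = Path("/mnt/cache/Media")   # example share on the pool
      ARRAY = Path("/mnt/user0/Media")   # same share, array-only view

      class MoveOnClose(FileSystemEventHandler):
          def on_closed(self, event):
              if event.is_directory:
                  return
              src = Path(event.src_path)
              dst = ARRAY / src.relative_to(CACHE)
              dst.parent.mkdir(parents=True, exist_ok=True)
              shutil.move(str(src), str(dst))

      observer = Observer()
      observer.schedule(MoveOnClose(), str(CACHE), recursive=True)
      observer.start()
      try:
          while True:
              time.sleep(1)
      finally:
          observer.stop()
          observer.join()
      ```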
  21. A great idea. I've been feeling like my dashboard has been getting a little cluttered recently. Maybe something like what Google Images does when you click an image? I'd like something like this for VMs too, with groups like:
      • Hyper-V Cluster
      • ESXi Cluster
      • Game Servers
      • Development Machines
  22. Silly one, but since you quote 50% and 25% reductions in benchmark scores, have you looked to see which CCXs your cores are mapped to? If they're on one of the secondary CCXs (that don't have direct access to the IMC), then that might be part of the issue; one way to check is below. (Unless of course you've got much better scores changing nothing but the unRAID version back to 6.5.3. If that's the case, then feel free to ignore the above.)
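      Since each CCX shares an L3 cache, grouping logical CPUs by their L3 sharing list in sysfs approximates the CCX layout. A sketch assuming the usual Linux sysfs paths:
      ```python
      # Sketch: logical CPUs that share an L3 cache sit on the same CCX on
      # Zen parts, so the distinct L3 "shared_cpu_list" values approximate
      # the CCX layout.
      from pathlib import Path

      def l3_groups():
          groups = set()
          for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
              for idx in (cpu / "cache").glob("index*"):
                  if (idx / "level").read_text().strip() == "3":
                      groups.add((idx / "shared_cpu_list").read_text().strip())
          return sorted(groups)

      for i, cpus in enumerate(l3_groups()):
          print(f"CCX {i}: CPUs {cpus}")
      ```
      If the VM's pinned cores span two groups, that could explain part of the gap.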
  23. Dell's PERCs have a user-defined 'rebuild rate' (default of 30%), which as far as I understand is a QoS system for the I/O to the disks. I've no clue if something like this is possible with unRAID, but it would be functionally pretty similar to what you're suggesting: if other stuff is happening, prioritise that rather than the parity check.
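      For reference, stock Linux md-RAID exposes rebuild/check speed tunables in /proc; whether Unraid's modified md driver honours these is an assumption worth verifying, but the mechanism would look like this:
      ```python
      # Sketch: read and cap the md sync speed limits (values are KB/s).
      # Needs root; these are the stock-kernel paths, which Unraid's
      # custom md driver may or may not use.
      from pathlib import Path

      SPEED_MIN = Path("/proc/sys/dev/raid/speed_limit_min")
      SPEED_MAX = Path("/proc/sys/dev/raid/speed_limit_max")

      print("current:", SPEED_MIN.read_text().strip(), "-",
            SPEED_MAX.read_text().strip(), "KB/s")

      SPEED_MAX.write_text("50000\n")  # cap the check at ~50 MB/s
      ```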