Everything posted by -Daedalus

  1. I'm not running 6.10, but the post above should illustrate why iowait shouldn't be included in the system load on the dashboard. Everyone equates those bars with CPU load, because that's how they're portrayed. If it has to be there, maybe break it out into "Disk I/O" or something instead.
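To show what I mean about iowait, here's a minimal sketch of bucketing the fields of a `/proc/stat` cpu line so that iowait counts as idle time rather than load (field order per the standard `/proc/stat` layout; a real meter would diff two samples over time, this just shows the bucketing):

```shell
# Sketch: derive a "busy" CPU percentage from a /proc/stat cpu line,
# counting iowait as idle time rather than as load.
# /proc/stat field order: user nice system idle iowait irq softirq steal
cpu_busy_pct() {
    set -- $1               # split the "cpu ..." line into fields
    shift                   # drop the "cpu" label
    busy=$(( $1 + $2 + $3 + $6 + $7 + $8 ))   # everything except idle/iowait
    total=$(( busy + $4 + $5 ))               # idle ($4) and iowait ($5)
    echo $(( 100 * busy / total ))
}

# e.g. cpu_busy_pct "$(grep '^cpu ' /proc/stat)"
```

With this kind of split, a box that's just waiting on disk reports as mostly idle instead of pegging the bars.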
  2. Figures. I get off my ass after years of not reporting it... and it's fixed already. New model sounds great! And from what little I saw on Twitter looks pretty spiffy as well.
  3. This one has been around for a while (possibly longer than 6.7; I just think that's when I first noticed it), so it may have been reported already. I leave the dashboard open a lot of the time just for general monitoring. When navigating to another page after it's been left idle for some time, there will be a large spike in CPU usage (on the client machine) and a big drop in memory usage. I haven't charted this, but the effect is noticeable after about an hour: on moving to any other page, there will be a delay of maybe a second or so. After a couple of hours it's several seconds, and several hundred megs of RAM reclaimed. I'm not sure exactly when it happens - maybe overnight - but Chrome will eventually "Aw, Snap!" and crash. Certainly not the highest of priorities, I know, but I figured I'd mention it given one of the things we'll be getting in 6.10. I wouldn't want this to detract from an otherwise swanky new look. 😎
  4. Nope. You can add any size disks you want to a pool, and unRAID will figure it out. I personally ran a pool of 1TB + 2x500GB disks with no problems.
  5. Only thing I spot is the second USB drive. unRAID only boots off one, and doesn't support mirroring or redundancy there. You could of course script a backup from USB1 to USB2, but you'd still have to pair the key to the new USB on next boot. Otherwise, as Spencer said: Looks solid.
  6. +1 I'm doing this manually at the moment, but it would be nice to have it built-in as a mover option.
  7. +1 This makes absolute sense, and I agree with the philosophy that VMs and containers should have the same details and behaviours wherever possible.
  8. I like this topic. I've edited your post a little to include some things. I, for one, would like to know where the VM XML lives, for example, and whether the only things needed to restore VMs are the vDisks and XML.
  9. Yes, please! It would mean not having to jump to XML for a virtualized ESXi install.
  10. So I just checked this after updating to 6.9.1. Two 9207-8i's on P16 firmware. All my SSDs TRIM fine except for my 850 Evo. Something about returning zeros after discard? I remember reading that as a reason for issues with Samsung drives; can't find specifics at the moment though. Moved it to one of the motherboard ports and it TRIMs fine again. Haven't tried with P20 firmware yet.
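For anyone else chasing this: the kernel exposes whether a drive is even advertising discard support behind a given controller. A small sketch using the standard sysfs layout (device name is a placeholder):

```shell
# Sketch: a block device advertises TRIM/discard support when
# queue/discard_max_bytes in its sysfs directory is non-zero.
supports_trim() {
    # $1 = sysfs dir for the device, e.g. /sys/block/sdb
    max=$(cat "$1/queue/discard_max_bytes" 2>/dev/null || echo 0)
    if [ "$max" -gt 0 ]; then echo yes; else echo no; fi
}

# e.g. supports_trim /sys/block/sdb
```

If that reports no through the HBA but yes on a motherboard port, it's the controller/firmware path eating the discard capability rather than the drive itself.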
  11. I figured I should post on AMP about this, given the big warning on your Git page. I'll go do that and see what they come back with.
  12. Second issue for you, Cornelious: I really love the idea of sticky backups. To test this:
  - I created a trigger under "Schedule" to run a backup every 10 minutes from within the Minecraft instance.
  - I limited the backups to 12 under "Configuration > Backup limits" (the size options were left extremely large so as not to be a factor).
  - I let the game run to create a couple of automatic backups.
  - I created a manual backup called "STARRED" with a description, and Sticky = On.
  Any ideas as to why my sticky backup is still getting deleted? I would think it should keep it at the bottom and cycle through the 11 remaining backups every 10 minutes. Might it have something to do with some backups being created by "admin" and some by "SYSTEM"?
  13. That was it! Thank you, it wasn't immediately obvious that "instance" and "application" are treated differently.
  14. Hi! I'm using your container (thank you!) pretty successfully, except for one weird issue: the main instances page doesn't seem to be able to send start/stop commands to the individual instances (other commands seem to work, though: I can change port bindings, for example). I had a Minecraft instance set to autostart after 10 seconds. This status is reflected in the main menu, but if I actually open the instance, it's still off. Likewise if I manually start/stop from the main page - when I manage the instance to check, its status hasn't changed. I don't see much in the log, at least no obvious failures. I have this set up on a custom network with a static IP, if that makes any difference (networking is not my strong suit). Anything else I can check, or any ideas on where to look?
  15. I wouldn't mind, but I just gave a presentation last week on ZFS. You'd think I'd have remembered that file level and block level are different things. I blame the beer. Thanks guys, blonde moment of the day. Hopefully the only one.
  16. Good to know. I assumed a drive with no data = a zeroed drive, and therefore wouldn't affect parity. The wiki mentions running a clear-me script, but it doesn't mention doing anything special with the drives. I assume it adds a flag in the drive header or something as well?
  17. Hi all, Sanity check here. I removed two drives that had been cleared of data (ls -la showed 0 on each drive). I stopped the array, new config, kept parity and cache slots. Assigned everything back, removing drives 6 and 7, moving everything after slot 5 up two slots. Enter encryption passphrase, tick parity is valid, start array. All came back fine, all data is there. Run parity check, and lots of errors, as if it's recomputing everything. My understanding was if there was no data on the drives, parity shouldn't have to be rebuilt. At this point if I have to rebuild parity that's fine - not the end of the world - I just want to make sure I haven't made a massive oopsie. Thanks! server-diagnostics-20210123-1343.zip
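For what it's worth, the reasoning in my head was the usual single-parity identity: parity is the XOR of the data disks, and XOR-ing in an all-zero disk is a no-op, so removing one shouldn't change parity. A toy illustration with made-up byte values:

```shell
# Toy single-parity illustration: parity is the XOR of the data disks,
# and an all-zero (cleared) disk contributes nothing to it.
# Byte values are made up for the demo.
d1=170 d2=85 d3=0        # pretend d3 is the cleared disk
with_cleared=$(( d1 ^ d2 ^ d3 ))
without_cleared=$(( d1 ^ d2 ))
echo "$with_cleared $without_cleared"   # identical either way
```

Whether the drives were genuinely zeroed end-to-end (rather than just empty at the filesystem level) is exactly the part I'm asking about above.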
  18. +1 If it would cause too many problems, maybe each disk could be assigned an alias: keep the original /mnt/disk mountpoint, and give an option in the UI for displaying either the mountpoint name or the alias.
  19. As to your first problem, this is a known issue with RC2. Manually spin up your drives (even if they're already spun up, from what I understand) by clicking the little status LED symbol on the left. The temperatures should display correctly after this.
  20. I completely forgot you can do this through the UI now. Ignore my first post.
  21. Just to be clear here: pinned != isolated. They're different things. Pinning just means a VM can use that CPU core, but anything else can also use it if it's free. This is done by pinning the core in the GUI. Isolating a CPU core means unRAID doesn't touch it at all. This is done by appending isolcpus=x,y,z to your syslinux config on Flash > Main. If you want to fully isolate the cores so that only the VM uses them, you'll need to add that isolcpus entry to your syslinux config.
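The config snippets from the original post didn't survive, so here's a hypothetical before/after for a stock syslinux.cfg boot entry; the core numbers are placeholders, not a recommendation:

```
# Before (stock entry):
label unRAID OS
  kernel /bzimage
  append initrd=/bzroot

# After (cores 2, 3, 6, 7 isolated; pick cores matching your VM's pinning):
label unRAID OS
  kernel /bzimage
  append isolcpus=2,3,6,7 initrd=/bzroot
```

A reboot is needed for the kernel to pick up the new isolcpus list.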
  22. Valid points all. I hadn't considered the fact that no-one has written a plug-in for it yet. That likely says something about all this. And you're right; I hadn't considered that some people use disk shares/mappings either. I guess we're at the same point: feature request made, wait and see.
  23. I have to disagree. unRAID is billed as an appliance that gives you enterprise features for less money, and does lots of things without requiring as much user knowledge as a home-spun setup on a more common distro. If you're saying a regular user should be totally OK with using the terminal to zero a drive with dd, then you're kind of missing the point. I could actually see this saving time, in the sense that a user kicks off the drain operation, then comes back a day or two later, restarts the array at a more convenient time, and yanks the drive. How I'd see it happening is more like:
  - User selects drive to drain
  - Data is evacuated to other drives
  - Drive is zeroed out
  - System leaves a pending operation to remove the drive from the config on next array stop
  - Notification is left for the user about the pending array operation
  - User stops array
  - Config is changed such that the drive is removed
  - User starts array
  So this has gone from requiring several interactions in the UI, potentially installing a new plugin, as well as terminal commands (that have to be left running for ages, mind), to two clicks on a default unRAID install. Look, at the end of the day Linux gurus are always going to scoff at the noobs, but if it's a storage appliance first, and one that sells itself on flexibility at that (different sizes/types of drives, etc.), then removing a drive is what I'd call a fundamental operation - and likely an expected one, from a new user's point of view.
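For reference, the terminal step I'm grousing about is essentially the one below. A hedged sketch only: the target is a placeholder, and writing zeros to the wrong device is unrecoverable, so treat this as illustration rather than a recipe.

```shell
# Sketch of the manual "zero a drive before removal" step. DESTRUCTIVE:
# double-check the target. The optional MiB count exists only so the
# same function can be demoed safely against a regular file.
zero_device() {
    # $1 = target device/file, $2 = optional size in MiB (omit = fill it all)
    if [ -n "$2" ]; then
        dd if=/dev/zero of="$1" bs=1M count="$2" 2>/dev/null
    else
        dd if=/dev/zero of="$1" bs=1M 2>/dev/null
    fi
}

# e.g. zero_device /dev/sdX    # against the real (already-emptied!) drive
```

On multi-terabyte drives this runs for the better part of a day, which is exactly why I'd rather the appliance own the whole workflow.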
  24. I'm not ephigenie, but I liked the idea, so I'll give my two cents, for what it's worth: maybe something like a broad-strokes roadmap. Nothing too concrete, but even a post with headings like:
  - Features being worked on for the next major/minor release (multiple cache pools)
  - Bugs being squashed for the next major/minor release (SSD write amplification)
  - Future possibilities currently under investigation (ZFS)
  You could make the forum visible to community developers only. Or, if you're feeling particularly transparent, to forum members with enough posts that they've at least been semi-active and around for a while. The understanding would be that anything mentioned is subject to change or reprioritisation, and that obviously certain things can't be talked about for market/competitive reasons or whatever (or just because you don't like spilling all the beans on shiny new things). This would allow you to gauge community interest (at least, the portion of the community active on the forums) in given features, which might factor into prioritisation. As well, it gives us members a peek at the direction unRAID is heading, and an appreciation for why so-and-so wasn't added to the latest patch, or why such-and-such a bug is still around.