-Daedalus's Achievements





  1. I'm not running 6.10, but the post above should illustrate why iowait shouldn't be included in the system load on the dashboard. Everyone equates those bars with CPU load, because that's how they're portrayed. If it has to be there, maybe break it out into "Disk I/O" or something instead.
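For anyone curious, the kernel already tracks iowait separately from real CPU time, so breaking it out is entirely feasible. A quick way to see this on any Linux box, reading the standard `/proc/stat` layout:

```shell
#!/bin/sh
# /proc/stat's aggregate "cpu" line lists time (in jiffies) as:
#   user nice system idle iowait irq softirq ...
# iowait (field 6) is time spent idle while I/O was pending -- it is
# NOT CPU work, which is why folding it into a "CPU" bar is misleading.
awk '/^cpu /{printf "user=%s system=%s idle=%s iowait=%s\n", $2, $4, $5, $6}' /proc/stat
```

Tools like `top` and `mpstat` make the same distinction, showing `%wa`/`%iowait` as its own column rather than part of user or system time.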
  2. Figures. I get off my ass after years of not reporting it... and it's fixed already. The new model sounds great! And from what little I saw on Twitter, it looks pretty spiffy as well.
  3. This one has been around for a while (possibly longer than 6.7, I just think that was when I first noticed it) so it may have been reported already. I leave the dashboard open a lot of the time just for general monitoring. When navigating to another page after it's been left idle for some time, there will be a large spike in CPU usage (on the client machine), and a big drop in memory usage. I haven't charted this, but the effect is noticeable after about an hour; on moving to any other page, there will be a delay of maybe a second or so. After a couple of hours, it's several seconds, and several hundred megs of RAM reclaimed. I'm not sure exactly when it happens - maybe overnight - but Chrome will eventually "Aw, Snap!" and crash. Certainly not the highest of priorities I know, but I figured I'd mention it, given one of the things we'll be getting in 6.10. I wouldn't want this to detract from an otherwise swanky new look. 😎
  4. Nope. You can add any size disks you want to a pool, and unRAID will figure it out. I personally ran a pool of 1TB + 2x500GB disks no problem.
  5. Only thing I spot is the second USB drive. unRAID only boots off one, and doesn't support mirroring or redundancy there. You could of course script a backup from USB1 to USB2, but you'd still have to pair the key to the new USB on next boot. Otherwise, as Spencer said: Looks solid.
  6. +1 I'm doing this manually at the moment, but it would be nice to have it built-in as a mover option.
  7. +1 This makes absolute sense, and I agree with the philosophy that VMs and containers should have the same details and behaviours wherever possible.
  8. I like this topic. I've edited your post a little to include some things. I for one would like to know where VM XML lives for example, and if the only things needed to restore VMs are the vDisks and XML.
  9. Yes, please! It would mean not having to jump to XML for a virtualized ESXi install.
  10. So I just checked this after updating to 6.9.1. Two 9207-8i's on P16 firmware. All my SSDs TRIM fine, except for my 850 Evo. Something something zeros after discard? I remember reading this as a reason for issues with Samsung drives, can't find specifics at the moment though. Moved it to one of the motherboard ports and it TRIMs fine again. Haven't tried with P20 firmware yet.
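For anyone wanting to check the same thing on their own box, a couple of stock Linux commands show whether a drive's discard path is working (the mount point below is just an example):

```shell
#!/bin/sh
# Show per-device discard (TRIM) capability: non-zero DISC-GRAN and
# DISC-MAX values mean the kernel sees a usable discard path for
# that device (HBA firmware and drive both permitting).
lsblk --discard

# Manually TRIM a mounted filesystem (needs root); substitute your
# own cache/pool mount point.
# fstrim -v /mnt/cache
```

If a drive shows zeros for `DISC-GRAN`/`DISC-MAX` behind the HBA but non-zero on a motherboard port, that points at the controller/firmware rather than the drive itself.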
  11. I figured I should post on AMP about this, except for the big warning on your Git page. I'll go do that, see what they come back with.
  12. Second issue for you Cornelious: I really love the idea of sticky backups. To test this, I created a trigger to run a backup every 10 minutes from within the Minecraft instance under "Schedule". I limited the backups to 12 under "Configuration > Backup limits" (the size options were left extremely large so as not to be a factor). I let the game run to create a couple of automatic backups, then created a manual backup called "STARRED" with a description and Sticky = On. Any ideas as to why my sticky backup is still getting deleted? I would think it should be kept at the bottom, with the 11 remaining slots cycling every 10 minutes. Might it have something to do with some backups being created by "admin" and some by "SYSTEM"?
  13. That was it! Thank you, it wasn't immediately obvious that "instance" and "application" are treated differently.
  14. Hi! I'm using your container (thank you!) pretty successfully, except for one weird issue: the main instances page doesn't seem to be able to send start/stop commands to the individual instances (other commands seem to work, though: I can change port bindings, for example). I had a Minecraft instance set to autostart after 10 seconds. This status is reflected in the main menu, but if I actually open the instance, it's still off. Likewise if I manually start/stop from the main page - when I manage the instance to check, its status hasn't changed. I don't see anything much in the log, at least no obvious failures. I have this setup on a custom network with a static IP, if that makes any difference (networking is not my strong suit). Anything else I can check, or any ideas on where to look?