Everything posted by tjb_altf4

  1. A general question about the filesystem swap sits on, as I need to enable this functionality very soon. Pools aren't usable for this, so I will probably add a dedicated drive for swap. Is there an optimal filesystem to use for swap?
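     For reference, a minimal sketch of enabling swap on a dedicated device (the device name /dev/sdX1 is a placeholder); note that a raw swap partition takes no regular filesystem, as mkswap writes its own signature:
       mkswap /dev/sdX1     # write the swap signature to the partition
       swapon /dev/sdX1     # activate the swap space
       swapon --show        # verify it is in use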
  2. If you know your way around creating your own Docker templates, you should be able to reinstate AFP functionality through Docker if you really need to, e.g. https://hub.docker.com/r/cptactionhank/netatalk
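     A rough sketch of running that image (the share path is a placeholder, and any image-specific options should be checked against the image's docs; 548 is the standard AFP port):
       docker run -d --name netatalk \
         -p 548:548 \
         -v /mnt/user/myshare:/media/share \
         cptactionhank/netatalk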
  3. I've noticed it is at its worst when catching up to the blockchain. I cap my farmer container's resources to 2 CPUs (each fork gets its own pair) and 4GB RAM; that seems to keep everything under control for me.
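     In plain Docker terms that cap looks something like the following (the image name is a placeholder; on Unraid the equivalent flags go in the template's Extra Parameters field):
       docker run -d --name chia-farmer \
         --cpus=2 \
         --memory=4g \
         some/chia-farmer-image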
  4. @Squid is it possible to make the authoring mode work the same as the VM XML mode? I think that would solve this issue and probably bring some quality-of-life improvements for template authors in general, especially with private repos going away in 6.10.
  5. This has been irritating the hell out of me for the last few weeks and has caused me to chase my tail repeatedly, as I have multiple containers built from the same original template, which originally came from CA. The problem is that it constantly overwrites my metadata (links, icons, etc.) and re-adds paths and keys from the original template (ones that have since been removed or renamed), which in my case is causing stability and security issues due to the misconfiguration. I'd be super thankful if you could implement a GUI way to disable template updates, instead of needing to poke around the backend configuration files.
  6. Looks like this feature has been added to 6.10-rc2; if you have time, upgrade and give some feedback on whether it is working for you. Based on @SpaceInvaderOne's new 6.10-rc2 overview video, you need to manually edit the XML for the ARM CPU type to get up and running.
  7. Happens to me in Chrome all the time; currently on 6.9.2 and saw it this morning lol. I see it regularly, and I think it is caused by the browser putting tabs "to sleep" when not in use.
  8. haha yeah, the next design will hide the cables, all things going to plan. The array is the first row of disks, averaging 28.5C right now. The second row is mostly RAID0 pools; they have been under sustained R/W for a few hours and currently average between 33-36C. The 140mm fans are running at just under 1000rpm, so the setup is quiet enough to sit behind my desk and not bother me (I work from home). Note: ambient temp is 20C.
  9. Look in the docker.js referenced by the Docker page's source; you'll see the updateAll function with what you need.
  10. BTRFS pools default to RAID1 when created; afterwards you can rebalance via the GUI to various BTRFS RAID levels. The attached screenshots show the current options in 6.9.2 for a 4-disk BTRFS pool and for a 2-disk pool.
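     Under the hood, the GUI rebalance corresponds to a btrfs-progs convert filter, roughly like the following (the pool path is a placeholder; pick target profiles your disk count supports):
       btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/poolname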
  11. Passed 200TB recently with this spaghetti monster, only utilising 24 of 28 HDD slots so far.
  12. The new disks are set up into pools now and ready for use, which takes total storage in the server to just over 200TB.
  13. Your local syslog server is still enabled.
  14. This works, and I do it for chia currently, but there is a massive difference in the time to expand and balance a pool (days) vs a non-parity-protected array (minutes). I've also noticed sneakernets becoming highly popular for chia plotting (this would apply to media ingestion too), and Unraid arrays facilitate these very nicely. Another point: you don't want pools getting too big, borking themselves, and nuking your time and effort, so you end up running a tonne of smaller pools... it's a PITA.
  15. The current default settings of the tuning plugin disable the mover's schedule. Uninstall it, or seek assistance on the configuration in the mover tuning plugin thread (as Squid suggests).
  16. Not done this myself, but I'm pretty sure you need to put the template in a subfolder; the name of that subfolder becomes the header in the dropdown (like default, user, etc.), as sketched below. Also be aware the feature is being removed in 6.10.
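     A rough sketch, assuming user templates live in the usual dockerMan location on the flash drive (verify the path on your system; "mygroup" is a placeholder name that would become the dropdown header):
       mkdir /boot/config/plugins/dockerMan/templates-user/mygroup
       cp my-template.xml /boot/config/plugins/dockerMan/templates-user/mygroup/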
  17. In good news, the wipefs command logs what it wipes, so I was able to restore the erased filesystem; I have that drive up and running and will be migrating its data to its new pool later today. Note: the reboot in between committing the pool changes (via array start) seems to have also caused this issue.
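     For anyone in the same spot, wipefs can also keep its own backups, per its man page (the device name and offset below are placeholders taken from the backup filename):
       wipefs --all --backup /dev/sdX    # wipes signatures, saving each to ~/wipefs-sdX-<offset>.bak
       # restore a saved signature by writing it back at the recorded offset:
       dd if=~/wipefs-sdX-0x00000438.bak of=/dev/sdX seek=$((0x00000438)) bs=1 conv=notrunc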
  18. Recycle bin (for SMB shares). A great fail-safe for accidental deletions on network shares.
  19. iSCSI plugin for Unraid Server, which allows you to connect to storage on your NAS as if it were a local hard drive. Still, a lot of people don't seem to know this plugin exists, and I see iSCSI asked for in feature requests frequently.
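     From a Linux client, the connection side looks roughly like this with open-iscsi (the portal IP and target IQN are placeholders; your initiator tooling may differ):
       iscsiadm -m discovery -t sendtargets -p 192.168.1.10                         # list targets the NAS exposes
       iscsiadm -m node -T iqn.2021-10.local.nas:target0 -p 192.168.1.10 --login    # attach as a local block device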
  20. Just noticed the GUI is already updated with the correct capacity (the balance will still take a while), happy days! Thanks @JorgeB
  21. They were added on the previous boot, but the array wasn't started (related to the other bug report, I guess). Still, this log entry seems to know it should have been 4 devices; maybe some weirdness due to the reboot and possible drive allocation changes?
     Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool TotDevices: 2
     Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumDevices: 4
     Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumFound: 2
     Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumMissing: 0
     Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumMisplaced: 0
     Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumExtra: 2
     Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool LuksState: 0
     Thanks for the suggestion, I'll do that as soon as I have the chance.
  22. Attached. Note I went to balance again after the original balance finished, but then cancelled that operation when I realised it was going to be a long one, so there will be a cancelled balance operation in there. fortytwo-diagnostics-20211012-1615.zip
  23. The GUI shows incorrect capacity also; I would expect this to read 72TB, 35.1TB, ~36TB.
  24. I started with a BTRFS pool of 2x 18TB in RAID0, which had been running for a few months. I added another 2x 18TB to the pool a few days ago, and the operation completed this afternoon. Now that it has finished, I note that it does not show the expected capacity (72TB):
     Data, RAID0: total=31.88TiB, used=31.87TiB
     System, RAID1: total=32.00MiB, used=2.20MiB
     Metadata, RAID1: total=33.00GiB, used=32.57GiB
     GlobalReserve, single: total=512.00MiB, used=0.00B
     No balance found on '/mnt/chiapool'
     Normally I would run a balance a second time before posting, but this looks like it will take another few days... I don't want to waste time and energy if this is a known bug with a possible workaround. Tagging @JorgeB because you seem to be across these BTRFS issues. Thanks in advance.
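     For checking where things stand, these btrfs-progs commands are the usual starting point (pool path taken from the post above):
       btrfs filesystem usage /mnt/chiapool    # per-device allocation; shows whether the new disks hold data yet
       btrfs balance status /mnt/chiapool      # progress of any running balance
       btrfs balance start /mnt/chiapool       # full rebalance across all devices (can take days)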