
tjb_altf4
Members · 1,401 posts · 1 day won
Everything posted by tjb_altf4

  1. There might be some suggestions in the attached that help. For me it manifests as shfs at 100% when the array is running, but USB kworkers are the culprit when it's stopped. Moving USB connections around to different ports/controllers possibly helped.
  2. When creating a new pool with the name 'system', I get an error telling me I can't name a pool the same as a share. This is curious, as there doesn't seem to be any possible collision; you would just get something like /mnt/system/system/ where the names overlap. Strangely, I have created pools in the past that share a share's name, but those weren't what appears to be a reserved name. Is this a bug, or just a poorly described error (reserved names?). Secondly, if I remove the existing 'system' share, will I be able to create a pool with this name? Or are there hard-coded values that will prevent me from using this naming?
  3. A general question about the filesystem swap sits on, as I need to enable this functionality very soon... since pools aren't usable for this, and I will probably add a dedicated drive for swap, is there an optimal filesystem to use for swap?
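     As an aside on the question itself: a raw swap partition on a dedicated drive needs no filesystem at all. A minimal sketch (the device name is a placeholder):

       # write a swap signature to the partition and activate it
       mkswap /dev/sdX1
       swapon /dev/sdX1
       # confirm the kernel picked it up
       swapon --show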
  4. If you know your way around creating your own Docker templates, you should be able to reinstate AFP functionality through Docker if you really need to, e.g. https://hub.docker.com/r/cptactionhank/netatalk
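     For reference, a minimal sketch of running that image with plain Docker. TCP 548 is the standard AFP port; the container name and volume paths here are assumptions, so check the image's docs for its actual mount points:

       # map the AFP port and bind an Unraid share into the container
       docker run -d \
         --name=netatalk \
         -p 548:548 \
         -v /mnt/user/afp_share:/media/share \
         cptactionhank/netatalk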
  5. I've noticed it is at its worst when catching up to the blockchain. I cap my farmer container's resources to 2 CPUs (each fork gets its own pair) and 4GB RAM; that seems to keep everything under control for me.
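     The same caps expressed as plain Docker flags, in case it helps anyone; --cpus and --memory are standard Docker resource limits, while the container name and image tag are just illustrative:

       # limit the farmer to 2 CPUs and 4GB of RAM
       docker run -d \
         --name=chia-farmer \
         --cpus=2 \
         --memory=4g \
         ghcr.io/chia-network/chia:latest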
  6. @Squid is it possible to make the authoring mode work the same as VM XML mode? I think that would solve this issue and probably bring some quality-of-life improvements for template authors in general, especially with private repos going away in 6.10.
  7. This has been irritating the hell out of me for the last few weeks and has caused me to chase my tail repeatedly, as I have multiple containers I built based on the same original template that came from CA. The problem is that it constantly overwrites my metadata (links, icons, etc.) and adds paths and keys from the original template (ones that have since been removed or renamed), which in my case is causing stability and security issues due to the misconfiguration. I'd be super thankful if you could implement a GUI way to disable template updates, instead of needing to poke around the backend configuration files.
  8. Looks like this feature has been added to 6.10-rc2; if you have time, upgrade and give some feedback on whether it is working for you or not. Based on @SpaceInvaderOne's new 6.10-rc2 overview video, you need to manually edit the XML for the ARM CPU type to get up and running.
  9. Happens to me in Chrome all the time; currently on 6.9.2 and saw it this morning lol. I see it regularly, and I think it is caused by the browser putting tabs "to sleep" when not in use.
  10. Haha yeah, the next design will hide the cables, all things going to plan. The array is the first row of disks, averaging 28.5C right now. The second row is mostly RAID0 pools; they have been under sustained R/W for a few hours and currently average between 33-36C. The 140mm fans are running at just under 1000rpm, so the setup is quiet enough to sit behind my desk and not bother me (I work from home). Note: ambient temp is 20C.
  11. Look in the docker.js referenced by the Docker page's source; you'll see the updateAll function with what you need.
  12. BTRFS pools default to RAID1 when created; afterwards you can rebalance via the GUI to various BTRFS RAID levels. 6.9.2 currently offers one set of options for a 4-disk BTRFS pool and another for a 2-disk pool.
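     For anyone doing this from the CLI instead of the GUI, a rough sketch of converting profiles on an existing pool (the mount point and target levels are placeholders; the GUI rebalance does the equivalent for you):

       # convert data to RAID10 and metadata to RAID1 on a multi-disk pool
       btrfs balance start -dconvert=raid10 -mconvert=raid1 /mnt/poolname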
  13. Passed 200TB recently with this spaghetti monster, only utilising 24 of 28 HDD slots so far.
  14. The new disks are set up into pools now and ready for use, which takes total storage in the server to just over 200TB.
  15. Your local syslog server is still enabled
  16. This works, and I do it for chia currently, but there is a massive difference in time to expand and balance a pool (days) vs a non-parity-protected array (minutes). I've also noticed sneakernets becoming highly popular for chia plotting (this would apply to media ingestion too), and Unraid arrays facilitate these very nicely. Another point: you don't want pools getting too big, borking themselves, and nuking your time and effort, so you end up running a tonne of smaller pools... it's a PITA.
  17. The current default settings of the tuning plugin disable the mover's schedule. Uninstall it, or seek assistance on the configuration in the mover tuning plugin thread (as Squid suggests).
  18. I haven't done this myself, but I'm pretty sure you need to have the template in a subfolder; the name of that subfolder becomes the header in the dropdown (like default, user, etc.). Also be aware the feature is being removed in 6.10.
  19. In good news, the wipefs command logs what it wipes, so I was able to restore the erased filesystem; that drive is up and running and I will be migrating its data to its new pool later today. Note: the reboot in between committing the pool changes (via array start) seems to have also caused this issue.
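     A minimal sketch of that kind of restore, using illustrative values for the device, offset, and signature (wipefs logs the exact bytes and offset it erased; running wipefs with --backup would have saved them to a file instead):

       # wipefs logged something like:
       #   8 bytes were erased at offset 0x10040 (btrfs): 5f 42 48 52 66 53 5f 4d
       # write the logged magic bytes back at the logged offset
       printf '\x5f\x42\x48\x52\x66\x53\x5f\x4d' | \
         dd of=/dev/sdX bs=1 seek=$((0x10040)) conv=notrunc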
  20. Recycle bin (for SMB shares). A great fail-safe for accidental deletions on network shares.
  21. iSCSI plugin for Unraid Server, which allows you to connect to storage on your NAS as if it were a local hard drive. Still, a lot of people don't seem to know this plugin exists, and I see iSCSI asked for in feature requests frequently.
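     For anyone unfamiliar with the client side, a rough sketch of attaching a target from a Linux initiator using open-iscsi (the portal address and IQN here are placeholders):

       # discover targets exposed by the NAS, then log in to one
       iscsiadm -m discovery -t sendtargets -p 192.168.1.10
       iscsiadm -m node -T iqn.2021-10.local.nas:target1 -p 192.168.1.10 --login
       # the attached target then appears as a local block device (e.g. /dev/sdX)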
  22. Just noticed the GUI is already updated with the correct capacity (the balance will still take a while), happy days! Thanks @JorgeB
  23. They were added on the previous boot, but the array wasn't started (related to the other bug report, I guess). Still, this log entry seems to know it should have been 4 devices; maybe some weirdness due to the reboot and possible drive allocation changes?

       Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool TotDevices: 2
       Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumDevices: 4
       Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumFound: 2
       Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumMissing: 0
       Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumMisplaced: 0
       Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumExtra: 2
       Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool LuksState: 0

     Thanks for the suggestion, I'll do that as soon as I have the chance.
  24. Attached. Note that I went to balance again after the original balance finished, but then cancelled that operation when I realised it was going to be a long one, so there will be a cancelled balance operation in there. fortytwo-diagnostics-20211012-1615.zip