Everything posted by tjb_altf4

  1. Just upgraded to 6.5.55 without issue. This release includes an additional patch for the Log4j vulnerability: https://community.ui.com/releases/UniFi-Network-Application-6-5-55/48c64137-4a4a-41f7-b7e4-3bee505ae16e
  2. If bifurcation is available, it would only work in slot 1, as it needs a slot with x16 lanes. It's also possible that using slot 3 would drop slot 1 down to x8 lanes, meaning only half the M.2s would work (although this manual snippet doesn't indicate that). If you got it all working, the M.2s would be presented as individual devices, so you could allocate them in Unraid however you wish.
  3. There were a few issues that amplified writes; one of those was dockers like Plex doing health checks, which frequently made small changes to docker.img. The fix was to add the following to the Extra Parameters of the offending docker under Edit Container > Advanced View: --no-healthcheck
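     For context, --no-healthcheck is a standard docker run flag; a minimal sketch of the equivalent outside Unraid (container and image names are illustrative only):
     # disable the image's built-in HEALTHCHECK for this container
     docker run -d --name plex --no-healthcheck plexinc/pms-docker
     # or, when building your own image, disable it in the Dockerfile:
     # HEALTHCHECK NONE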
  4. For the config (only), just copy and paste the config using XML mode. It's a manual method, but not that arduous.
  5. If you can't create/change the entry point of the docker to do this (probably a no-go), I'd look at the Docker Folder plugin; it was originally meant to group dockers, but it also gives you the ability to define custom commands, among other functionality. Otherwise you could put in a feature request to have custom commands added to the vanilla Unraid UI.
  6. Are you trying to run a specific script from the context menu?
  7. As @trurl said, it already does this
  8. On the current version it's under Settings > Management Access > Local TLD. If your sig is current and you're on 6.7.2, I think this might be under Settings > Identification.
  9. Yes, same for me: whatever name:tag you build with locally, you specify that in the template. I usually build with a version tag and latest simultaneously for simplicity.
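     A minimal sketch of tagging a version and latest in the same build (image name and version are hypothetical):
     docker build -t myapp:1.2.3 -t myapp:latest .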
  10. You would have to copy (manually or otherwise) your template XML to /boot/config/plugins/dockerMan/templates-user/ That's how I'm doing it at the moment.
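     For example, from the web terminal (source path hypothetical):
     cp /mnt/user/backups/my-container.xml /boot/config/plugins/dockerMan/templates-user/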
  11. This might save your bacon from future OOM events
  12. I noticed in the libvirt log that it is repeatedly trying to open remote connections from your other server; this only happens in the 6.10rc2 logs. This might not resolve your issue, but it's worth removing these ISOs from the VM to rule it out in any case.
     2021-11-03 10:36:32.851+0000: 5854: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/AOMEI_Windows.iso': No such file or directory
     2021-11-03 10:36:32.851+0000: 5858: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/virtio-win-0.1.189-1.iso': No such file or directory
     2021-11-03 10:36:34.697+0000: 5855: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/AOMEI_Windows.iso': No such file or directory
     2021-11-03 10:36:34.697+0000: 5854: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/virtio-win-0.1.189-1.iso': No such file or directory
     2021-11-03 10:36:34.714+0000: 5858: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/AOMEI_Windows.iso': No such file or directory
     2021-11-03 10:36:34.715+0000: 5857: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/virtio-win-0.1.189-1.iso': No such file or directory
     2021-11-03 11:40:38.970+0000: 5858: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/AOMEI_Windows.iso': No such file or directory
     2021-11-03 11:40:38.971+0000: 5857: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/virtio-win-0.1.189-1.iso': No such file or directory
     2021-11-03 11:40:39.724+0000: 5856: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/AOMEI_Windows.iso': No such file or directory
     2021-11-03 11:40:39.725+0000: 5855: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/virtio-win-0.1.189-1.iso': No such file or directory
     2021-11-03 11:40:39.742+0000: 5854: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/AOMEI_Windows.iso': No such file or directory
     2021-11-03 11:40:39.743+0000: 5858: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/virtio-win-0.1.189-1.iso': No such file or directory
     2021-11-03 20:29:30.344+0000: 5857: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/AOMEI_Windows.iso': No such file or directory
     2021-11-03 20:29:30.345+0000: 5856: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/virtio-win-0.1.189-1.iso': No such file or directory
     2021-11-03 20:29:30.991+0000: 5858: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/AOMEI_Windows.iso': No such file or directory
     2021-11-03 20:29:30.991+0000: 5857: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/virtio-win-0.1.189-1.iso': No such file or directory
     2021-11-03 20:29:31.009+0000: 5856: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/AOMEI_Windows.iso': No such file or directory
     2021-11-03 20:29:31.010+0000: 5855: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/virtio-win-0.1.189-1.iso': No such file or directory
     2021-11-03 20:29:35.271+0000: 5854: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/AOMEI_Windows.iso': No such file or directory
     2021-11-03 20:29:35.272+0000: 5858: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/virtio-win-0.1.189-1.iso': No such file or directory
     2021-11-03 20:29:35.288+0000: 5857: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/AOMEI_Windows.iso': No such file or directory
     2021-11-03 20:29:35.289+0000: 5856: error : virQEMUFileOpenAs:11262 : Failed to open file '/mnt/remotes/BABYBUS-GREEN_isos/template/iso/virtio-win-0.1.189-1.iso': No such file or directory
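     If you prefer to edit the VM outside the GUI, a rough sketch using virsh (VM name hypothetical):
     virsh edit Windows10
     # then delete the <disk type='file' device='cdrom'> blocks whose <source file='...'/>
     # entries point at the missing /mnt/remotes/BABYBUS-GREEN_isos/... paths above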
  13. It's part of stock Unraid (unless I'm mistaken). Go to: Settings > Display Settings > Show array utilization indicator > Enabled. Don't forget to press Apply.
  14. Just used this guide to take advantage of the Cyber Monday sale, to upgrade a system that's currently in pieces. Very concise guide, and the extra troubleshooting steps were a great help! Great work by the IBRACorp team.
  15. Would be great in a future revision to see a basic text editor as an action, and/or an "open command line here" action to allow more complex operations in the web terminal. Otherwise, very much looking forward to this plugin.
  16. For new Maize farmers, there's a promo to receive 30 XMZ. Jump on their Discord for details if it's of interest.
  17. Not sure what the problem was (PEBKAC, probably). I had problems initially when creating it as a subfolder of my pool, so I thought I could only create it at the root of the pool; that's where it couldn't set NOCOW. I later created it as a subfolder again and it's now working as expected.
  18. I was wondering why the swap wasn't being used; it seems the BTRFS file needs to have the NOCOW attribute set. This probably isn't an issue for many, as they use an existing share that happens to have the NOCOW attribute set (which is then inherited), but it might be worth setting explicitly in other cases. It looks like NOCOW needs to be set on the subvolume before the swapfile itself is added (i.e. chattr +C swapfile); see the sketch below.
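     A minimal sketch of the usual btrfs swapfile setup (path hypothetical); note that NOCOW can only be applied while the file is still empty:
     truncate -s 0 /mnt/cache/swapfile    # create an empty file
     chattr +C /mnt/cache/swapfile        # set NOCOW before any data is written
     fallocate -l 8G /mnt/cache/swapfile  # now allocate the space
     chmod 600 /mnt/cache/swapfile
     mkswap /mnt/cache/swapfile
     swapon /mnt/cache/swapfile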
  19. Best prices on WD 18TB I've seen in 12 months on the AU store.
  20. Doing a bit of maintenance for the last few days, and I think I'm done. My cache is now utilising the EVO Plus NVMe drives I installed earlier in the year, and one of the old 960 Pros is now being utilised for a system pool. I managed to do this upgrade as a live migration, but it is not for the faint-hearted! You end up in a quasi-land, working half in the Unraid GUI and half on the command line with btrfs directly... not ideal (a sketch of the kind of commands involved follows below). The new system pool is so far being utilised for the old system share, i.e. docker.img and libvirt.img, but I've also installed the swapfile plugin, so I now have fast swap available. Why do I need a swapfile? Well, running lots of chia forks, transcoding, media apps and VMs simultaneously has caught me out with OOM from time to time when usage spikes, so I've added a safety net above the 64GB of RAM I have installed, which so far is working well, particularly with the chia fork farming. The system seems much snappier now, particularly on the Docker page when starting and stopping containers, so the accumulation of changes has made a positive impact, which is a great outcome.
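     For the curious, a rough sketch of the sort of btrfs commands a live pool migration involves (device names are hypothetical, and getting them wrong is destructive, so treat this as illustrative only):
     btrfs device add /dev/nvme0n1p1 /mnt/cache   # add the new device to the mounted pool
     btrfs device remove /dev/sdb1 /mnt/cache     # btrfs relocates data off the old device as it is removed
     btrfs filesystem show /mnt/cache             # confirm the result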
  21. Renaming the 'system' share to something else allowed me to create the 'system' pool. From there I could migrate the files from the existing system share to the system pool, then delete the original 'system' share. This wasn't really a bug, just an annoying series of steps and my lack of understanding of the collisions this warning was protecting me from. I believe @trurl was spot on with the assessment that it's an SMB export conflict IF the pool is used as a disk share. Thanks.
  22. Emby updated overnight, but I had to roll back to 4.6.4.0-3-01 this morning, as some (but not all) of my client devices couldn't connect. I believe this is an Emby issue, but I thought I'd give people a heads up in case they face similar issues.
  23. There might be some suggestions in the attached that help. For me it manifests as shfs at 100% when the array is running, but USB kworkers are the culprit when it's stopped. Moving USB connections around to different ports/controllers possibly helped.
  24. When creating a new pool with the name 'system', I get an error telling me I can't name a pool the same as a share. This is curious, as there doesn't seem to be any possible collision; you would just get something like /mnt/system/system/ where there is commonality. Strangely, I have created pools in the past that share a share's name, but those were not what appears to be a reserved name. Is this a bug, or just a poorly described error (reserved names)? Secondly, if I remove the existing 'system' share, will I be able to create this named pool? Or are there some hard-coded values that will prevent me from using this naming?