Leaderboard

Popular Content

Showing content with the highest reputation since 06/25/21 in Report Comments

  1. Nice one. I was rebuilding my server and decided to update the firmware, after which amd_iommu=pt no longer worked on my HP MicroServer, but after an hour of digging I finally caught this reply and iommu=pt did the magic.
    2 points
  2. lol, yeah I tend to do that from time to time. Plus seeing the writes slowly add up and the remaining life slowly tick down on the SSD has been an itch I needed to scratch for some time. Figure I will see what is possible under the best-case setup and then back off to a comfortable compromise after that. For example, the above Docker-side commands are well worth the effort IMHO, as they "just work" once you know which containers need which command, and they have a big effect on writes. Some of the appdata writes are pretty simple fixes as well, such as reducing/disabling…
    2 points
  3. Ok, so my latest write report came in and I am quite happy with the results. Before disabling docker logs / extending healthchecks to 1 hour, on a BTRFS-formatted drive with a BTRFS docker image: 75-85GB writes/day. Disabling the "low hanging fruit" docker logs on the same btrfs/btrfs setup: 50GB/day. Disabling almost all the docker logs and extending healthchecks to 1 hour: 39GB/day. Have a feeling that would come down a bit more if left for longer, so basically cut the writes in half, which is about right when you consider the next datapoint.
    2 points
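The healthcheck extension mentioned above can be done per container at run time. A minimal sketch, assuming an image that defines its own HEALTHCHECK; the container and image names are placeholders (on Unraid this flag would go in a container's "Extra Parameters" field):

```shell
# Override the built-in healthcheck interval so the check runs hourly
# instead of the default every 30 seconds, cutting periodic disk writes.
# "my-container" and "my-image:latest" are hypothetical names.
docker run -d --name my-container --health-interval=1h my-image:latest
```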
  4. It is only one day, and it does vary a bit day to day, but it looks like disabling those logs dropped the writes from 75-85GB/day to 50GB/day. Not bad really. Think I will give it one more day and then disable the healthchecks and see how that does.
    2 points
  5. For Unraid version 6.10 I have replaced the Docker macvlan driver with the Docker ipvlan driver. IPvlan is a new twist on the tried-and-true network virtualization technique. The Linux implementations are extremely lightweight because, rather than using the traditional Linux bridge for isolation, they are associated with a Linux Ethernet interface or sub-interface to enforce separation between networks and connectivity to the physical network. The end user doesn't have to do anything special. At startup, legacy networks are automatically removed and replaced by the new networks…
    2 points
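For readers who want to try ipvlan by hand before 6.10 does it for them, a network can be created with the standard Docker CLI. A sketch only; the parent interface, subnet, gateway, and network name below are assumptions to replace with your own values:

```shell
# Create an ipvlan Docker network bound directly to a physical NIC,
# avoiding the traditional Linux bridge. eth0 and the addresses are
# placeholders for your actual interface and LAN.
docker network create -d ipvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  my-ipvlan
```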
  6. Further corrections have been made in Unraid version 6.10; please test once it is available.
    1 point
  7. @codefaux hey, thanks. My server has been running for a nice 69 days without interruption. Thanks for your help.
    1 point
  8. @limetech Do you have any feedback? There are users who cannot move past 6.9.1, as their drives no longer spin down on releases newer than 6.9.1.
    1 point
  9. I just ran a test to see when it gets called and it looks like it will work perfectly! I ran a simple loop to print the date into the syslog, and it started basically first thing when I clicked shutdown in the GUI, and it waited until it was finished to continue the shutdown procedure! Exactly what I was looking for! Found I can run this command to shut down all the docker containers before doing the final rsync so they can close their files properly: docker stop $(docker ps -q) The more I look at this, the more I think it could be…
    1 point
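The stop-then-sync idea above can be sketched as a small pre-shutdown script. The ramdisk and destination paths here are hypothetical, standing in for wherever the tmpfs copy and the persistent copy of appdata actually live:

```shell
#!/bin/bash
# Sketch of a pre-shutdown hook: stop every container first so each one
# closes its files cleanly, then do a final flush of the ramdisk copy.
# /tmp/appdata and /mnt/cache/appdata are placeholder paths.
running=$(docker ps -q)
if [ -n "$running" ]; then
    docker stop $running          # graceful stop of all running containers
fi
rsync -a --delete /tmp/appdata/ /mnt/cache/appdata/
```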
  10. Um, wow. So with the latest changes with the appdata ramdisk, my writes for the last day were a mere 8GB?!? Now the rsync cron didn't work for the first few hours for some reason, so I changed it to hourly for the last ~13 hours; the real writes every 2 hours would most likely be a bit more, but honestly I might just stick with every hour if it is only ~16GB of writes. So to recap: BTRFS image > XFS SSD everything stock = ~25GB/day; BTRFS image > BTRFS SSD everything stock = 75-85GB/day; Docker Folder > BTRFS…
    1 point
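An hourly sync like the one described could be a plain cron entry. A sketch under the same assumption as before, that the ramdisk and persistent paths are placeholders for your layout:

```shell
# /etc/cron.d-style entry: flush the appdata ramdisk to the SSD at the
# top of every hour. Paths are hypothetical; --delete keeps the SSD copy
# an exact mirror of the ramdisk.
0 * * * * root rsync -a --delete /tmp/appdata/ /mnt/cache/appdata/
```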
  11. Really going down the rabbit hole now, trying to eliminate all unnecessary writes possible lol. I have been tweaking things based on the activity logs, slowly knocking out the big hitters. It has made a BIG difference in the logs themselves: I was racking up ~75MB of activity logs a day at first, and I had less than 1MB of activity logs overnight now! So far I have mostly just disabled the docker logging of most of the containers using the above commands. Extended the healthchecks of a few containers to 60 mins, but only like 2 problem containers so far. The log…
    1 point
  12. Ok, just putting this here as notes to myself and anyone else that finds this. So far the commands I am using to reduce writes start with disabling the low-level logging from the docker engine. When possible I use: --log-driver none This breaks some containers though, and it also disables the Unraid GUI logs, so for the picky containers, or ones I want to be able to see the logs for in Unraid, I use: --log-driver syslog --log-opt syslog-address=udp://127.0.0.1:514 I used my unraid server ip so that I could spin up a syslog-ng container and easily…
    1 point
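Put together, the two per-container options above look like this on the docker run command line (on Unraid they go in "Extra Parameters"). The image name is a placeholder; 514 is the standard syslog UDP port:

```shell
# Option 1: disable the log driver entirely. Cheapest on writes, but it
# breaks some containers and hides their logs in the Unraid GUI.
docker run -d --log-driver none my-image:latest

# Option 2: route logs to a local syslog collector (e.g. a syslog-ng
# container) instead of the docker image, keeping the writes off it.
docker run -d \
  --log-driver syslog \
  --log-opt syslog-address=udp://127.0.0.1:514 \
  my-image:latest
```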
  13. I explained it earlier but might not have posted the exact command: basically you set it to syslog and then a remote server, but with an invalid IP address. Put this in the "Extra Parameters:" section of the docker settings: --log-driver syslog --log-opt syslog-address=udp://192.168.1.100:514 This seems to disable the internal logging of the container without breaking it. The worst offenders so far seem to be the VPN-enabled containers and binhex containers for some reason. Healthchecks I am not sure about yet; have not had time to dig into that.
    1 point
  14. This is actually by design with Docker. We've discussed this at length before, and your options, each with their own caveats, are: enable host access to docker networks in Settings | Docker (Advanced View) (in my screenshot it is disabled since I don't use it); enable VLANs in Settings | Network Settings and add custom networks in Docker settings; or, if you have a spare network card, keep it separate from the default bond between the other network interfaces (usually bond0) and then set up a custom network there for docker. It is best to set up Unraid…
    1 point
  15. It has to be the br0 ones, as I've turned the others off completely. I really don't want to mess around with VLANs in my home network and complicate things further. I'll probably spin up a VM in ESXi for docker for now, and if this isn't fixed in the next few months, I may just end up migrating to a new platform. 6.7 broke things for me, as did 6.8 and 6.8.3, so I came from 6.6.7. I promised myself prior to 6.9.0 that if this was another failed upgrade, I'd look into alternatives to unRAID, which really sucks, as I have 2 unRAID Pro licenses and have been using unRAID for several years.
    1 point
  16. After building a custom QEMU 5.2 slackpkg and implementing the workaround described above, I was also able to use virtiofs to pass through directories on my Unraid host to my VMs. However, determining the correct compilation options for QEMU was a time-consuming, iterative process. I reached out to @limetech and they confirmed that QEMU 6.0 will be included in Unraid 6.10, which is coming "soon". For future readers of this thread: if you are not in immediate need of this functionality, I would recommend waiting for Unraid 6.10. If you cannot wait, I have a few notes that may help you get this working…
    1 point
  17. Same problem: a SMART spin-up every hour. It was the Mover config. I changed it from every hour to daily and the SMART spin-ups stopped.
    1 point