
Phoenix Down


Community Answers

  1. I just checked, and I misspoke. It has been so long that I forgot what I did. I actually have it set up as a bash script that is executed daily via User Scripts. I seem to recall my modifications kept getting wiped out, probably every time Autofan updated. It's possibly less of an issue now, since the last Autofan update was in February 2023. Here is the script. You will need to substitute the path to your own modified autofan script. Once the modified script is copied over, you need to kill and then restart autofan for each of your fans so the updated version can take effect. You can find the exact commands in the system log when Autofan starts. Don't just copy mine, since the argument values are specific to your system.

```bash
#!/bin/bash
echo Copying modified script from '/mnt/user/Data/Scripts/autofan' to '/usr/local/emhttp/plugins/dynamix.system.autofan/scripts/autofan'
cp /mnt/user/Data/Scripts/autofan /usr/local/emhttp/plugins/dynamix.system.autofan/scripts/autofan
echo Done.
echo
echo Starting Autofan...
echo
/bin/bash /usr/local/emhttp/plugins/dynamix.system.autofan/scripts/autofan -c /sys/devices/platform/it87.2624/hwmon/hwmon5/pwm3 -f /sys/devices/platform/it87.2624/hwmon/hwmon5/fan3_input -q
echo
echo Waiting 5 seconds before restarting...
sleep 5
echo
/bin/bash /usr/local/emhttp/plugins/dynamix.system.autofan/scripts/autofan -c /sys/devices/platform/it87.2624/hwmon/hwmon5/pwm3 -f /sys/devices/platform/it87.2624/hwmon/hwmon5/fan3_input -l 25 -t 35 -T 55 -m 5
/bin/bash /usr/local/emhttp/plugins/dynamix.system.autofan/scripts/autofan -c /sys/devices/platform/it87.2624/hwmon/hwmon5/pwm4 -f /sys/devices/platform/it87.2624/hwmon/hwmon5/fan4_input -q
echo
echo Waiting 5 seconds before restarting...
sleep 5
echo
/bin/bash /usr/local/emhttp/plugins/dynamix.system.autofan/scripts/autofan -c /sys/devices/platform/it87.2624/hwmon/hwmon5/pwm4 -f /sys/devices/platform/it87.2624/hwmon/hwmon5/fan4_input -l 25 -t 35 -T 55 -m 5
```
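For anyone adapting the script above, here is a sketch of the same sequence with the repeated hwmon paths factored into variables. The paths and argument values are copied from that script and are specific to that system; substitute your own, taken from the commands Autofan logs at startup. `DRY_RUN` defaults to 1, so running it only prints the commands; set `DRY_RUN=0` to execute them for real.

```shell
#!/bin/bash
# Sketch only: SRC, DST, HWMON, fan numbers, and the -l/-t/-T/-m values are
# assumptions copied from the post above -- substitute values from your system.
# DRY_RUN=1 (the default) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
SRC=/mnt/user/Data/Scripts/autofan
DST=/usr/local/emhttp/plugins/dynamix.system.autofan/scripts/autofan
HWMON=/sys/devices/platform/it87.2624/hwmon/hwmon5

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "$@"      # dry run: show the command
    else
        "$@"           # real run: execute it
    fi
}

run cp "$SRC" "$DST"

for fan in 3 4; do
    # Mirror the post's sequence for each fan: call with -q first, wait,
    # then relaunch with the full argument list.
    run /bin/bash "$DST" -c "$HWMON/pwm$fan" -f "$HWMON/fan${fan}_input" -q
    run sleep 5
    run /bin/bash "$DST" -c "$HWMON/pwm$fan" -f "$HWMON/fan${fan}_input" -l 25 -t 35 -T 55 -m 5
done
```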
  2. I know it's super late, but I just saw this. Not sure how I missed it the first time. Hopefully my fix worked for you.
  3. I reported this issue 2 years ago, along with the fix. Sad to see it's still an issue. But I understand that developers are volunteering their own time, and they have their own lives, which should always take priority. In any case, take a look at my post and the associated fix. I have a local fix in place, and set it up so the local fix is applied at Unraid startup (place it in the go file):
  4. Running into the same issue with Proton Mail. Anyone with any resolution?
  5. Not sure; I did not run into this issue with any of my 8 disks. Maybe there's a hint in this thread:
  6. Just trying to eliminate variables, see if it’s an issue with your large disk or something else.
  7. What happens if you try a different disk first?
  8. Have you encrypted any other disks in your system? Were they successful?
  9. Check the system logs (top right, left of the solid circle with a question mark). That sounds like the format was unsuccessful for some reason.
  10. See my reply above: Did you start the array back up after you changed the disk type to "XFS Encrypted"?
  11. Check to make sure you copied and pasted the correct checksums into the RAM disk script. Maybe you missed a digit at the end.
  12. If your Unraid version is different from the one in @Mainfrezzer's code example, then it's possible those files are also different. So yes, you should update the RAM Disk code with your own checksums. Note that doing this doesn't mean the RAM Disk code won't have any issues, just that the code won't abort itself. It might or might not have issues, but we won't know until someone tries it out whenever a new version of Unraid comes out.
  13. If you look at this line:

```bash
echo -e "45361157ef841f9a32a984b056da0564 /etc/rc.d/rc.docker\n9f0269a6ca4cf551ef7125b85d7fd4e0 /usr/local/emhttp/plugins/dynamix/scripts/monitor" | md5sum --check --status && compatible=1
```

There are two files, in the format "md5_checksum path_to_file":

```
45361157ef841f9a32a984b056da0564 /etc/rc.d/rc.docker
9f0269a6ca4cf551ef7125b85d7fd4e0 /usr/local/emhttp/plugins/dynamix/scripts/monitor
```

You can use md5sum to generate new checksums for those two files, like:

```bash
md5sum /etc/rc.d/rc.docker
```

Sounds like @Mainfrezzer's code already includes the new checksums for 6.12.4. But you can double-check yourself if you want.
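To see how that guard behaves, here is a self-contained demo of the same `md5sum --check` pattern. It uses throwaway temp files instead of the real Unraid paths, so it can run anywhere; the file names and contents are made up for illustration.

```shell
#!/bin/bash
# Demo of the "checksum  path" verification pattern used by the
# compatibility check, against two scratch files.
tmpdir=$(mktemp -d)
printf 'hello\n' > "$tmpdir/a"
printf 'world\n' > "$tmpdir/b"

# Capture "checksum  path" lines in exactly the format md5sum --check expects.
sums=$(md5sum "$tmpdir/a" "$tmpdir/b")

compatible=0
echo "$sums" | md5sum --check --status && compatible=1
echo "after generating: compatible=$compatible"   # prints compatible=1

# Change one file: its stored checksum no longer matches, the check exits
# nonzero, and compatible stays 0 -- this is how the guard aborts itself.
printf 'changed\n' >> "$tmpdir/a"
compatible=0
echo "$sums" | md5sum --check --status && compatible=1
echo "after modifying: compatible=$compatible"    # prints compatible=0

rm -rf "$tmpdir"
```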
  14. I'm holding out for RAM Disk compatibility before upgrading to 6.12.
  15. Just wanted to say thank you @mgutt! I had an earlier version of your RAM Disk installed. Wasn't even sure of the version, as it didn't have the version number in the comments. Must have been from a couple of years ago. In any case, after upgrading to 6.11 recently, I noticed that my Disk 1 would have frequent writes to it, which wakes it up along with my two parity disks. This is despite my having emptied all contents from Disk 1, with all of my Dockers running from the SSD cache pool. Also, new writes should have gone to the cache pool and not the array. I spent hours watching iotop and dstat, and I was about to pull my hair out when I noticed that Disk 1 would only wake up when certain Docker containers were running (specifically DiskSpeed and HomeAssistant). On a whim, I looked to see if there was a newer version of the RAM Disk available, and found this thread. I updated the RAM Disk code, and voila! No more disks waking up! Still not sure why certain Dockers were writing directly to the array or why it was always Disk 1, but I'm glad the new code fixed the issue.