Phoenix Down

Members • 134 posts
Everything posted by Phoenix Down

  1. Not sure; I did not run into this issue with any of my 8 disks. Maybe there's a hint in this thread:
  2. Just trying to eliminate variables, see if it’s an issue with your large disk or something else.
  3. What happens if you try a different disk first?
  4. Have you encrypted any other disks in your system? Were they successful?
  5. Check the system logs (top right, left of the solid circle with a question mark). That sounds like the format was unsuccessful for some reason.
  6. See my reply above: Did you start the array back up after you changed the disk type to "XFS Encrypted"?
  7. Check to make sure you copied and pasted the correct checksums into the RAM disk script. Maybe you missed a digit at the end.
  8. If your Unraid version is different from @Mainfrezzer's code example, then it's possible those files are also different. So yes, you should update the RAM Disk code with your own checksums. Note that doing this doesn't mean the RAM Disk code won't have any issues, just that the code won't abort itself. It might or might not have issues, but we won't know until someone tries it out whenever a new version of Unraid comes out.
  9. If you look at this line:

     echo -e "45361157ef841f9a32a984b056da0564 /etc/rc.d/rc.docker\n9f0269a6ca4cf551ef7125b85d7fd4e0 /usr/local/emhttp/plugins/dynamix/scripts/monitor" | md5sum --check --status && compatible=1

     there are two files in the format of "md5_checksum path_to_file":

     45361157ef841f9a32a984b056da0564 /etc/rc.d/rc.docker
     9f0269a6ca4cf551ef7125b85d7fd4e0 /usr/local/emhttp/plugins/dynamix/scripts/monitor

     You can use md5sum to generate new checksums for those two files, like:

     md5sum /etc/rc.d/rc.docker

     It sounds like @Mainfrezzer's code already includes the new checksums for 6.12.4, but you can double-check yourself if you want. (There's a short sketch of regenerating both checksums after this list.)
  10. I'm holding out for RAM Disk compatibility before upgrading to 6.12.
  11. Just wanted to say thank you @mgutt! I had an earlier version of your RAM Disk installed. Wasn't even sure of the version, as it didn't have the version number in the comments. Must have been from a couple of years ago. In any case, after upgrading to 6.11 recently, I noticed that my Disk 1 would have frequent writes to it, which woke it up along with my two parity disks. This is despite me having emptied all contents from Disk 1, and all of my Dockers running from the SSD cache pool. Also, new writes should have gone to the cache pool and not the array. I spent hours watching iotop and dstat, and I was about to pull my hair out when I noticed that Disk 1 would only wake up when certain Docker containers were running (specifically DiskSpeed and HomeAssistant). On a whim, I looked to see if there was a newer version of the RAM Disk available, and found this thread. I updated the RAM Disk code, and voilà! No more disks waking up! Still not sure why certain Dockers were writing directly to the array or why it was always Disk 1, but I'm glad the new code fixed the issue.
  12. Awesome, thanks Jorge! Can I assume that this option is enabled by default? When did this option get enabled in Unraid? I found some posts from as recently as 2-3 years ago that still said TRIM is not supported on encrypted SSDs on Unraid.
  13. Does anyone know the latest on this? In theory it's possible to enable TRIM on LUKS-encrypted SSDs, but is it enabled on Unraid? https://www.guyrutenberg.com/2021/11/23/enable-trim-on-external-luks-encrypted-drive/ (There's a rough sketch of how to check for discards after this list.)
  14. Understood, thanks for the reminder 🙂 Turns out it's a bit easier than the video. Once you've emptied a drive using Unbalance, you just have to stop the array, and then change the disk type of the disk you just emptied to "XFS Encrypted", then start the array back up. Lastly, format the disk and that disk is converted.
  15. Is this 5-year-old guide from SpaceInvader One still the best method to encrypt an existing array?
  16. Is there anyone actively maintaining the Autofan plug-in? I've already presented the fix. Just need the maintainer to merge in the one-line code change.
  17. Hi @bonienl, is this the right channel to report a bug? If not, please point me in the right direction.

      I've been noticing an issue with Autofan in the last couple of months. Whenever all of my HDDs are spun down and only my NVMe cache drives are still active, Autofan gets confused, thinks there are no active drives, and shuts down all of my case fans. This causes my NVMe drives to get pretty hot. After digging through the Autofan code, I discovered the issue in function_get_highest_hd_temp():

      function_get_highest_hd_temp() {
        HIGHEST_TEMP=0
        [[ $(version $version) -ge $(version "6.8.9") ]] && HDD=1 || HDD=
        for DISK in "${HD[@]}"; do
          # Get disk state using sdspin (new) or hdparm (legacy)
          ########## PROBLEM HERE ##########
          [[ -n $HDD ]] && SLEEPING=`sdspin ${DISK}; echo $?` || SLEEPING=`hdparm -C ${DISK}|grep -c standby`
          ########## Fix is below ##########
          [[ -n $HDD ]] && SLEEPING=`hdparm -C ${DISK} |& grep -c standby`
          ##################################
          echo Disk: $DISK - Sleep: $SLEEPING
          if [[ $SLEEPING -eq 0 ]]; then
            if [[ $DISK == /dev/nvme[0-9] ]]; then
              CURRENT_TEMP=$(smartctl -n standby -A $DISK | awk '$1=="Temperature:" {print $2;exit}')
            else
              CURRENT_TEMP=$(smartctl -n standby -A $DISK | awk '$1==190||$1==194 {print $10;exit} $1=="Current"&&$3=="Temperature:" {print $4;exit}')
            fi
            if [[ $HIGHEST_TEMP -le $CURRENT_TEMP ]]; then
              HIGHEST_TEMP=$CURRENT_TEMP
            fi
          fi
        done
        echo Highest Temp: $HIGHEST_TEMP
      }

      Check out the line I marked ########## PROBLEM HERE ##########, specifically the middle condition (sdspin):

      [[ -n $HDD ]] && SLEEPING=`sdspin ${DISK}; echo $?` || SLEEPING=`hdparm -C ${DISK}|grep -c standby`

      "sdspin" is a shell script that runs hdparm -C on the NVMe device. Here's the contents of sdspin:

      # cat /usr/local/sbin/sdspin
      #!/bin/bash
      # spin device up or down or get spinning status
      # $1 device name
      # $2 up or down or status
      # ATA only
      # hattip to Community Dev @doron
      RDEVNAME=/dev/${1#'/dev/'}   # So that we can be called with either "sdX" or "/dev/sdX"
      hdparm () {
        OUTPUT=$(/usr/sbin/hdparm $1 $RDEVNAME 2>&1)
        RET=$?
        [[ $RET == 0 && ${OUTPUT,,} =~ "bad/missing sense" ]] && RET=1
      }
      if [[ "$2" == "up" ]]; then
        hdparm "-S0"
      elif [[ "$2" == "down" ]]; then
        hdparm "-y"
      else
        hdparm "-C"
        [[ $RET == 0 && ${OUTPUT,,} =~ "standby" ]] && RET=2
      fi

      If I run the command directly:

      # hdparm -C /dev/nvme0
      /dev/nvme0:
       drive state is:  unknown
      # echo $?
      25

      This is the same exit code that sdspin returns:

      # sdspin /dev/nvme0
      # echo $?
      25

      My cache drives consist of 2x Silicon Power P34A80 1TB M.2 NVMe drives. Apparently hdparm cannot get their power state, and because sdspin is looking for the word "standby", it never finds it. More importantly, the middle (sdspin) condition always sets $SLEEPING to sdspin's exit code, which is 25 in this case. And because 25 is not zero, the script thinks all disks are in standby mode (even though my NVMe drives are still active), so Autofan shuts off all case fans.

      My fix is simple: remove the middle condition:

      [[ -n $HDD ]] && SLEEPING=`hdparm -C ${DISK} |& grep -c standby`

      Because the last condition looks specifically for the word "standby" instead of just taking the exit code, it works: hdparm reports my NVMe drive's state as "unknown", which is not "standby", so the script correctly treats the NVMe drive as NOT in standby. I've locally modified the Autofan script and it's been running correctly for a few weeks.

      Unfortunately my local changes get wiped out every time I reboot the server, so I'd appreciate it if you or the author can update the script to fix this bug. Thanks in advance! (I've also put a sketch of a reboot-survival workaround after this list.)
  18. You know, I haven't actually tried that. Let me try and let you know if it doesn't work. Thanks!
  19. Thanks for the new docker! I'm still on the old nunofgs docker and want to migrate to your new version. However, I have a ton of plugins and customizations. What's the best way to bring them over to the new docker? Should I just copy everything over from /mnt/user/appdata/octoprint?
  20. For sure, when I do the migration, I'll post an update in this thread.
  21. That's what I'm using as well. I noticed the same thing - it hasn't been updated for a long time. There is another Octoprint docker in Community Apps that seems to be frequently updated. I plan to migrate over, but haven't had the time to do it. You can update all of the plugins EXCEPT for Octoprint itself. That will blow everything up, as you saw.
  22. The plugin has no GUI. Command line only.
  23. I recommend you use the plugin instead. I've given up on trying to get the Docker to work properly. It's also no longer being maintained while the plugin is.
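For the checksum discussion above (posts 7-9): a minimal sketch of regenerating the two checksums after an Unraid upgrade. The two paths are the ones from the check line quoted in post 9; run this on the Unraid version you're actually on and paste the output over the old values in the script:

    # Print fresh "checksum  path" pairs for the two files the RAM Disk script verifies
    md5sum /etc/rc.d/rc.docker /usr/local/emhttp/plugins/dynamix/scripts/monitor
    # Then replace the old checksums in the echo -e "..." | md5sum --check line with this output.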
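On the TRIM question in posts 12-13: a rough, untested sketch of how you could check whether discards actually reach a LUKS-encrypted SSD, and how they are enabled on a generic Linux box. The mapping name "md1crypt" and the device "/dev/sdX1" are placeholders rather than Unraid's real device names, and the persistent flag assumes LUKS2 with a reasonably recent cryptsetup:

    # Does the open LUKS mapping pass discards through?
    cryptsetup status md1crypt | grep -i discards      # "flags: discards" means allow-discards is active
    lsblk --discard /dev/mapper/md1crypt               # non-zero DISC-GRAN / DISC-MAX means TRIM can reach the SSD

    # Outside of Unraid's own tooling, discards are enabled at open time,
    # or stored persistently in the LUKS2 header:
    cryptsetup open --allow-discards /dev/sdX1 md1crypt
    cryptsetup refresh --allow-discards --persistent md1crypt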
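And for the Autofan bug in post 17: since the local edit gets wiped on reboot, one way to keep the patch alive until the plugin itself is fixed is to re-apply it at array start, e.g. with the User Scripts plugin. This is only a sketch; both paths below are assumptions (the Autofan script location in particular should be verified on your own box):

    #!/bin/bash
    # Sketch: run at first array start (e.g. via the User Scripts plugin).
    # Copies a locally patched Autofan script over the stock one so the
    # sdspin fix survives reboots.
    PATCHED=/boot/config/custom/autofan.patched                                # assumed location on the flash drive
    AUTOFAN=/usr/local/emhttp/plugins/dynamix.system.autofan/scripts/autofan   # assumed plugin script path

    if [[ -f $PATCHED && -f $AUTOFAN ]]; then
      cp "$PATCHED" "$AUTOFAN"
      chmod +x "$AUTOFAN"
    fi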