bigrob8181
  1. I was able to get this to work yesterday with quite a bit of tinkering and figuring things out. NOTE: YOU WILL BE MESSING WITH INTERNAL UNRAID FILES. IF YOU BREAK THINGS, I AM NOT RESPONSIBLE!!! FOLLOW THIS AT YOUR OWN RISK!! These are the nuts and bolts of it. My /boot/config/go file now includes the following to make this all work:

     ```
     # Fonts
     #
     # Replace the ttyd binary and chmod
     mv /usr/bin/ttyd /usr/bin/ttyd_orig
     cp /boot/config/custom/ttyd/ttyd /usr/bin/
     chmod a+x /usr/bin/ttyd
     # Symbolic link the libs
     ln -s /boot/config/custom/ttyd/libs/libjson-c.so.4 /usr/lib64/libjson-c.so.4
     ln -s /boot/config/custom/ttyd/libs/libev.so.4 /usr/lib64/libev.so.4
     # Create custom terminal command
     mv /usr/local/emhttp/plugins/dynamix/include/OpenTerminal.php /usr/local/emhttp/plugins/dynamix/include/OpenTerminal.php.orig
     cp /boot/config/custom/ttyd/OpenTerminal.php /usr/local/emhttp/plugins/dynamix/include/
     chmod a+r /usr/local/emhttp/plugins/dynamix/include/OpenTerminal.php
     ```

     The first clue was this issue/commit for ttyd. On a separate machine I did the following: cloned the ttyd repo, then followed the "Build from source (debian/ubuntu)" instructions to build my own ttyd binary with the changes from the commit (I had to apply the changes manually). I also followed the instructions to build the custom inline.html using the comments from that commit. Note: this has to happen before you run the yarn run start or yarn run build commands in each terminal window, and every time you run yarn run start it wipes the dist dir that yarn run build produces.

     I moved the dist folder (renamed to index) and the custom-built ttyd binary to the Unraid box under /boot/config/custom/ttyd/. I figured this would be a good place for it since it survives reboots; just make sure it's under /boot/config and the go file accurately reflects the path. After getting the files in place, I ran chmod a+x on the binary and tried to run it, since it wouldn't start, to see what libraries were missing. I was missing libev.so.4 and libjson-c.so.4, so I copied those files from my build machine into a libs folder under /boot/config/custom/ttyd/. After that the binary ran fine.

     The last step was to hook the new binary with the custom arguments into Unraid. The call to the binary is in /usr/local/emhttp/plugins/dynamix/include/OpenTerminal.php. I copied this file into the /boot/config/custom/ttyd/ folder and changed line 54 to:

     ```
     if ($retval != 0) exec("ttyd-exec --index=/boot/config/custom/ttyd/index/inline.html --client-option fontFamily='Fira Code Retina Nerd Font Complete' -i '$sock' bash --login");
     ```

     Note: use the font family name, not the file name or path. I had to play with this to get the correct naming scheme; I probably could have run fc-list on the build machine to see it. This name just happens to be the font I used. After figuring everything out, I made copies of the original files, moved my new ones into place, and symlinked the libraries (the code listed in the go file above). I have yet to reboot the machine to verify it survives a reboot. I think it should; I will update the post to correct any problems if I find it does not.

     To get my terminal layout, the following is added to a .bash_profile that is copied on boot (from the go file) to /root/:

     ```
     OS_ICON= # Replace this with your OS icon
     PS1="\n \[\033[0;34m\]╭─────\[\033[0;31m\]\[\033[0;37m\]\[\033[41m\] $OS_ICON \u@\h \[\033[0m\]\[\033[0;31m\]\[\033[0;34m\]─────\[\033[0;32m\]\[\033[0;30m\]\[\033[42m\] \w \[\033[0m\]\[\033[0;32m\] \n \[\033[0;34m\]╰ \[\033[1;36m\]\$ \[\033[0m\]"
     ```

     To restore Unraid's defaults after applying these changes, do the following:

     ```
     # Restore the ttyd binary to original
     rm /usr/bin/ttyd
     mv /usr/bin/ttyd_orig /usr/bin/ttyd
     # Remove symbolic links
     rm /usr/lib64/libjson-c.so.4
     rm /usr/lib64/libev.so.4
     # Restore the original Terminal command
     rm /usr/local/emhttp/plugins/dynamix/include/OpenTerminal.php
     mv /usr/local/emhttp/plugins/dynamix/include/OpenTerminal.php.orig /usr/local/emhttp/plugins/dynamix/include/OpenTerminal.php
     ```

     Then remove the added code from the go file.
  2. Ok, we are back up. The funny thing is, after the rebuild onto the new drive, it was unmountable ("no filesystem"). I checked the drive and there was a bad primary superblock. I allowed the check to correct it, started the array with the drive in the disk4 slot, and everything seems to be there. I am going to keep the bad disk stored in case I do run into any corrupted files, so that I may be able to recover them. Everything I have checked looks good, though. Thank you all for assisting.
  3. Would the corruption be only in the failing drive's files, or across the array? Since I still have the data, I would think I should be able to recover from any corruption with a simple rsync command, unless it sits in a bad spot on the failing drive.
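  The rsync comparison mentioned above can be sketched as follows. The mount points are placeholders (the failing disk mounted read-only outside the array, e.g. via Unassigned Devices, and the rebuilt disk in the array) and should be adjusted to match the actual setup:

  ```shell
  # Placeholder mount points; substitute your own.
  SRC=/mnt/disks/old_disk4   # failing disk, mounted read-only
  DST=/mnt/disk4             # rebuilt disk in the array

  # Dry run with full checksums: lists files whose contents differ,
  # without copying anything (-n = dry run, -c = compare by checksum).
  rsync -avcn "$SRC/" "$DST/"

  # After reviewing the list, drop -n to copy only the files that differ:
  # rsync -avc "$SRC/" "$DST/"
  ```

  The -c flag forces byte-level comparison instead of the default size/mtime check, which is what actually catches silent corruption from bad sectors.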
  4. I should add that the good drive only got to about 5% zeroed again, so roughly 20 to 30 minutes with the array up and no disk4.
  5. I may have done a New Config both for removing the zeroed drive AND for moving the zeroed drive to slot 4. I understand now what I should have done. Thoughts on doing a New Config with the failing drive back in its original location, then start, stop, and replace it with the good drive? Any concerns with parity on the other drives? If I need to do an rsync between the two drives afterward to ensure all the data is correct, that's fine. At this point I still have the data; I'm just looking for the quickest and most complete path forward. Sorry for misunderstanding and causing a mess, although not a complete disaster.
  6. For the time being, I canceled the disk clear and shut down the server. I figure I need to protect what's there until I get a response.
  7. Ok, so I attempted what was instructed above and must have messed something up. Disk6 is now in the disk4 slot, and Unraid is re-zeroing it. It is NOT emulating the failing disk. Parity is marked valid (although I am concerned: since the data is not being emulated, I would think something across the disks could get messed up). No data from the failing disk4 is on the array, although it is still on the disk itself. Is parity being marked valid, when it clearly isn't, going to be problematic? I suppose I need to let the zeroing finish again and then use unBALANCE to move the data back onto the array. Is this the correct way to proceed?
  8. That's correct. I was adding to increase array size, but with disk 4 failing I decided it would be better suited replacing it. I have a rack server case being delivered today so I decided whatever I do, it'll have to start after the case migration.
  9. Diagnostics are attached. I was getting a lot of notifications this last week, but I have not gotten any additional notifications or emails since I accidentally rebuilt the failing drive onto itself. Disregard that: right after I sent this, I got a notification of a disk error, and the counter is at 4. I suppose that makes it an easy decision to just swap it out and expand the storage on Wednesday. I suppose I will have to go dual parity in the future. 👍 zeus-diagnostics-20220627-1134.zip
  10. Ok, so the rebuild of the failing drive is finished, and at the moment there are no errors in Unraid. The SMART data is as follows:
      Reallocated sector count: 525
      Current pending sector: 56
      Offline uncorrectable: 266
      UDMA CRC error count: 14
      With no more errors in Unraid, would you suggest replacing the drive, or seeing how things play out? I still have the one drive zeroed and ready to drop in, and I also have an additional 8TB coming in the mail on Wednesday. If I do not replace the drive, I would consider using the drive coming in the mail as a second parity, and the zeroed drive as additional storage space in the array. I am not interested in purchasing another drive in the near future to replace the failing one, but I also don't want to trash a drive that may still be good.
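  The counters quoted above come from the drive's SMART attribute table. A sketch of pulling just those attributes, assuming smartmontools is installed and /dev/sdX is a placeholder for the failing disk:

  ```shell
  # Dump the SMART attribute table for the drive (requires root):
  # smartctl -A /dev/sdX > smart.txt

  # From the saved output, extract the attribute names and raw values
  # discussed above ($2 is the attribute name, $NF the raw value):
  awk '/Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error_Count/ {print $2, $NF}' smart.txt
  ```

  Watching whether Current_Pending_Sector keeps climbing between runs is what distinguishes "stabilized after the rebuild" from "still failing".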
  11. Ok, well, 14 hours to go then. Currently at:
      Reallocated sector count: 290
      Current pending sector: 936
      Offline uncorrectable: 266
      UDMA CRC error count: 14
      I do have a new 8TB drive coming in on Tuesday that I can just drop in. It just needs to make it until then.
  12. I may have already started down another path by accident, and I'm not sure how to proceed. I stopped the array and removed the failing device from it, then started the array. I then stopped the array with the hope of moving the zeroed drive into slot 4. No dice (two missing drives). Trying to put things back to normal, I added drive 4 back and started the array, and now Unraid is doing a rebuild onto the failing drive (thinking it's new for some reason). What is the path forward now?
  13. Looking for a bit of advice. Backstory: last week I replaced my 8TB parity drive with a new 14TB drive (the old parity drive is still good). Around the same time I noticed a different drive (disk4) was beginning to have errors, but nothing that really worried me. (I've never lost a drive, so I wasn't very sensitive to the issues.) Yesterday I installed an LSI HBA and moved all drives to it, installed a second cache SSD, and brought up the array. I began zeroing the old parity drive to add more space to the array (added as disk6). Yesterday in the late afternoon, the problematic disk (disk4) started spitting out tons of errors. Unraid has not marked it failed yet, but all the telltale signs are there (I've been reading the forums since I first encountered the issues):
      Reallocated sector count: 102
      Current pending sector: 2216
      Offline uncorrectable: 266
      UDMA CRC error count: 14
      A SMART short test errored out. I just want to take the disk that is zeroed (disk6, not yet formatted) and use it to replace the failing disk (disk4), but since I've added it to the array, despite it not yet being formatted and whatnot, Unraid sees it as an additional missing disk if I try to move it. Parity "should be" good at this point; it checks monthly at the beginning of the month, which is in a few days. Path forward?