landS

Members
  • Posts: 823

Everything posted by landS

  1. Nice! Can this be done without removing the Unraid USB, by chance?
  2. I am awaiting 2 items before this upgrade: 1) NFS file share functionality, and 2) a BIOS update on my Supermicro X10SRA-F, which requires a paid-for Out-of-Band node license key (purchased from WiredZone; delivery pending). Supermicro's BIOS update 'policy' is oddly hostile.
  3. A huge thanks to @gfjardim & co. for keeping this alive and kicking. The ability to preclear, post-clear, and stress check (new and antiquated drives alike) is, in my opinion, the most critical tool this platform provides. This plugin (which usually works :) ) is far easier to use than the old Joe script (which requires a patch command and manual terminal commands).
  4. Thanks @rookie for this follow-up post, as I just ran into the exact same issue on what had been a stable system. Turns out I had a GUI session open in a Chrome Beta tab on my Android phone. We had storms, which triggered my UPS to shut down the server. When it came back up, I too kept getting the Windows failed-to-boot message when passing through the GPU. Restarting my phone actually allowed the VM to start up as it has for --- well over a year --- trouble free.
  5. I can confirm that after rebooting and dozens of sessions via the web terminal... typing exit allows repeat functioning. @limetech, can the web terminal send an 'exit' command upon closure to help dummy-proof it (as I look in the mirror)?
  6. @gfjardim thank you! Preclear is, to me, a critical component of sticking with Unraid. May I buy you a beer or two? (PayPal, Bitcoin, etc.?)
  7. With 8 hot-swappable HDD bays and 2 5.25" bays, that Silverstone CS380 looks very lovely... doubly so for only $125. Then up top, order 1 SY-MRA55006 drive bay adapter (1 x 3.5" + 1 x 2.5" HDD/SSD, USB 3.0, 5.25" bay), and also order 1 Icy Dock CPo24 (https://www.icydock.com/goods.php?id=237) drive bay adapter (1 x 3.5" + 1 slim optical). This will let you have 2 3.5" hot swaps (to finish using up your 6 TB Dell disks), give you 1 2.5" SSD slot for cache/VM host/etc., 2 USB ports up front, and a slim optical drive to pass through to your VM via an add-on card or USB adapter.
  8. In your shoes I would run *at least 1 pass of the* preclear script per drive, which will grant you some significant stats on the drive's health. If using these, also consider bumping up to dual parity if you are not already. At work we have 2 Dell servers that came loaded with Dell-branded Seagate Constellation drives; 1 server runs Windows Server, the other Unraid. On the Unraid server, preclear detected about a 25% bad-drive rate, and those drives were replaced under warranty. Since deployment a few years ago, the Unraid server has suffered 1 drive failure, while the Windows server has been down multiple times for multiple drive failures. YMMV - and 1 shipment of preloaded servers <> the entire population.
  9. Thanks @nuhll. I beat on this one for a while, but alas, my ignorance won the battle.
  10. Nice! Time for me to dig around, as I'd like to subscribe to these. The ability to detect early bathtub failure, to stress check used drives, and to fully wipe/stress decommissioned drives is very important. The preclear script, then plugin, and now (sadly) back to the script has been my preferred method for many years now!
  11. Yes I do - I specifically use Unraid preclear (historically the preclear script, more recently the plugin) to stress test all HDDs prior to using them in any other computer... and use preclear to wipe any disks going out of commission... both in my personal life (friends & family) and at work. It has saved us from a couple handfuls of early bathtub failures.
  12. SSH into the server, run who -aH, find the PID of the pts/# shown in the event log immediately after the web terminal failure, then kill PID#... which doesn't work, as who -aH immediately shows the same pts in use again. Also tried Shell in a Box... but it throws the same issue. What appears to be the problem is that root/password is not being accepted in Shell in a Box / the web terminal after x number of uses without a full server restart... but it works fine via SSH [email protected]
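The PID hunt described above can be sketched as a small shell helper. Note the `pts/9` value matches the failure reported in these posts, but the sample line, PID, and host below are made up for illustration; on a live server you would feed it real `who -u` output.

```shell
# Extract the PID column for a given pts from `who -u` style output.
# (GNU `who -u` prints: NAME LINE DATE TIME IDLE PID COMMENT.)
pid_for_pts() {
  awk -v tty="$1" '$2 == tty { print $6 }'
}

# Sample line shaped like `who -u` output (PID and host are invented):
sample='root     pts/9        2018-02-24 14:49   .         25551 (tower)'
pid=$(printf '%s\n' "$sample" | pid_for_pts pts/9)
echo "$pid"   # prints 25551

# On a live server, to kill the stuck web-terminal session:
#   kill "$(who -u | pid_for_pts pts/9)"
```

As the post notes, killing the pty did not actually clear the underlying login failure here, so treat this as diagnosis rather than a fix.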
  13. I am experiencing the same issue. I have been using the web terminal to run MC, then just closing the window via the little X when done. After a number of sessions, I hit the exact same issue as reported by @nomisco. My log shows: Feb 24 14:49:40 Tower login[25551]: ILLEGAL ROOT LOGIN on '/dev/pts/9' Each occurrence is always on pts/9. SSH login works just fine, but lacking the knowledge of how to troubleshoot, only a reboot appears to reset the issue. I live in Chrome, but installed Firefox to check - and the issue persists in a virgin browser... so it's not a cache niggle. My SSH keys are not corrupted / zero bytes, as per: ls -la /boot/config/ssh Thoughts, crew?
  14. Preclear 6.4.1 confirmations - using a 2 TB Seagate IronWolf as a guinea pig.

     Patched plugin works fine! Manually installing the forked plugin @sureguy shared works great - just note that the status on Main\UD takes a bit to update; jumping over to Plugins/Preclear loads the preclear status immediately. https://raw.githubusercontent.com/dohlin/unRAID-plugins/master/plugins/preclear.disk.plg @mods - any chance this forked plugin link can be noted on the first/top post? …

     Patched script works fine! Unzip Joe L.'s script into the root flash folder (https://lime-technology.com/forums/topic/2732-preclear_disksh-a-new-utility-to-burn-in-and-pre-clear-disks-for-quick-add/), then use the terminal to patch it: cd /boot then sed -i -e "s/print \$9 /print \$8 /" -e "s/sfdisk -R /blockdev --rereadpt /" preclear_disk.sh works just fine. Side note: from /boot, preclear_disk.sh -l returns all of the disks available to preclear, after a single string print error (no big deal). In screen (or screen -r), cd /boot and preclear_disk.sh -A /dev/sdX runs the preclear script to completion, and the 3 output reports in the flash/preclear reports folder look great.
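The sed patch in the post above can be exercised safely on a scratch file before touching the real script. The two input lines below are stand-ins that only mimic the patterns the sed expressions rewrite, not excerpts from preclear_disk.sh; on the server the real target is /boot/preclear_disk.sh.

```shell
# Dry-run of the 6.4.1 preclear patch on a throwaway file.
scratch=$(mktemp)
printf 'awk "{print $9 }"\nsfdisk -R /dev/sdX\n' > "$scratch"

# Same two substitutions as the patch: fix the awk field index and
# swap the removed `sfdisk -R` for `blockdev --rereadpt`.
sed -i -e "s/print \$9 /print \$8 /" \
       -e "s/sfdisk -R /blockdev --rereadpt /" "$scratch"

cat "$scratch"
# awk "{print $8 }"
# blockdev --rereadpt /dev/sdX
rm -f "$scratch"
```

Once the dry run looks right, the identical sed line is safe to point at /boot/preclear_disk.sh.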
  15. @gfjardim any interest in a bounty to restore the preclear functionality within current Unraid? The patched Joe L. Script works, but this plugin is sublime.
  16. I ran Joe L.'s script with Joe L.'s patch on the work machine, on 6.4.1. It was successful: the 3 preclear reports looked good and the screen session looked fine. I can confirm @sureguy's finding - however, immediately after the line 236 message follows a list of the disks that can be precleared.

     root@LDB1:~# cd /boot
     root@LDB1:/boot# preclear_disk.sh -l
     ====================================1.15
     Disks not assigned to the unRAID array (potential candidates for clearing)
     ========================================
     ./preclear_disk.sh: line 236: strings: command not found
     /dev/sde = ata-ST2000NM0033-9ZM175_Z1X3JTYJ
     root@LDB1:/boot#

     root@Tower:~# cd /boot
     root@Tower:/boot# preclear_disk.sh -l
     ====================================1.15
     Disks not assigned to the unRAID array (potential candidates for clearing)
     ========================================
     ./preclear_disk.sh: line 236: strings: command not found
     /dev/sdi = ata-ST2000VN004-2E4164_Z523T4NK
     root@Tower:/boot#
  17. Thanks Trurl. This machine is in our work environment and the replacement disk has to go through sourcing. The other disks had enough capacity to hold the data, & disk2 actually had little data on it, so I thought moving the disk2 data to the remaining disks and then running a new config would be better in the interim, as then the entire array would be protected from another disk failure. Leaving the data emulated while we await sourcing to deliver the replacement would not allow the data to be protected from any further failures. Given that the disk was passing all SMART tests, that the data was fully accessible, and that no physical changes had occurred, I was very interested in WHAT caused the disk to get knocked out of the array. Given that the disk has passed preclear in another backplane slot, I am fairly confident that the backplane slot is to blame. Tomorrow I am going to run a preclear on the suspect backplane slot.
  18. So Disk2, which has been in use for a few years now, was kicked offline in the array... but the disk has no negative SMART report items. I moved Disk2 to another slot on the backplane, set the array to use NONE for disk2, used MC to move the emulated Disk2 data to disk3 and disk4, then began a preclear on Disk2 (which was no longer physically assigned to the array). Running a preclear on this disk has given me the confidence that the physical port of my backplane has faulted and not the physical disk. To the 6.4.1 naysayers... Preclear is a critical component of Unraid. It is great to have as a plugin, but I will use screen and the terminal if the Joe L. script is all we are allotted. I personally do not feel comfortable just tossing a bare drive into the array without a few cycles of passes... and have had a goodly portion of drives fail a preclear before ever touching my array. In addition, any data drive that is taken OUT of commission has a preclear run on it to destroy data prior to recycling or sale --- so long as it is physically able to run.
  19. ... and back to the script from the plugin. Thanks for leaving this up, Joe. (Though I miss the plugin greatly already!)
  20. Thanks Frank1940 - I need to run preclears and this saved me a lot of digging to re-enable Joe's script. Fingers crossed that this plugin will live again in the near future! I feel the following is too important not to post here as well: So Disk2, which has been in use for a few years now, was kicked offline in the array... but the disk has no negative SMART report items. I moved Disk2 to another slot on the backplane, set the array to use NONE for disk2, used MC to move the emulated Disk2 data to disk3 and disk4, then began a preclear on Disk2 (which was no longer physically assigned to the array). Running a preclear on this disk has given me the confidence that the physical port of my backplane has faulted and not the physical disk. To the 6.4.1 naysayers... Preclear is a critical component of Unraid. It is great to have as a plugin, but I will use screen and the terminal if the Joe L. script is all we are allotted. I personally do not feel comfortable just tossing a bare drive into the array without a few cycles of passes... and have had a goodly portion of drives fail a preclear before ever touching my array. In addition, any data drive that is taken OUT of commission has a preclear run on it to destroy data prior to recycling or sale --- so long as it is physically able to run.
  21. I am now 30 minutes into a parity check and the hottest drive is 46°C. Fans at 100% are louder than the originals but tolerable; I would not want to sit next to these at 100%, though. If I turn them down even slightly, they are nearly the same perceived volume as the old ones and the drives settle at 48°C. For now I am skipping the Noctua PWM speed controller and powering both via a SATA-to-4-pin PWM fan power adapter. FANTASTIC recommendation, garycase. Thank you!
  22. Lovely. Installed the original fans back on the headers, and the results are 1 fan at PWM 90 and the other at PWM 127... under full disk load. 255 is the max PWM value, so this explains why I saw 700 RPM on 1 fan under IPMI... and the jump in disk temps. Back to running the fans directly from the SATA adapter for now.
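For reference, IPMI reports PWM on a 0-255 scale, so the fan values in the post work out to roughly these duty cycles (a quick sketch using only the numbers mentioned above):

```shell
# Convert raw PWM values (0-255 scale) to an approximate duty-cycle %.
for pwm in 90 127 255; do
  awk -v p="$pwm" 'BEGIN { printf "pwm %3d -> %.0f%% duty\n", p, p / 255 * 100 }'
done
# pwm  90 -> 35% duty
# pwm 127 -> 50% duty
# pwm 255 -> 100% duty
```

At roughly 35% duty the slower fan sitting near 700 RPM is consistent with the higher disk temps reported.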
  23. We use a PowerEdge at work, and in order for Unraid to see the disks, we had to use an Unraid-supported card. Even if it comes with an H310 PERC, you'll need to flash it to LSI IT mode.
  24. Oi! Would you be so kind as to share the magick it takes to accomplish such a feat! Or the terminal command in lieu of that?
  25. I used IPMI, where I noticed the fan speed running at low RPM. The speeds plugin gave me no love.