Joe L.

Members
  • Posts: 19009
  • Joined
  • Last visited
  • Days Won: 1

Joe L. last won the day on March 5, 2017

Joe L. had the most liked content!

4 Followers

  • Gender: Undisclosed


Joe L.'s Achievements

Grand Master (14/14)

19 Reputation

  1. Yes, and I fixed my original post. (And I've used "sed" for over 40 years.)
  2. Unfortunately, Google no longer operates code.google.com, so the release list file can no longer be accessed from them. A zip file of the entire unMENU source tree can be found at this link: https://code.google.com/archive/p/unraid-unmenu/source/default/source You can download the zip file, un-zip it, and have access to all the awk and shell files, and the package definition config files, within it. Joe L.
  3. Google no longer supports downloads of individual files from code.google.com. unmenu cannot be installed using those instructions. Joe L.
  4. I only (very) recently put 6.2 beta on my server. I did not have any issue pre-clearing the second parity disk I have just added to my array. The fix will need to wait until I add or replace one of the existing disks with a larger one. (Otherwise, I have no way to test the process.) Whatever the fix might be, it must be backwards compatible with the older releases of unRAID.

     In the interim, you can type this command to "patch" the preclear_disk.sh script. First change directory to the directory holding the preclear_disk.sh command. For most, it will be:

       cd /boot

     Then type (or copy from here and paste) the following:

       sed -i -e "s/print \$9 /print \$8 /" -e "s/sfdisk -R /blockdev --rereadpt /" preclear_disk.sh

     Your preclear disk script will be edited and should work with the two changes you mentioned. (Actually, each occurs in two places, so there are a total of 4 lines changed.) Joe L.
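If you want to see what that sed invocation does before running it against the real script, here is a small illustrative sketch. The dummy file contents are invented for the demo; only the two sed expressions are from the post above:

```shell
# Dry-run illustration of the patch on a throw-away file, so the real
# preclear_disk.sh is never touched. The dummy lines are invented; the
# sed expressions are exactly the ones from the post.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
awk '{ print $9 }' /tmp/stat_output
sfdisk -R /dev/sda
awk '{ print $9 }' /tmp/stat_output
sfdisk -R /dev/sdb
EOF

sed -i -e "s/print \$9 /print \$8 /" -e "s/sfdisk -R /blockdev --rereadpt /" "$tmp"

awk_fixed=$(grep -c 'print \$8' "$tmp")               # both awk lines changed
sfdisk_fixed=$(grep -c 'blockdev --rereadpt' "$tmp")  # both sfdisk calls changed
echo "patched $awk_fixed awk line(s) and $sfdisk_fixed sfdisk line(s)"
rm -f "$tmp"
```

Each expression matches in two places, which is why the post says a total of 4 lines change.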
  5. I agree that there is value in stress testing the drive and checking to make sure nothing is failing after the first few writes. That said, maybe this signals that a new plugin needs to be made that removes the clearing portion of the plugin and instead focuses entirely on stress testing. Leave the clearing entirely to the OS, since that's not an issue anymore. This should allow more cycles of stress testing without having to have that long post-read cycle (that verifies the drive is zeroed), meaning you can do more cycles faster... I think.

     I think you are missing a part of the equation. It is not only the stress introduced by the testing; the elapsed time is an integral part of the entire process.

     You are both missing an important part of the equation. Un-readable sectors are ONLY marked as un-readable when they are read. Therefore, unRAID's writing of zeros to the disk does absolutely nothing to ensure all the sectors on the disk can be read. (Brand new disks have no sectors marked as un-readable.) Sectors marked as un-readable are ONLY re-allocated when they are subsequently written to. That is the reason the preclear process I wrote first reads the entire disk and then writes zeros to it: it allows it to identify un-readable sectors, and fix them where possible. The entire reason for the post-read phase is that quite a number of disks failed when subsequently read after being written.

     If you rely on unRAID to write zeros to the disk and then put it into service, the first time you'll learn of an un-readable sector error is when you go to read the disk after you've put your data on it (or during a subsequent parity check). The new feature in this release of unRAID will help some to avoid a lengthy un-anticipated outage of their server if they had not pre-cleared a disk, and for that it is a great improvement. This improvement in unRAID 6.2 does not, however, test the disk's reliability in any way, nor identify un-readable sectors (since it only writes them, and does not read them at all).

     Additional discussion about the difference between the unRAID 6.2 initial zeroing of drives and the preclear process should continue in another thread... and not clutter up this thread in the announcement forum. Joe L.
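The read-then-write-then-read sequence described above can be sketched as follows. This is only an illustration against an ordinary file standing in for a disk, not the actual preclear_disk.sh logic; on a real drive, the read passes are what surface pending (un-readable) sectors, and the write pass is what triggers re-allocation:

```shell
# Illustrative only: a 4 MiB temp file stands in for the disk, so nothing
# real is touched. On an actual drive, step 1's read pass flags un-readable
# sectors, and step 2's write pass re-allocates them.
disk=$(mktemp)
dd if=/dev/urandom of="$disk" bs=1M count=4 status=none   # "factory" contents

# 1) pre-read: read every "sector" end to end
dd if="$disk" of=/dev/null bs=1M status=none

# 2) zeroing write
dd if=/dev/zero of="$disk" bs=1M count=4 status=none

# 3) post-read: confirm the whole "disk" now reads back as zeros
zeros=$(mktemp)
dd if=/dev/zero of="$zeros" bs=1M count=4 status=none
if cmp -s "$disk" "$zeros"; then
  verified=yes
fi
echo "post-read verify: ${verified:-FAILED}"
rm -f "$disk" "$zeros"
```

Skipping step 1 or step 3, as a zero-only clear does, is exactly what leaves un-readable sectors undiscovered until the data is already on the disk.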
  6. Actually, the preclear script logs its reports on the flash drive in /boot/preclear_reports. You might look there. If the report is not there, then it finished the clearing step, but not the post-read phase that checks whether the disk was successfully zeroed. Joe L.
  7. Ha! I think you need glasses if that's all the change you see. But I totally get the need for a better description than the change-log. I don't have the time right now to go into details, and I don't remember everything I did, but this is what I wrote earlier:

     I have added an adaptive depth level, to prevent cache_dirs from thrashing disks when they are otherwise occupied and the cache is evicted. I found the cache was often evicted, with the number of files I had, when the system became occupied with other things. I added the ability to adjust depth automatically based on whether scans are judged to cause disk access or not. It judges that a disk has been accessed during a scan if the scan takes a long time, or if any recent disk access was made (and no recent disk access was made before scanning). The purpose is to avoid the situations where cache_dirs will continuously search through my files, keeping disks busy all the time. Before, it was also rather difficult to tell if cache_dirs was 'thrashing' my disks; now it's quite clear from the log if logging is enabled (though the log is rather large at the moment). If disks are kept spinning for some consecutive scans, the depth is decreased, and a future rescan is scheduled at a higher depth. If the file '/var/log/cache_dirs_lost_cache.log' exists, then it will write a log that is easily imported into a spreadsheet (Excel), so it's easier to check whether it thrashes disks with the current settings. I also added the kill I mentioned and some other quite minor bug-fixes. If you need more, let me know, and I might supply more detail over Christmas. If you think it looks good and useful, I might do a clean-up run on the script. I haven't felt like spending more time on the script if nobody but me used it. Best, Alex

     No, not moved on... I just have precious little free time, so I cannot be as heavily involved as I was a few years ago (when I was not working). My servers are both built with out-dated hardware. I cannot contribute in the same way I did in the past. (One is an original server sold by Limetech, with IDE based drives; the second is newer, but incapable of handling virtualization.) I do follow the threads... and respond occasionally... Joe L.
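The adaptive-depth behaviour Alex describes might look roughly like this in shell. This is a hypothetical simplification for illustration only; the variable names and the 10-second threshold are invented, and the real cache_dirs logic is more involved:

```shell
# Hypothetical simplification of the adaptive-depth idea; NOT the real
# cache_dirs code. If the last scan took long enough that it must have hit
# the physical disks, reduce the scan depth and remember to retry deeper.
depth=9
min_depth=4
slow_scan_secs=10      # invented threshold: scans longer than this "hit disk"
retry_depth=$depth

last_scan_secs=14      # pretend the last `find -maxdepth $depth` took 14 s

if [ "$last_scan_secs" -gt "$slow_scan_secs" ] && [ "$depth" -gt "$min_depth" ]; then
  depth=$((depth - 1))           # back off: disks were likely kept spinning
  retry_depth=$((depth + 1))     # schedule a future rescan one level deeper
fi

echo "scan depth now $depth (will retry depth $retry_depth later)"
```

The point of the extra retry_depth bookkeeping is the behaviour described above: depth drops while the system is busy, then a deeper rescan is attempted once the cache has a chance to hold.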
  8. Or, the SATA cables are picking up noise from adjacent cables (adjacent power OR SATA cables). This often occurs when a user attempts to make their server look neat by bundling all the SATA cables together. Doing so puts into place a situation where induced noise is very likely. Therefore, cut the tie-wraps bundling the cables together. Yes, it looks less neat, but... you'll see far fewer noise-induced CRC errors. Joe L.
  9. Thanks for the info, I had never seen this. I am trying it now, although I think it is not the perfect choice for my case, since errors appear in different places on the HDDs.

     If the errors are in different places each time, it is more likely to be a memory problem, a disk controller problem, or a power supply problem. The very first thing to check is to run a memory test, preferably overnight (or at least several full passes). As often as not, a bad memory stick is the issue. Joe L.
  10. You are welcome. If you think about it, many of the newer (now native) features were originally implemented in unMENU, and unMENU was originally created to explore alternative and improved user-interfaces to unRAID. It has done exactly what it was created for. I am happy that it still offers substantial value after all these years. No, unMENU does not offer "docker" or "virtual server" features... but it certainly holds its own with almost everything else. (unRAID itself does not yet allow you to choose a vertical vs. horizontal menu bar... they still have some catching up to do.)
  11. First, no software (including Preclear) writes to the BIOS. This is actually a common problem with many motherboards. Whenever you change the list of installed drives in the system, the BIOS may decide to "help" you and reorder the boot order so that the most likely hard drive will be booted, which is usually NOT the USB drive you had configured! You did the right thing by going into the BIOS and correcting the boot order, making sure the right drive is booted, not what the BIOS *thinks* is the right drive.

      Thanks Rob - I agree in a sense, but I actually selected a "seen" USB bootable hard drive and it/they still failed. Maybe the BIOS still changed it to the cleared (not pre-cleared) hard drive, as it showed "no Bootable disc found". Still an interesting and "freaky" thing to witness. It worked fine until the PreClear "failed", then would not boot until it was reset. Dave

      Even though the pre-clear had failed (it detected it had not filled the disk as expected), it could have written what looks to the BIOS like a valid master-boot-record to the hard-disk being cleared. In other words, as RobJ said, your BIOS was trying to "help" you by choosing one of your hard-disks to boot from that it thought had a valid master-boot-record, and since none contain actual code to boot from, nothing would boot until you set the BIOS back to boot from the correct USB flash drive.
  12. Which is why I will NOT be upgrading to 6.1 unless this is made compatible or the features from this that I use are put into the unRAID GUI.

      We have the basic chicken vs. egg issue here. I do not typically upgrade my server to the latest version until it is out for a few days. Therefore, I have no way to test or make changes to unMENU. From what I've read, the /root/mdcmd shell command no longer exists. unMENU uses it to get the status of the array. /root/mdcmd was just an interface to /proc/mdcmd. You could try putting it back into place and see if everything starts working once more. You can re-create it by typing:

        echo 'if [ $1 == "status" ]; then' > /root/mdcmd
        echo ' cat /proc/mdcmd' >> /root/mdcmd
        echo ' else' >> /root/mdcmd
        echo ' echo $* >/proc/mdcmd' >> /root/mdcmd
        echo 'fi' >> /root/mdcmd
        chmod 755 /root/mdcmd

      Ha. I was just looking at this too and see you responded. I was thinking to create a file called /root/mdcmd that looks like:

        #!/bin/bash
        /usr/local/sbin/mdcmd $*

      Would that work also? If so, there should be a command in the go file so that if /root/mdcmd does not exist, it creates that file. This would allow the mdcmd command to be updated in a future release (not sure if it ever would be). Anyway, just my $0.02.

      If it moved, that would do it. As I said earlier, I'm not running the newer release, so I have had no opportunity to look around.

      Even easier is to type:

        ln -s /usr/local/sbin/mdcmd /root

      I just upgraded to the latest release of unRAID 6.1-rc5, and that link seems to be all that is needed to get unMENU working again. You can add a line to the config/go file to perform that link command each time you reboot.
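To see that a shim like the one above behaves as expected without touching a live server, here is a hedged sketch using throw-away temp paths in place of /root/mdcmd and /proc/mdcmd (the mdState line is invented for the demo):

```shell
# Demo of the mdcmd shim idea using temp paths instead of /root and /proc,
# so it is safe to run anywhere. The mdState value is made up.
work=$(mktemp -d)
printf 'mdState=STARTED\n' > "$work/proc_mdcmd"   # stand-in for /proc/mdcmd

cat > "$work/mdcmd" <<EOF
#!/bin/bash
if [ "\$1" = "status" ]; then
  cat "$work/proc_mdcmd"
else
  echo "\$@" > "$work/proc_mdcmd"
fi
EOF
chmod 755 "$work/mdcmd"

state=$("$work/mdcmd" status)     # read path: mirrors `mdcmd status`
"$work/mdcmd" check NOCORRECT     # write path: command lands in the "proc" file
written=$(cat "$work/proc_mdcmd")
echo "$state / $written"
rm -rf "$work"
```

The symlink approach (`ln -s /usr/local/sbin/mdcmd /root`) achieves the same thing with less moving machinery, which is why it is the easier fix on 6.1.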
  13. Which is why I will NOT be upgrading to 6.1 unless this is made compatible or the features from this that I use are put into the unRAID GUI.

      We have the basic chicken vs. egg issue here. I do not typically upgrade my server to the latest version until it is out for a few days. Therefore, I have no way to test or make changes to unMENU. From what I've read, the /root/mdcmd shell command no longer exists. unMENU uses it to get the status of the array. /root/mdcmd was just an interface to /proc/mdcmd. You could try putting it back into place and see if everything starts working once more. You can re-create it by typing:

        echo 'if [ $1 == "status" ]; then' > /root/mdcmd
        echo ' cat /proc/mdcmd' >> /root/mdcmd
        echo ' else' >> /root/mdcmd
        echo ' echo $* >/proc/mdcmd' >> /root/mdcmd
        echo 'fi' >> /root/mdcmd
        chmod 755 /root/mdcmd

      Ha. I was just looking at this too and see you responded. I was thinking to create a file called /root/mdcmd that looks like:

        #!/bin/bash
        /usr/local/sbin/mdcmd $*

      Would that work also? If so, there should be a command in the go file so that if /root/mdcmd does not exist, it creates that file. This would allow the mdcmd command to be updated in a future release (not sure if it ever would be). Anyway, just my $0.02.

      If it moved, that would do it. As I said earlier, I'm not running the newer release, so I have had no opportunity to look around.
  14. Which is why I will NOT be upgrading to 6.1 unless this is made compatible or the features from this that I use are put into the unRAID GUI.

      We have the basic chicken vs. egg issue here. I do not typically upgrade my server to the latest version until it is out for a few days. Therefore, I have no way to test or make changes to unMENU. From what I've read, the /root/mdcmd shell command no longer exists. unMENU uses it to get the status of the array. /root/mdcmd was just an interface to /proc/mdcmd. You could try putting it back into place and see if everything starts working once more. You can re-create it by typing:

        echo 'if [ $1 == "status" ]; then' > /root/mdcmd
        echo ' cat /proc/mdcmd' >> /root/mdcmd
        echo ' else' >> /root/mdcmd
        echo ' echo $* >/proc/mdcmd' >> /root/mdcmd
        echo 'fi' >> /root/mdcmd
        chmod 755 /root/mdcmd
  15. Since lime-tech is in release-candidate-2 of 6.1, I'd not expect new features, but instead just tiny bug-fixes so they can get to 6.1 final. (I can't speak for lime-tech, as I'm a customer, just like you, so it is always possible they would throw in something at the last moment... but I would look to a community plugin rather than something in 6.1 natively) Joe L.