Everything posted by Frank1940

  1. It will be cleared, NOT tested! If you want to test the drive, you will have to install the preclear plugin. The preclear plugin will test and clear the drive so that when you do add it to the array, unRAID will recognize that it has been cleared and will proceed directly to formatting. Formatting takes a few minutes depending on the size of the disk being added; my 3TB drives usually take about two to three minutes, and the disks are then online and ready for use.
  2. Port 80 is the default for http and www traffic. (https is on port 443) See here: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?&page=2 If you restore the line in the go file to the default and reboot, everything should be fine from the GUI standpoint. If you are still having issues with the plugins, you should probably either (1) start a new thread about your problem or (2) post up a message in the support threads for the plugins/dockers involved.
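     For reference, the stock go file contains little more than the line that starts the web GUI. This is only a sketch from memory (the exact contents can vary between unRAID versions), but the default looks roughly like this:

         #!/bin/bash
         # /boot/config/go - runs once at boot; the stock file just starts the management GUI
         /usr/local/sbin/emhttp &

     Anything extra that was appended to that emhttp line (a custom port, for example) is what would have moved the GUI off port 80.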
  3. You might want to read the last few pages of the support thread for the preclear plugin. I seem to recall that you are not the only one with this issue.
  4. Yoda is there for me. I am running 6.3.5 on my Local Master server and using Firefox ESR 52.2.1. (I am using this release as the state-of-the-art one does not display the check boxes!)
  5. In the back of my mind, I seem to recall that a reboot is necessary to re-enable telnet after it has been disabled. I was going to suggest modifying the Help system to reflect that, but when I checked, I found that the suggestion had already been incorporated!
  6. And disasters (like fire, flooding, earthquakes and theft) are far more likely to result in total data loss than 'bitrot'! You should be providing for protection from them before worrying about the almost infinitesimal danger of losing data from bitrot.
  7. You can probably find the beginnings of a backup server at a garage sale. The MB and CPU requirements are rather light for a strictly NAS server. (If you want dual parity and fast parity check speeds, you need a CPU no older than about 2015. But slow parity checks on a backup server are not nearly the issue that they can be on a production server...)
  8. Well, from what I have seen of the BackBlaze data, that seems to be the case. The larger capacity drives seem to have approximately the same failure rates as smaller capacity drives. But the spread of the actual failure percentages (between both manufacturers and individual drive models) might be concealing some real difference. (Remember, we are dealing with very low failure rates in most cases anyway.) However, if there is any significant statistical difference, it would be less than a couple of percent at the absolute maximum. There is another factor to be considered. BackBlaze appears to retire their servers on a regular basis as they fill up with data. Apparently, they scrap all of the drives in the server at that point. While most of us don't do this, many folks have thought about the problem of small drives, limited case space for additional drives, and the relentless need for additional storage space. One approach is to have one or more large-capacity parity drives and to replace the older, smaller data drives when additional storage becomes necessary. As part of this plan, they also replace any failed drive with a drive that is at least as large as the parity drive. This way, they avoid having an array filled with drives nearing the end of the 'bathtub' curve and minimize the number of drives in the array at the same time.
  9. You might want to read this: As a further comment on backups, I would point out that most thoughtful people only back up (offsite) those files which would be impossible to replace by any method. That would be personal financial records, personal photographs, and other items such as these. Most media files are available from other sources, and while obtaining them may be time-consuming, it is doable. I rather doubt that you have 100TB of personal media files that you have taken yourself unless you could afford that yacht. BUT I would bet you probably have at least 300GB of personal files that you could never replace in the case of some catastrophic event involving the building where your server is. It is those files you should be most concerned about, and for them you should develop a backup strategy you are comfortable with to safeguard against their loss.
  10. I agree with you here. The point is that the drive should not be giving out bad data (or its 'best guess' at what it thinks the data is); it should just refuse to deliver any data that is suspect and give a drive failure error instead. So the issue is 'seen' by the user as a drive failure and, depending on the OS, the message could be as simple as a CRC error. That is the $64,000 question! I have a friend who says that he has 30-year-old home-recorded VHS tapes that are as good as the day they were made. And there are other folks who lost their home-made videos of their kids in about ten years. All magnetic recorded media will eventually fail... All CD/DVD/BluRay ROM media will fail... All CD/DVD/BluRay writable media will fail even sooner... I have seen statements that the late twentieth century and much of the twenty-first will become known to future historians as the era without a written archive because of this problem.
  11. If you ever find a case of bit rot, PLEASE post up all about it. Personally, I think it is about as likely to occur as an asteroid the size of Rhode Island hitting your computer. Modern hard drives expect that not all of the data can be read every time the heads pass over it. They have multiple levels of error detection and correction built in to accommodate this expectation. If the data can't be recovered through these procedures, they are supposed to respond with an error at that point. (And the read operation becomes much slower as more data is read and more calculations are made in the attempt to reconstruct the data.) The only way a case of bit rot should occur would be if some combination of errors were not detected properly and the correction routines returned incorrect data as the proper data....
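     If you do want to watch for it yourself, the usual approach is simply to keep checksums of your files and re-verify them from time to time. A minimal sketch (the share name and manifest location here are just examples, not a recommendation of any particular tool):

         # Build a checksum manifest for a share (run from the server console)
         find /mnt/user/Photos -type f -exec md5sum {} + > /boot/photos.md5
         # Re-check the same files later; anything that silently changed is reported as FAILED
         md5sum -c --quiet /boot/photos.md5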
  12. If you think it is an out-of-memory issue, and since you seem to have a lot of memory installed in your server, you might try this: 1-- Install the Tips and Tweaks plugin. 2-- On the 'Tweaks' tab, reduce "Disk Cache 'vm.dirty_background_ratio' (%):" to 2 and change "Disk Cache 'vm.dirty_ratio' (%):" to double that value (4). These values are the percentage of RAM set aside for this function (delayed writes to disk), and the defaults have been the same forever. While they may have been good values for systems with a few MB of RAM, they are probably way too high for today's configurations!
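     As far as I know, the plugin is just a convenient front end for the standard Linux kernel tunables, so you can also check or try the values from the console (the numbers below are the ones suggested above):

         # Show the current settings
         sysctl vm.dirty_background_ratio vm.dirty_ratio
         # Apply the suggested values (these last until the next reboot)
         sysctl -w vm.dirty_background_ratio=2
         sysctl -w vm.dirty_ratio=4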
  13. This is not a real answer to your question, but a few years ago I added the following to my go file (for what reason, I don't recall):

         # resize log partition
         mount -o remount,size=384m /var/log

      I seem to recall that it was done to increase the space available for log storage.
  14. You have to have the old key file in the config directory on the new Flash Drive for the auto update process to work.
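     A minimal sketch of that step, assuming you still have a backup of the old flash drive (the backup path below is made up; the destination is the config folder on the new flash drive):

         # Copy the old registration key onto the new flash drive
         cp /path/to/old-flash-backup/config/*.key /boot/config/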
  15. These are the files to get the system up and running as a basic NAS box with your old settings. You should not copy over super.dat unless you are absolutely positive that you haven't replaced or changed out any disks!!!!! Don't throw away or write to the old Flash Drive until you have the new one working to your satisfaction.
  16. See here for getting a new key file for that new Flash Drive: https://lime-technology.com/replace-key/
  17. Describe your issue(s) in detail: what you are doing, how you have your shares set up --- particularly with regard to security settings --- and exactly what error messages you are getting and what you were doing when you got them.
  18. What version were you running that was successful? Apparently, the latest version available from PassMark Software is MemTest86 V7.3. Perhaps you should make a 'bug' report in the Defect Reports section of this forum and point out your issues and what you found.
  19. Check and see if there is a later version of memtest. I believe there have been problems in the past as new hardware comes on the market, and the program has had to be updated to account for changes in the way it functions. EDIT: Plus you should probably read this: https://www.extremetech.com/computing/251499-major-hyper-threading-flaw-can-destabilize-intel-cpus-based-kaby-lake-skylake
  20. I would suggest that you post the specs of your MB, CPU, and memory modules (manufacturer and model numbers). Also, have you checked your MB manufacturer's website to see if your memory modules are on their compatibility list? You can also check the RAM maker's site for their recommendation for your MB. Have you googled to see if anyone else has had issues with the MB/memory combination?
  21. Many gurus recommend running it for 24 hours. And it should be without memory errors! If you are getting errors, I would be checking voltages, frequency, and timing against the specs for your memory. I would also check that they are matched sets. I seem to recall that if you have all four slots filled, you do have to be a bit more careful about having identical sticks in each slot.
  22. It has been in version 6.4.0-rc3 and higher. See here:
  23. Try googling LSI 9211-8i P20 firmware update
  24. My experience was that it depends on file size. Small files require more housekeeping overhead (and disk head movement) for the amount of data being moved.
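     If you want to see the effect for yourself, a rough test is to copy the same total amount of data once as many small files and once as a single large file (the paths and sizes below are just examples):

         # Create 1000 x 1MB files and one 1000MB file of random data
         mkdir -p /mnt/disk1/test/small /mnt/disk1/test/big
         for i in $(seq 1 1000); do dd if=/dev/urandom of=/mnt/disk1/test/small/f$i bs=1M count=1 status=none; done
         dd if=/dev/urandom of=/mnt/disk1/test/big/f bs=1M count=1000 status=none
         # Time the two copies to another disk; the many-small-files copy will be noticeably slower
         time cp -r /mnt/disk1/test/small /mnt/disk2/test-small
         time cp -r /mnt/disk1/test/big /mnt/disk2/test-big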