
WizADSL

Members · Posts: 154

Everything posted by WizADSL

  1. You should probably post a full diagnostics file rather than just the syslog. Looking at the syslog, something on your system appears to be causing a crash and may be related; I'll leave that to others to analyze. As for the fans, one possible cause: as you may know, these servers are designed to be used with the vendor's (in this case HP's) certified storage devices. If you are using ordinary off-the-shelf drives, the system may be unable to read the drive temperature and will run the fans at high speed as a result. If that is the case, you may also see some kind of complaint during POST.
  2. I would like a more precise way to set the warning and critical disk utilization thresholds. The current system of 1-100% was fine when disks were much smaller, but with 14TB drives on the market now, setting the threshold to 99% would still require ~140GB free to avoid an alert. I would suggest either allowing a decimal percentage (99.75%, for example) or using an absolute value in MB.
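Since the numbers are the whole argument here, a quick sketch of how much free space each threshold leaves on a 14 TB drive (decimal units, as drive vendors use):

```python
# Free space a utilization threshold still demands on a 14 TB drive.
# 99% is today's coarsest near-full setting; 99.75% is the proposed
# decimal alternative from the post above.
disk_gb = 14 * 1000
for pct in (99, 99.75):
    free_gb = disk_gb * (100 - pct) / 100
    print(f"{pct}% threshold -> alert only once free space drops below {free_gb:g} GB")
```

At 99% the alert cannot fire until free space is under 140 GB; at 99.75% that shrinks to 35 GB, which is the kind of granularity the request is after.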
  3. You might also want to put the flash drive in a Windows box and check it for errors.
  4. How many parity errors are you seeing?
  5. Be careful about running a correcting parity check on a schedule. If a drive starts to fail/has read errors during a parity check (which is very possible since the hardware is under higher stress when checking parity) you could end up with a failed disk and invalid parity.
  6. I would recommend looking up the datasheets for the drives and setting the temperatures based on that. For example, my SSD was showing as hot based on the default temperature setting for an HDD, and after looking at the datasheet I found my SSD could safely get much hotter.
  7. Assuming you are in the US, here is a good option (this one is already flashed to IT, initiator-target, mode). You will probably need a breakout cable too. https://www.ebay.com/itm/123396360972
  8. It has been my experience that some Dockers (especially anything that backs up data, such as CrashPlan, Duplicati, etc.) can drastically affect the speed of a parity rebuild because they scan the disks. You may want to turn off all of your Dockers, and possibly disable cachedirs if you are running it, until the rebuild completes. This is all assuming you are no longer getting errors from the disks.
  9. https://www.ebay.com/itm/123338823467 Fits in the daughter-board slot of the Jetway.
  10. I don't actually know. I assumed (shame on me) that it was a correcting check because that is the default everywhere else.
  11. I agree with what jonathanm has suggested, but I would still like a way (Limetech?) to make the automatic parity check after an unclean shutdown non-correcting. I had been thinking about writing a post about this for quite some time but kept putting it off. A few days ago I came home to an array that had been working fine earlier that day but was unresponsive. I forced it to power off and then turned it back on. Everything seemed fine, so I walked away, then suddenly remembered that a parity check had probably been kicked off, so I went to check it. To my surprise, about 2 gigabytes in, one of my drives had 450 or so read errors and got taken offline. I stopped the parity check and the array; the drive that was marked bad was still online and showed no problems in its SMART attributes (surprising how often this happens). I brought the system down and replaced the drive; it was rebuilt from parity and everything is fine. If that drive had started returning invalid data and that had been written to parity, this would not have turned out well for me. The point is that even manual array starts (when doing so will immediately start a parity check due to a dirty shutdown) can still be dangerous if a drive has problems right away, as one did in my case.
  12. No, you don't. If you are using apcupsd to talk to the physically connected UPS, you can connect other apcupsd instances to that one using the "net" UPS type in apcupsd. Effectively this lets other computers running apcupsd (using "net") see the status of the apcupsd instance that actually has a UPS connected to it. The communication is between the instances of apcupsd, not between the servers and the UPS (which the SNMP card could be used for); only one server talks to the UPS in this setup.
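As a sketch of what that looks like, assuming the machine with the UPS attached is at 192.168.1.10 (a placeholder address) and runs apcupsd's Network Information Server on its default port 3551, the client side of apcupsd.conf needs only:

```
## apcupsd.conf on a client machine with no UPS attached
## 192.168.1.10 is a placeholder for the server that has the UPS connected
UPSCABLE ether
UPSTYPE net
DEVICE 192.168.1.10:3551
```

The server with the UPS keeps its normal (e.g. usb) configuration but must have NETSERVER on so the clients can poll it.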
  13. I don't think the Tripplite uses the same protocol as the APC, which means you won't be able to get the plugin to read the UPS status. What you might consider is buying a cheap, very small APC UPS with a USB port and connecting that to your Unraid setup to monitor the state of the power in your home. You would still have the array connected to the Tripplite 1500 but use the monitoring data from the small APC to know when a shutdown is needed. You can configure the plugin in Unraid to shut down after the system has been on UPS power for 3 minutes, for example.
  14. As I mentioned in my original post, I have my scheduled check to be non-correcting. One concern I have is the automatic parity check that occurs when the array comes up "dirty". If the previous shutdown was caused by a drive going bad (which shouldn't but does happen) then the automatic parity check can invalidate your parity. Is there any way to make the automatic check non-correcting? In all cases where a parity check might fail I'd like to evaluate the situation before I allow any corrections just in case a faulty drive or other hardware is the cause.
  15. You can set up email notifications so that when the parity check completes it will tell you if there were any errors.
  16. Not sure if this topic belongs here, please move as needed. It has been my experience that the process of checking array parity is one of the more stressful activities the array typically performs and drives/hardware seem more prone to failure while the parity process is running. The reason for this post is that I've changed my scheduled parity check to NOT correct parity errors because if a drive fails (or starts to) you'll end up with parity that is valid to the array ("parity valid") but not useful to reconstruct a failed drive. This has happened to me more than once but thankfully it was a controller/power problem so the drives did not actually have an issue but parity had to be rebuilt. Should the default parity check be changed to NOT correct errors by default? I'm curious to know if anyone else has had a similar experience.
  17. Based on the error message it looks like it's not even connecting. If you open the Unraid terminal window and try "telnet smtp.mailgun.com 587", do you get a connection?
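If telnet isn't handy, the same reachability test can be sketched in a few lines of Python (host and port taken from the error above; adjust as needed):

```python
import socket

def can_connect(host, port, timeout=10):
    """Return True if a plain TCP connection succeeds -- the same thing
    the telnet test checks, before any SMTP conversation happens."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: can_connect("smtp.mailgun.com", 587)
# False here points at DNS/firewall trouble, not an SMTP auth problem.
```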
  18. The time is the same. Also, listing the contents through FTP and SFTP are also the same. I hadn't mentioned many specifics, but the UNRAID version I am running is 6.5.0 and the drive is a Seagate Iron Wolf 8TB, CPU is an Intel I7 7700K and the system has 16GB of RAM with 59% in use.
  19. I am using UNRAID as a storage location for ARQ Backup using SFTP. ARQ creates a directory structure where it stores backed-up files as encrypted fragments. In my case each folder contains about 4800 files. If I try to do an 'ls' or 'ls -al' from the console, the directory listing takes about 90 seconds to print. I've done this both from /mnt/user and /mnt/disk, thinking that it might be an issue with fuse. I have cache dirs turned off, and it seems that I should not need it to get a directory listing within a reasonable amount of time. Any suggestions or insight would be appreciated.
  20. After the preclear runs, you are given the option of sending back statistics through Google Forms. If you confirm the dialog, it comes back and says the report was sent; however, the XHR request that is made (against StatsSender.php) fails with the following error returned in the XHR response: <br /> <b>Warning</b>: array_merge() expects at least 1 parameter, 0 given in <b>/usr/local/emhttp/plugins/statistics.sender/StatsSender.php</b> on line <b>26</b><br /> I am on Unraid 6.4.0_rc14 and the plugin version is 2017.11.14. I know this does not affect the functionality of the plugin, but since it is an error someone is unlikely to stumble across, I thought you'd want to know. In case it has any bearing on the issue, I precleared the same disk twice and this occurred after the second preclear. I wasn't looking and don't know if the first stats report succeeded or failed.
  21. Seagate NAS 8TB $249.99 on Amazon. Prime eligible. This is the NAS and not the Archive edition. https://www.amazon.com/gp/product/B01BBKYNJG
  22. I just updated my Crashplan Docker and I'm having the same problem.
  23. I've just noticed that if I use rsync to copy files from a non-array location (it may not matter where the files are coming from, but this is the case for me) to a disk on the array that is being monitored by the file integrity plugin no checksums are being added to the written files. The rsync command I am using is: rsync -avHAX --progress /mnt/SomeDrive/ /mnt/disk1/Destination/ Also, would it be possible to add additional locations outside the array to have checksums added/checked? It would be great if I could monitor all of the files on my system.
  24. The wattage measurement is with a Kill A Watt (there is a UPS in there as well, but I have compensated for it). I am running Unraid version 6.0.1. The HDD controllers in use are 3 AOC-SASLP-MV8 (note: NOT the AOC-SAS2LP-MV8). The reason for the large PSU is that the case can support 24 drives, so I thought it best to purchase a PSU that can handle it. I was aware that PSUs can exhibit poor efficiency at low load, but I didn't think that was the cause. My guess is that it is more about AMD vs. Intel, which is why I wanted to find out what other people's values were. I don't know if my BIOS settings are optimal, but I will check (it's a bit of a hassle because the server is headless). The CPUs do seem to be stepping down in speed based on the info shown on the dashboard page.
  25. I'm curious to know what other people's Unraid power consumption is when your server is at idle (awake with all drives spun down) with a CPU utilization of no more than 5%. My consumption is about 120 watts under these conditions, which seems VERY high to me. I have checked the power consumption of the fans and they are not the issue. The server specs are as follows: 8GB RAM, 17 drives, ASRock FM2A85X Extreme6 motherboard, AMD A10-6800K APU, 80+ 850W power supply (don't remember the make/model at the moment).