FreeMan

Everything posted by FreeMan

  1. Since you are having issues, I certainly hope you're not trusting your only copy of valuable files to unRAID just yet. Step 1: Take a deep breath. Step 2: Write a nice, detailed post including your exact hardware specifications and exactly the steps you've taken. Step 3: INCLUDE THE DIAGNOSTICS.ZIP file. You can generate this from the GUI at Tools -> Diagnostics, or from the CLI if the GUI isn't working for you (a minimal CLI sketch follows this post list). Step 4: Wait patiently for one of the many experts to help you sort it out. They will do so quite willingly, for free, if you ask very nicely.
  2. Based on your subject line I understood the card was hot, but the screenshot threw me off. Rereading your OP now, I see what I missed the first time through. Pretty impressive that one failed fan could allow the case temps to climb that high! Unfortunately, not the kind of impression you're after, I'm sure.
  3. For future reference: The recommendation when buying an external drive is to plug it in via USB and run a couple of pre-clear passes on it (see the hedged example after this post list) to assure yourself that you're past the infant-mortality stage, then take it out of the case & install it internally. It will take a bit longer to run over USB, but it helps avoid moments of panic and being stuck unable to return a dead drive.
  4. As has been said several times, rants will not get you anywhere. Any software you buy, from any company, will not be supported if the developer goes bust, be it Microsoft, Oracle or anyone else. That's the risk we all take when using commercial software. I hope you don't use any OSS software, because there is zero guarantee of support there and a significant chance that the dev will get tired of the project and drop support completely. When I bought my license key, Lime-Tech was Tom. Now there are 2 or 3 employees. Everyone else you hear from here is a user who is happy to answer your questions for free. If you'll patiently work your way through your issues one at a time, I'd imagine you'll be very impressed with the level of support you receive from all these unpaid volunteers (99.9% of whom pay their own electric bills) - the depth of Linux and unRAID knowledge they're willing to share, for free, is quite astounding. You mentioned a USB3.0 drive and a USB3.0 port. It's been recommended that you not use a USB3.0 port, no matter what kind of USB key you have. For starters, have you tried making sure you're plugged into a USB2.0 port? Also, I do recall that there were issues with different brands of USB stick back in the 4.x days, but I don't know if those still apply. Someone with more knowledge than I will probably chime in on that.
  5. I'm going to say that if you post diagnostics for the failing machine, people will be able to stop asking questions and taking guesses and actually provide useful input. It might be worthwhile to post diagnostics for the non-failing machine as well (just make sure you label them very clearly), for comparison purposes.
  6. Frankly, I don't think 90°F is all that hot. I'll grant you that cooler is always better, but I have 1 drive that often gets to 37°C (98.6°F) and while I get annoyed that it's so warm, I've never (in 3yr, 10mo of spinning) had an issue with it. I wouldn't have panicked at those temps, but it is your server and you're free to do so. Just supplying a data point for you.
  7. Thanks, @johnnie.black, I figured that was the case since it was just sitting in a box and wasn't being used. Anybody need a controller card? Available cheap! May not work well with unRAID.
  8. I just discovered this controller card sitting in a dusty box under my desk. The Amazon link shows it's a "PCIe 2.0 x2 HyperDuo RAID Controller Card" using a Marvell 88SE9230 controller. I know I've seen many mentions of issues with Marvell controllers, but I don't know if this is one of the ones that folks have had problems with. Also, I don't know if there's a way to get it to expose each disk individually or if it's locked into some form of RAID (a couple of quick commands for checking that follow this post list). Any thoughts or input? I have a feeling I tried using this in my server many years ago under 4.x or an early 5.x version, but I just don't recall. If I get a negative response, I'll be sure to print it out and stuff it in the anti-static bag so I don't have to ask again in a couple of years when I rediscover it.
  9. Maybe try posting your system diagnostics (Tools -> Diagnostics), captured right after you try starting your VM and get the error. That should help the experts here figure out what's going on.
  10. Fix Common Problems' weekly scan ran and warned me, for several of my dockers, that: "Template URL for docker application cops is missing. The template URL the author specified is https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/cops.xml. The template can be updated automatically with the correct URL." I clicked the "Apply Fix" button, and that seems to have taken care of it (i.e. a rescan didn't show the issue any more). What causes this issue, and is it anything to be concerned about in the long run?
  11. Great info, @johnnie.black. Reading those 2 comments makes the tunables make so much more sense than I remember from the last time I read about them. Now I just need to find some time when I can take the server off-line to run the tunables tester and see what a difference I can make. We now return you to your regularly scheduled thread.
  12. Well... thanks for the semi-hack! And now that I'm figuring out the new forums, I think I'll get a notification now of when the official plugin is posted.
  13. I think that if anyone has an issue with that memory footprint, they should drag their server into the 21st century. Sounds like a good plan! Looking forward to the updates, and now that I discovered there's a plugin (and installed it!) I'll be sure to get them! You may want to consider updating the OP to indicate (up front and in big red letters) that there's a plugin now. I scanned the OP a day or two ago (i.e. looked at the bottom where the attachments are) looking to see if there was an update to the version I'd DL'd a while back. It wasn't until I started browsing back through previous posts that I started seeing mentions of a plugin.
  14. Please don't do that any time soon. I'm sure we're not the only ones who would miss you. Maybe Lime-Tech would be willing to host the images for you, since we're all counting on them being around for a while. Alternatively, maybe you could host them in a GitHub project. Just how much memory will the server take? Will it need to run all the time, or just when running a disk speed test? Would it be reasonable to fire up the server when a test is requested, then shut it down when the testing is done, or possibly 30 minutes after the last test has run / the server's been accessed? (i.e. if I run it for one drive, then test again for a 2nd drive, it wouldn't make sense to fire up the server & shut it down twice, but it would make sense to fire it up, run the first test, and start the timer; a couple of minutes later I run a 2nd test, the timer resets, etc. A rough sketch of that idle-timer idea follows this post list.) Sorry if none of that made sense to you - I'm not completely sure of the purpose of the server other than to collect the drive types that are being used in the wild.
  15. I've noticed that one disk takes an extreme amount of time to be checked by the File Integrity plugin (FIP). By extreme, I mean 30+ hours. I've got a dozen data disks:
      root@NAS:~# df -h
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/md1        932G  914G   18G  99% /mnt/disk1
      /dev/md2        1.9T  1.8T   22G  99% /mnt/disk2
      /dev/md3        932G  929G  2.9G 100% /mnt/disk3
      /dev/md4        1.9T  1.9T  9.5G 100% /mnt/disk4
      /dev/md5        3.7T  3.7T   33G 100% /mnt/disk5
      /dev/md6        1.9T  1.9T   16G 100% /mnt/disk6
      /dev/md7        1.9T  1.3T  571G  70% /mnt/disk7
      /dev/md8        1.9T  1.9T  151M 100% /mnt/disk8
      /dev/md9        932G  932G   55M 100% /mnt/disk9
      /dev/md10       2.8T  2.8T   25G 100% /mnt/disk10
      /dev/md11       3.7T  2.1T  1.7T  56% /mnt/disk11
      /dev/md12       3.7T  1.9T  1.9T  51% /mnt/disk12
      Of the 4TB drives (md5, 11 & 12), disks 5 & 12 run to completion in 6-10 hours (I don't recall exactly off the top of my head), but disk 11 regularly takes the 30+ hours. Using jbartlett's excellent drive performance test, all 3 of those disks are performing at right about the same speeds (FIP is currently processing disks 11 & 12, so there were some files open that probably slowed them down a bit):
      Disk 5:  HGST HDN724040ALE640 PK1338P4GT2D9B 4 TB - 133 MB/sec avg
      Disk 11: HGST HMS5C4040ALE640 PL2331LAG6W5WJ 4 TB - 100 MB/sec avg
      Disk 12: HGST HMS5C4040ALE640 PL1331LAHE2R0H 4 TB - 102 MB/sec avg
      So I chalk this up to the very large number of files on disk11:
      root@NAS:/mnt# ls disk5 -ARl|egrep -c '^-'
      74328
      root@NAS:/mnt# ls disk11 -ARl|egrep -c '^-'
      2311532
      root@NAS:/mnt# ls disk12 -ARl|egrep -c '^-'
      380921
      (A one-line loop for comparing file counts across all disks follows this post list.) Which brings me back to the question I asked here - how do I properly exclude directories that I don't want to have checked? Since asking that question, I changed my directory exclude settings to use full paths from root, then executed 'Clear', 'Remove', 'Build', 'Export' for each and every disk in turn in an effort to update FIP's understanding of what it's supposed to do, but I'm still getting bunker reports of hash key mismatches on directories that should be excluded. I've set the "Exclude" paths from /mnt/users; do I need to exclude /mnt/diskX instead? I would think doing this would be a major pain since I'm writing to user shares that can easily span multiple drives - to begin with I'd need to exclude the paths from every existing disk, then I would need to remember to update my FIP settings every time I add a new disk. (Granted, I don't do it that often, but that's still a royal pain.) I've confirmed that disk11 does contain a large portion of the files I'd like to exclude from FIP scanning. Is this an issue with how FIP is skipping the paths in the "exclude" setting or how I'm defining them, or is there something else I'm missing completely?
  16. That is exactly what I did. When I got my first disk converted from RFS to XFS, I started the FIP running against that one XFS drive. Each time I got a drive converted, I built & exported that disk and added it to the check schedule. Eventually I got the whole server converted to XFS and now all drives are being tested on a regular basis.
  17. If I understand this properly, this script is populating a database at "this" web site and that's what you're using to display all the cool graphics? I've added 4 new drive types, and I'm really looking forward to testing this new version. Keep up the great work!
  18. Well, I'm not sure where it came from, but I've deleted it and it hasn't come back yet, so I'm guessing that it won't. I'll consider this closed. Thanks for the input everyone!
  19. Then you should be able to tell it to trust parity, unless you have dual parity configured.
  20. Thanks, Jonnie. I'll keep an eye on it.
  21. I can't help you with the system recovery (you're in excellent hands with johnnie.black anyway), but I can vouch for CrashPlan as a backup solution. It's $60/year for a personal, single-machine account with unlimited disk space. I back up all my Win machines to my server and include that backup path in the paths that CP is pushing to the cloud. Backup speeds are fast; they're currently holding 1.2 million files at 3.3GB disk space for me, and I've made test and real recoveries quickly and with no issues. Once you get the immediate issues resolved (or maybe even sooner!) you may want to look into it. I'm using gfjardim's docker to run mine, though there are others.
  22. Additional info: The case hasn't been opened since the Feb/Mar time frame, so no SATA cables should have been dislodged. The server's been up for 56 days now, and the last reboot was for the 6.3.5 update.
  23. And... that's what I get for not reading. (Did I mention that I hadn't had my first cup of coffee yet and, therefore, really shouldn't have been messing about with server settings to begin with?) Also, thanks, @dlandon for another great tool to supplement unRAID! As you were.
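
Follow-up sketch for post 1 (generating diagnostics from the CLI): a minimal example, assuming the standard unRAID "diagnostics" shell command is present on your release; /boot/logs is the usual output location, so double-check where it lands on your version.

    # run from the local console or an SSH session; writes a timestamped zip you can attach to a forum post
    diagnostics
    # the archive normally lands on the flash drive under /boot/logs/
    ls -lh /boot/logs/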
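
Follow-up sketch for post 3 (the "couple of pre-clear passes" idea): this assumes the community preclear script is installed; the script name and the -c cycle flag are assumptions that vary between versions, so check the plugin's own help before running anything.

    # identify the freshly attached USB drive first, before pointing any destructive tool at it
    lsblk -o NAME,SIZE,MODEL
    # run two full preclear cycles on the new disk (replace /dev/sdX; flag syntax assumed, confirm for your version)
    preclear_disk.sh -c 2 /dev/sdX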
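
Follow-up sketch for post 8 (checking how the Marvell card exposes disks): plain, non-destructive Linux commands, nothing unRAID- or card-specific; if each drive attached to the card shows up as its own sdX device, it isn't being forced into a RAID/HyperDuo volume.

    # confirm the Marvell 88SE9230 is visible on the PCIe bus
    lspci -nn | grep -i marvell
    # with a couple of spare disks attached to the card, each should appear as its own block device
    lsblk -o NAME,SIZE,MODEL,TRAN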
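
Follow-up sketch for post 14 (the "shut it down 30 minutes after the last access" idea): a conceptual illustration only, not how the stats server actually works; the marker file and container name are made up, and the assumption is that every incoming test/request touches the marker file.

    #!/bin/bash
    # hypothetical idle-shutdown watchdog; each request is assumed to run: touch /tmp/last_access
    IDLE_LIMIT=1800   # 30 minutes, in seconds
    while sleep 60; do
        last=$(stat -c %Y /tmp/last_access 2>/dev/null || echo 0)
        if (( $(date +%s) - last > IDLE_LIMIT )); then
            docker stop diskspeed-db   # placeholder container name for the stats server
            break
        fi
    done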
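
Follow-up sketch for post 15 (comparing file counts across all data disks in one pass): a small variation on the ls | egrep counting above, using find to count regular files per disk.

    # count regular files on every data disk
    for d in /mnt/disk*; do
        printf '%s: ' "$d"
        find "$d" -type f | wc -l
    done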