heffe2001

Everything posted by heffe2001

  1. In the past when I've tried to just re-add a drive that had been flagged bad, I've had to wipe it to get the array to accept it again. What would be the procedure to NOT have to do that in this instance? Drop the disk from the array, add it back, and then rebuild, or will it allow me to do that? In the past when I've had drives drop due to issues like this, I've usually precleared them again just to make sure the SMART attributes hadn't been incremented or anything like that, but in this instance I know EXACTLY what happened, lol. Went ahead and dropped it from the array, started, stopped, added it back, and now it's rebuilding. Like I said, I've always run the clear again in the past to 'test' the drive, but I don't think it's necessary in this case.
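     For anyone who lands here later, the quick sanity check I'd run before trusting a drive again is something like this (just a sketch; /dev/sdX is a placeholder for the real device, and SAS drives may need the '-d scsi' flag):

        # Check the drive's own health report and error counters before re-adding it.
        smartctl -H /dev/sdX        # overall health self-assessment
        smartctl -A /dev/sdX        # attribute table (reallocated/pending sectors, etc.)
        smartctl -l error /dev/sdX  # any logged errors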
  2. I have a few empty drive sleds in a Compellent SC200, and went to pull one to zero out a 6 TB SAS drive. I didn't pay close enough attention and pulled an active drive in the array (the light on that sled is extremely dim, not bright like the rest), and the drive entered an error state (not for any write errors, but 1 read error). Upon reboot, it's got the drive marked as bad and being emulated, even though it's not ACTUALLY bad. What's the best way to handle this and get the drive back into the array? The process I've used in the past is to pull it, reboot, then reinstall it and run a preclear, but would I just be better off dropping the parity drive, re-adding it, and rebuilding parity? I'm 99.999% certain there are no data errors (mover wasn't running, and I don't have any processes that write directly to the array disks). Either way I'm going to have to preclear, and I'm guessing either way is going to require a drive rebuild, but parity is larger (10 TB) where this was an 8 TB, so I'm guessing rebuilding the data drive would be the faster route. Just wondering what other folks' process is for bone-headed moves like this, lol.
  3. The larger drive size was reported on 6.12.1, and reported correctly on the older 6.11.5 version. Both drives passed preclear this last attempt (the one running the preclear docker went through all the steps, except on the post-read it got to 63% before the docker process hung, but it still passed the drive anyway). The one difference on the 6.11.5 VM preclear: the output during the operations looked very different from the normal preclear on the self-updating status screen. I'm wondering if that has something to do with how Proxmox passed that drive through to the VM, though. I took that drive, put it in my main Unraid box, and it verified the preclear signature, so it appears it did at least complete correctly (I've verified both drives at this point, and both got valid signatures). I had also already done a media validation on both drives before this last preclear, doing a format and dd passes by hand, so I'm pretty confident the drives are fully functional. I'm currently rebuilding my array with one of them replacing a very old 4 TB drive, and as soon as that's done, I plan on replacing another with the 2nd drive. I NEED to replace a pre-fail ST8000AS0002 (slowly starting to get correctable media errors), but apparently the HGST 8 TB I got, plus the Exos, both format out to just a touch smaller than the AS0002, so they won't work for that. I'm debating just dropping parity, moving the data from that drive by hand to the new drives, and having the system completely rebuild parity again (I'd split the data from that failing 8 TB over to the excess free space on the replaced 4 TB drives if I do that). It's either that, or start looking into larger SAS drives (my parity is currently only a 10 TB drive, so I might just grab a couple of 10 TB SAS models and throw them in the mix, lol). Not sure if you've noticed, but several posts above yours dlandon says he's having an issue with larger drives and read failures, and that the preclear docker works on those issue-causing drives (it's using the older preclear script, I believe), so if you're just wanting to get them all cleared and set up, that would probably be a good way to get around the current preclear plugin script issues.
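     For reference, the by-hand media validation I mentioned was roughly this (a sketch, not my exact commands; /dev/sdX is a placeholder, and this wipes the drive completely):

        # Zero the whole drive, then read it back and confirm every byte is zero.
        # DESTRUCTIVE: triple-check the device name first.
        dd if=/dev/zero of=/dev/sdX bs=1M status=progress
        size=$(blockdev --getsize64 /dev/sdX)
        cmp -n "$size" /dev/sdX /dev/zero && echo "all zeros" || echo "mismatch"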
  4. I'm seeing a very similar problem using the plugin version of the preclear script on 2 8 TB SAS drives I got last week. Both get to near the end of the zero phase, hang, and keep restarting until failure (I got the same restart at 0% on, I think, the 3rd retry attempt). I installed the preclear docker image to test the original preclear script on my server (running 6.12.1 at the moment), and the first drive I tried has so far completed the whole zeroing process with the preclear script included in the docker version, and is in the final read phase on the drive. This was with 2 different manufacturers (one HGST, one Seagate Exos, both 8 TB). The Seagate ran through the zero process 3 total attempts, with 3 retries per attempt, and failed every time; the HGST failed a 3-retry pass before I started it using the docker. I've currently got the Exos drive running a preclear on a virgin 6.11.5 install with just the Preclear & UD plugins installed, as a VM on my Proxmox server with the drive passed through to that VM, and SO FAR it's at 83% and still going (VERY slowly; it's at, I think, 38 hrs on JUST the zero phase, lol, getting about a 54 MB/s zero rate). I'll let it go until it either completes or fails on the zero, and if it passes I'll move it back into my primary rig and try it under the docker image too. I DO notice that the Exos drive was shown as having a reported size of 8001563222016 bytes on the preclear plugin version on 6.12.1, where under 6.11.5 it's showing 7865536647168, so I'm not sure where, exactly, it was getting the larger size from. Same controller in both machines; the only difference is it's being passed through to the VM directly, and not running on bare metal. As for the HGST drive being in the final step (final read), I changed NOTHING on the server or plugins, just installed the plugin docker image (reports Docker 1.22) and started it using that instead of the canned preclear script with its defaults.
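     If anyone wants to compare what each box reports for capacity, this is more or less how I checked (a sketch; /dev/sdX is a placeholder, and sg_readcap comes from the sg3_utils package):

        blockdev --getsize64 /dev/sdX    # size in bytes as the kernel sees it
        sg_readcap /dev/sdX              # READ CAPACITY as reported by the SAS drive itself
        lsblk -b -o NAME,SIZE /dev/sdX   # quick cross-check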
  5. I had the same issue, and if I remember correctly, I had to move some directories/folders around in the data folder. It was either that or the actual mount point. On mine, it was looking for another directory inside the data folder, THEN the blocklists under that. I'd assumed it was because I was coming from another person's Technitium app and not the official one, though. Below is my directory structure under the appdata/ts-dnsserver directory, and my definition for the data directory on the docker page: If that doesn't work, look at how the permissions are set. I'm now also getting an issue where it tells me version information is 'Not Available' since I switched over to the official image. I can force-update it, but it never shows that there's an actual update available or that I'm 'Up To Date'. No biggie, I guess, since I was coming from a version that was WAY behind the actual releases (6.x I think).
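     If it does turn out to be permissions, this is roughly what I'd try (a sketch; the path and the nobody:users owner are the usual Unraid appdata conventions, so adjust to your setup):

        # Unraid dockers typically run as nobody:users (uid 99, gid 100);
        # make the whole appdata tree readable and writable for that user.
        chown -R nobody:users /mnt/user/appdata/ts-dnsserver
        chmod -R u+rwX,g+rwX /mnt/user/appdata/ts-dnsserver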
  6. It ended up failing again after 3 reboots. Those lines aren't in my syslinux.cfg. It's definitely not my usb keys (I've tried many, and even bought new Samsung brand keys).
  7. Just wanted to add that this doesn't just apply to Dell servers; I've been having issues booting my flash drive on my HP DL380p Gen8 machine for months, to the point that I changed out all my USB keys and have to do all updates by hand (and attempt to boot a dozen times or more until it just happens to work). Added the lines above to my syslinux.cfg file, and it boots every time. As long as updates don't rewrite the syslinux.cfg, it should survive updates now too (I haven't been able to do an automatic update since the 6.10.x series started in RC...).
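     For anyone else doing this by hand: the file lives on the flash drive at /boot/syslinux/syslinux.cfg, and the extra kernel parameters go on the append line of the default boot entry. Roughly like this (a sketch of the stock layout; the actual parameters are in the post I referenced above, so I'm not repeating them here):

        label Unraid OS
          menu default
          kernel /bzimage
          append initrd=/bzroot <extra parameters from the fix go here>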
  8. Just an FYI, 2020.05.27a took care of my issue; details are shown now with no problems.
  9. Yep, sure do. I'm currently a version behind latest, waiting to update after everyone here goes to bed, lol. I've verified that I do have director information on my movies, so not sure why it's getting an error on that line. Updated this morning to the latest Plex Pass version, and still the same.
  10. Still not able to get any detailed info on any show that comes up, gives me the same error still as posted above.
  11. When I click on detail, I'm getting the following error on every movie I've tested: 'Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/plexstreams/movieDetails.php on line 63', and the Year: and Rating: fields come up empty.
  12. Seems everything survived through 2 reboots (for other reasons), so it would appear it was that dynamix plugin causing the issue.. On another note, would it be possible to add an option to hide passed-thru drives?
  13. After removing that dynamix SCSI Devices plugin I get the desired results. So if anybody is having the issue of SAS drives (or RAID containers) not showing up in UD, and has that dynamix plugin installed (and isn't using an Areca controller), try removing it and see if it helps.
  14. With Preclear removed it still shows the same (none of the scsi- drives are listed). I'll look at the ST6000; it's not been used and I honestly can't even remember buying it, let alone installing it, lol. I do have the dynamix SCSI Devices plugin installed from when I had an Areca controller in the machine (long since removed); I wonder if that's got anything to do with the issues? I may try uninstalling it as soon as my docker restore is done and restarting the machine to see if it makes any difference.
  15. Those would be the ones. They've been giving those 2 warnings since installation, but have worked without issue up to now (and still seem to work OK when I manually mount them). UD won't actually see any of the scsi- devices (I have a couple of RAID arrays set up on an HP P420 controller that I use directly mapped to a Windows VM, but it didn't see them before I mapped them either). Going by that 'error', it's basically not finding a grown defect list: 0x1C 0x02 GROWN DEFECT LIST NOT FOUND. It's not seeing any of these devices: scsi-HUSMM1640ASS200_0QV1M8XA@ scsi-LOGICAL_VOLUME_001438033715B80-part1@ scsi-ST31000424SS_9WK2N0XS00009120NV0P-part1@ scsi-HUSMM1640ASS200_0QV1M8XA-part1@ scsi-LOGICAL_VOLUME_001438033715B80-part2@ scsi-ST6000NM0034_Z4D2NK0X0000R545W10B@ scsi-HUSMM1640ASS200_0QV1NH7A@ scsi-LOGICAL_VOLUME_001438033715B80-part3@ scsi-ST6000NM0034_Z4D2NK0X0000R545W10B-part1@ scsi-HUSMM1640ASS200_0QV1NH7A-part1@ scsi-LOGICAL_VOLUME_001438033715B80-part4@ scsi-LOGICAL_VOLUME_001438033715B80@ scsi-ST31000424SS_9WK2N0XS00009120NV0P@ I'd forgotten about even putting that ST6000 in there at all, since neither the system nor UD detected it, lol.
  16. I'm having the same issue. I have 2 SAS SSDs installed that were showing up; I tried to add another drive to that controller and it wouldn't show up, so I restarted the server just to see if the controller could even see the drive (it didn't; that drive is definitely bad, but it does still see the 2 SAS SSDs). On restart, Unassigned Devices can't see those 2 SAS drives now, but you can find them in /dev/disk/by-id to identify them, and even manually mount them if you want (both contain XFS partitions; one I'm using for my Plex transcoding cache, and the other I'm going to use for my Docker appdata). I can always edit the go file to mount those 2 drives where they need to be by their by-id info (in case they move from their sd? designation), roughly as sketched below, but they WERE working in UD, and it will NOT see them now (through about 4 boots so far, and this damned HP DL380p G8 takes forever to reboot, lol). I'm attaching my current diagnostics to this post. Also, is there a reason you can't mount a drive in UD wherever you want (I believe this was the old behavior), instead of being forced to mount them in /mnt/disks/? Or would it be possible to make that a toggleable feature? media01-diagnostics-20200505-0102.zip
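     The go-file workaround I mentioned looks roughly like this (a sketch using the by-id names from my earlier post; the mount points, and which SSD goes with which use, are just my guesses):

        # Additions to /boot/config/go: mount the two SAS SSDs by ID so it
        # doesn't matter if their sd? letters shuffle between boots.
        mkdir -p /mnt/plex-transcode /mnt/docker-appdata
        mount /dev/disk/by-id/scsi-HUSMM1640ASS200_0QV1M8XA-part1 /mnt/plex-transcode
        mount /dev/disk/by-id/scsi-HUSMM1640ASS200_0QV1NH7A-part1 /mnt/docker-appdata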
  17. When did creating my own directory under /mnt (/mnt/docker-appdata/data) become a forbidden mount point for the docker data area? I needed to shut down my dockers so that I could replace the drive the data is stored on (usually mounted under /mnt/docker-appdata/, with the data directory located in the root of that particular drive). When I tried to turn Docker back on, that directory turned red, and the system said I needed to enter it in the required format. This has always worked in the past; it's been set up that way on my system since dockers were enabled. I don't WANT to mount that under /mnt/disks (another thing I have a problem with) and map from there, and I definitely don't want my docker information on the array. I also have a problem with Unassigned Devices not allowing me to mount new drives wherever I want under /mnt (it forces me to use /mnt/disks), although I had it mounting several drives under the main /mnt directory (and still have 2 that do, unless I try to modify them, in which case my mount points get screwed up). Can this 'feature' of not allowing me to put stuff where I want be disabled, or am I going to have to reconfigure my entire setup because someone else decided they knew better where my stuff should be mounted?
  18. Would it be possible to get a version of this for the Tesla cards? I have a K20 I'd love to use for transcoding on my Plex server for my remote users (I don't use H.265, so the Tesla card would work great for my needs), but it won't work with the stock Nvidia drivers. Nvidia apparently has a separate driver series for the Tesla cards, the latest of which is available here: https://www.nvidia.com/Download/driverResults.aspx/158193/en-us
  19. What client machines are you using? I saw a post on the Plex group last night about issues with some Android boxes doing what you're describing. Go into the Plex settings on that client, find 'Use New Player', disable it, and see if it works. My father-in-law is having the same issue, and I'm going to try to walk him through that this afternoon to see if it fixes his issue as well. Thread here: https://forums.plex.tv/t/new-player-on-android-tv-not-working/545682/2
  20. Anybody happen to be using an HP DL380p Gen8 with one of the Tesla K20 compute boards for things like Plex transcoding or Folding@home? They're getting cheaper (the K20s), and it'd be nice to take some CPU load off my server when friends are watching stuff remotely (or even set it up with Folding@home for the COVID-19 stuff).
  21. I'm using a bonded pair of 10g ports on my HP server, and it's working just fine with RC3, so bonding is installed and working.
  22. I've NEVER had good luck with the small APC bricks, especially on servers. I also use a minimum of 1000 VA (I have a rack-mount SMT1000RM2U Smart-UPS 1kVA powering my server and external drive chassis, and a Back-UPS Pro BR1500G 1500 VA that powers my switch, router, wireless, etc.). Those little power-bar-like backups aren't really intended to run a server; they're more for desktop PCs anyway. What's most likely happening with your dirty shutdown is that the machine hits the battery threshold to shut down (you've got it set for 5%, after a minimum of 5 minutes), and there's not enough juice left to power down safely, so it drops out before Unraid fully shuts down. Are you running dockers or VMs on that machine by chance? My only complaint with my APCs is that I didn't verify that the Smart-UPS I have was expandable, otherwise I'd have it all running on that one box. My only other complaint is the Norco external array; it's only got a single power plug, no redundant supplies in it :(. If the HP P2000 drive trays weren't so damned expensive, I'd replace it with one of those in a heartbeat, lol.
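     For what it's worth, Unraid's UPS settings page is a front end for apcupsd, so the thresholds you described map to these apcupsd.conf directives (the values shown are just the ones you mentioned, for illustration):

        # /etc/apcupsd/apcupsd.conf (normally managed from the Unraid UPS settings page)
        BATTERYLEVEL 5    # start shutdown when charge falls to 5%
        MINUTES 5         # or when estimated runtime falls to 5 minutes, whichever hits first
        TIMEOUT 0         # 0 = rely on the two thresholds above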
  23. That was what I was kind of worried about, not knowing who actually made them. Could be anybody really.
  24. So basically (according to their page), white-label brand-name drives. I'll probably just go with a Seagate Exos ST12000NM0007 instead. Just thought with the price difference, someone here may have tried them out.
  25. Out of curiosity, anybody here used any of these Water Panther drives in an unraid setup, or even heard of them before? https://www.newegg.com/p/pl?d=Water+Panther Looking to replace my existing parity drive with a 12tb 7200rpm SAS drive, and was looking at what was available out there..