dtlokey

Everything posted by dtlokey

  1. Hi all. I recently started getting this error today (as far as I can tell, I was in Radarr as recently as a couple of days ago with no issues). It only happens with Radarr. Once I start the Docker container, it immediately repeats this error message, with slight variation, every second, and the container is inaccessible during that time. My diagnostics file is attached. The things that have changed recently are a drive replacement and a parity rebuild. The container lives on my cache drive. I was also using unBALANCE recently to move some files around so I could reformat a drive, but that finished a few days ago and Radarr was working fine afterward. As far as I can recall, I don't have any Docker-related files on my array; they should all live on the cache drive. Any help would be appreciated. I'm currently running Unraid 6.11.1. Thanks! mediaserver-diagnostics-20221019-0808.zip
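     In case it's useful, here's roughly how I've been grabbing the repeating message from the console, assuming the container is actually named "radarr" (docker ps -a shows the real name on your system):

         # dump the last chunk of the container's log to the flash drive for attaching here
         docker logs --tail 100 radarr > /boot/radarr-log.txt

         # double-check which host paths the container maps, to confirm nothing points at the array
         docker inspect radarr | grep '"Source"'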
  2. I feel like this is disk activity, as the temps ramp up for just that drive. I haven't done a parity check in about two weeks, and I reset the counts after it, if I recall correctly.
  3. Hello Unraid community, I'm currently running Unraid 6.9.2 on a Ryzen 7 1700 build with an ASUS Prime B350-Plus motherboard, 24GB of memory, and two LSI Logic SAS 9207-8i controllers. I've had an issue for a while now (pre-6.9.2) where Disk 5 goes through a constant read process for a reason I cannot identify. I don't believe I've ever installed any docker/appdata folders on any drive other than my cache drive, and I don't see any currently in the individual disks' directories, so I'm stuck as to why this older disk, holding older content that typically doesn't get accessed, is by far the busiest disk in the array for reads (currently sitting at 23,600,000 reads, versus my current disk with space available sitting at just 2 million, over a period of just under 10 days of uptime). Diagnostics attached. Any help would be greatly appreciated, as I've had this issue for a long while now. Other than the high read count, the drive itself hasn't given me any issues as of yet (knocks on wood) in all the time I've been using it. mediaserver-diagnostics-20210714-1713.zip
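     In case it helps, this is roughly how I've been trying to catch whatever is hitting the disk, assuming inotify-tools is installed (it isn't stock on Unraid; I grabbed it through the NerdPack plugin):

         # watch file opens/reads on disk5 in real time (can be slow to start on a big
         # tree, since it has to set a watch per directory)
         inotifywait -m -r -e open,access /mnt/disk5

         # list any processes that currently have files open on that disk
         lsof /mnt/disk5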
  4. Sorry, it's been a crazy week. I finally got around to situating the server and running the SMART report; within the Unraid UI it states it passed without error. Here's the attached report. mediaserver-smart-20210702-0215.zip
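     For anyone doing this from the console instead of the UI, the equivalent is roughly this (substitute the right device letter for sdX):

         # start the extended (long) self-test; it prints an estimated completion time
         smartctl -t long /dev/sdX

         # after it finishes, the self-test log shows pass/fail
         smartctl -l selftest /dev/sdX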
  5. Hello Unraid community, I'm currently running Unraid 6.9.2 on a Ryzen 7 1700 build with an ASUS Prime B350-Plus motherboard, 24GB of memory, and two LSI Logic SAS 9207-8i controllers. I recently had some issues that prompted me to relocate my server to another room. Once I got it back up and running, it initially did not find my parity drive (which I'm sure was due to some jostling that likely loosened a data/power cable). After I completely removed and reseated the connections for the drive and restarted Unraid, it came up as expected and things were as they once were. Fast forward about 5 days, and I can see that while the server was running it encountered some errors, and the drive is no longer online. I haven't moved or touched anything since getting it up and running, so I want to make sure things look OK before I walk through the process of trying to re-enable it as the parity drive. Any help/reassurance would be appreciated! Thank you. mediaserver-diagnostics-20210622-0910.zip
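     Before touching anything, my plan for a sanity check from the console (sdX being whatever the parity drive shows up as):

         # overall drive health verdict
         smartctl -H /dev/sdX

         # the attributes that usually matter after a drive drops offline; a climbing
         # CRC error count tends to point back at cabling rather than the drive itself
         smartctl -a /dev/sdX | grep -i -E 'reallocated|pending|crc'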
  6. I don't have any USB devices plugged in other than my boot flash. I did go ahead and update to the latest version, 6.3.0, and updated all of my other plugins. I'll keep my eye out for the error, but it hasn't shown up since; the server's been running for 4 days now. I had to reboot after the OS update, but since posting this there have been no unexpected lockups. I'll have to look into the BIOS update next. The one item I am seeing in my email every morning around ~5am is this message: "error: Ignoring tor because of bad file mode - must be 0644 or 0444". Not sure what this pertains to; in a quick search I saw it could be related to the preclear plugin being out of date, but my preclear plugin is currently up to date. Any ideas? I'll post my latest logs. mediaserver-diagnostics-20170209-2241.zip
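     Following up on the "Ignoring tor" line: it reads like crond refusing a cron file named "tor" because of its permissions, so my plan, assuming the file lives somewhere under /etc/cron*, is:

         # locate the file crond is complaining about
         find /etc/cron* -name 'tor*' -ls

         # if it shows up, set the mode crond insists on (adjust the path to what find reports)
         chmod 0644 /etc/cron.d/tor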
  7. Hi all, I've been dealing with this problem on and off for the past few months. The time between freezes can vary from a few days to a few weeks. I recently (as in today) added the Fix Common Problems plugin and made the suggested adjustments, but the summary suggested I submit diagnostics to the forum for assistance due to some call traces, so I'm hoping someone can point me down the right path back to stability, and in turn, back into the WAF zone lol. mediaserver-diagnostics-20170202-0955.zip
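     One thing I'm doing in the meantime, since /var/log sits in RAM and a hard freeze takes the evidence with it: copying the syslog to the flash drive every few minutes so there's something to read after the next lockup. It's crude and adds writes to the flash, so I'll pull it once this is solved:

         # run once to create a spot for the copies
         mkdir -p /boot/logs

         # cron entry: snapshot the syslog every 5 minutes
         */5 * * * * cp /var/log/syslog /boot/logs/syslog-latest.txt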
  8. It's been a long time since I posted this issue. I got a new PSU and also wanted to give it a good amount of time and use before I posted my issue as resolved. It turned out the PSU was causing the issues I was having. I've not had a single issue since moving to a PSU with a single 12V rail, and it's been approximately two months and change since going that route. It seems that was the solution to my issue.
  9. "Yes, it could easily cause the issues you are having. Most multi-rail power supplies use only one of the rails to power all the hard disks, and they frequently share it with the motherboard as well. The other rails are used for the power-hungry video cards used by gamers. Many users have reported issues when going over 6 or 7 drives on a multi-rail supply. Since your PSU rails are rated at 20 amps and 24 amps, but we have no way to know which one is used for your disks, that might be why you did not have issues until you went to 9 disks." - Joe L. Thanks, I'm going to grab a Corsair CX600M and go from there. If it's any help, here's the SMART report (long version) that I just ran. Once I receive the PSU I'll post results. smartl.txt
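     The rough math that sold me, for anyone else on the fence: a 3.5" drive can pull somewhere around 1.5-2 amps on the 12V line at spin-up (the datasheet for your particular drives will have the real figure), so 9 drives x ~2A is roughly 18A of surge, which already crowds a 20A rail before the motherboard takes its share. A single-rail unit removes the guessing about which rail feeds what.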
  10. I also just noticed that this is the only disk in my array that won't spin down with the others....
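     To check whether it's genuinely spinning or the GUI is just out of sync, the drive can be queried directly (sdX = whichever device this disk maps to):

         # reports "active/idle" if spinning, "standby" if spun down
         hdparm -C /dev/sdX

         # force it into standby; if something wakes it right back up, a process is touching it
         hdparm -y /dev/sdX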
  11. Thanks, I just checked, and the cable I have does indeed have the bump for the drive giving me the most issues, though it's not a latching-style cable. I am running 9 drives total on a PSU rated at 600W, but not on a single 12V rail; could that be causing these issues? That was going to be my next move if the logs didn't point to anything obvious.
  12. I recently went through the process of trying to fix a gremlin that still seems to exist in my Unraid server. I have no parity drive installed. The issue I originally had was an HDD constantly erroring out when I'd try to write files to it. I did a smartctl long test and found no issues with it, swapped cables, etc., and got to the point where, once I rebooted, it wouldn't even let me add the drive back to the array. Long story short, I ended up swapping motherboards, changing out my SATA controller card, and buying a few new drives to replace the one I thought was bad (even though it worked fine for weeks hooked up to my Windows 7 machine). Fast forward to today: new board in, new controller card, new HDD along with the old disks, still no parity; I just wanted to get my server up and running again. Everything worked fine for about 2 weeks, but then a couple of days ago the new drive (which I precleared multiple times prior to adding it to the array) started giving me a massive number of errors and won't let me add new files to it (I had already added ~200GB to this drive before the issue appeared). I'm at a loss; I have the same issue as before on a new drive with new hardware. The only remaining old parts are the memory, flash drive, PSU, other original HDDs, and processor. This issue is driving me a bit up the wall, as my server was rock solid for years until about 5 months ago. I'm currently running:
     Unraid 5.0
     Asus M5A78L-M LX Plus
     4GB DDR3 1333
     Thermaltake TR2 600W PSU
     Vantec UGT-ST644R PCIe 4-channel SATA RAID host card
     AMD FX 6300
     1x 2TB Samsung
     5x 2TB WD Green
     2x 2TB WD Red (both of these were giving me issues; now only one is)
     The PSU has 6 SATA connectors, so the other two drives are running off Molex-to-SATA adapters. I recently attempted to run a SMART report on the WD Red that's giving me issues now (the drive's only been in use for a few weeks) and I get this message:
         root@Media:~# smartctl -a -A /dev/sdd
         smartctl 6.2 2013-07-26 r3841 [i686-linux-3.9.11p-unRAID] (local build)
         Copyright © 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
         === START OF INFORMATION SECTION ===
         Vendor: /3:0:0:0
         Product: scsiModePageOffset: response length too short, resp_len=47 offset=50 bd_len=46
         >> Terminate command early due to bad response to IEC mode page
         A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.
     I've attached the syslog for when the errors occur. Please help me figure out where these issues stem from. syslog.zip
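     From what I've been able to dig up, that "bad response to IEC mode page" usually means the controller is presenting the drive through a SCSI translation layer, so smartctl's auto-detection gives up. Telling it to use the SAT pass-through is worth a shot (a guess on my part, not something I've confirmed on this card):

         # explicitly use the SCSI-to-ATA (SAT) pass-through instead of auto-detection
         smartctl -a -d sat /dev/sdd

     If the Vantec card won't pass SMART through at all, hanging the drive off a motherboard SATA port and re-running the report would at least rule the card in or out.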
  13. 1. It is an ASRock motherboard. 2. I did not run a parity check before the upgrade. I've currently reverted back to 4.7 and it's running as it did prior to the upgrade; I just cannot add my 3TB drive for the time being. Should I run a parity check now, while running 4.7, to make sure there are zero sync errors and disk errors? Also, I currently do not have, nor have I ever had, a parity drive at all. The 3TB was to be my parity drive, but I read that I needed to upgrade to 5.0 in order to utilize drives greater than 2.2TB; thought I'd mention that. Thank you for your reply; I'm really lost as to what to try now. Also, is there any inherent issue going from the AiO version of 4.7 to the i386 stable version of 5.0? I did apples to apples (AiO to AiO) but just wanted to be sure.
  14. Any ideas, guys? Is this a simple fix? I've done as much searching as I could beforehand, so I'd really appreciate some help with this.
  15. Hi, I'm currently running unRAID Server Pro version 5.0-rc16c, coming from ver. 4.7 per the instructions provided. I powered down my server and removed the jump drive, then followed the instructions on the wiki for prepping & migrating from 4.7 to 5.0. I then safely removed the jump drive, reinserted it in my server, and powered it up. It boots to the main menu fine and I see all my drives. The problem is that all my disks are listed as correct except disk4, which states the disk is the wrong one, even though it's actually not. In the identification field's drop-down it lists "WDC_WD20EARS-00MVWB0_WD-WMAZA3735795" as the disk serial, but right below that it lists "WDC_WD20EARS-00MVWB0_WD-WMAZA3735795", which, from my old screenshot (I wasn't sure if I would need to do any configuring), happened to be the drive's name under 4.7. So it's the exact same drive, but the naming convention for the serial seems to have changed for some unknown reason. Can someone help point me in a valid direction to get this resolved? Again, the ONLY thing I did prior to upgrading from 4.7 to 5.0 was power down and remove my flash drive to do the upgrade on another computer; I didn't touch anything within the server internals at all. system_log_unraid_5.0.txt
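     One check that should settle whether it's the same physical drive regardless of what the GUI calls it, straight from the kernel's view (the serial is the tail end of the identifier):

         # the by-id names embed model + serial; if this matches, the drive itself is the same
         ls -l /dev/disk/by-id/ | grep WMAZA3735795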