somail

Members · 45 posts · Gender: Undisclosed


  1. Thanks itimpi. I appreciate the second set of eyes.
  2. Hello, I previously had two drives produce a number of read/write errors, one of which was my parity drive. I re-seated and swapped cabling, and when I booted everything up the parity was in an error state. I then re-seated the cabling again and ran extended SMART tests on both drives to check for issues. The SMART tests completed without errors, so I plan to re-enable the parity drive. Before doing so, however, I would like someone to review the attached SMART reports for both drives and confirm there are no red or yellow flags. Thank you.
     WDC_WD80EMAZ-00WJTA0_7JHHEJ2G-20191004-2308 parity (sdd) - DISK_DSBL.txt
     WDC_WD80EMAZ-00WJTA0_7JKUG0KC-20191004-2308 disk2 (sdc).txt
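When reviewing SMART reports like these, the usual red/yellow flags are nonzero raw values on a handful of attributes (5, 187, 197, 198, 199). A minimal sketch of that check, using illustrative attribute lines in smartctl's table format (not taken from the attached reports):

```shell
# Illustrative sample of smartctl attribute-table lines; NOT the actual reports.
cat > /tmp/smart_sample.txt <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       3
EOF

# Flag the classic trouble attributes whenever their raw value is nonzero.
awk '$1 ~ /^(5|187|197|198|199)$/ && $NF+0 > 0 {print "check:", $2, "raw =", $NF}' /tmp/smart_sample.txt
```

A nonzero UDMA_CRC_Error_Count, as in this made-up sample, typically points at cabling rather than the drive itself, which would be consistent with errors that cleared after re-seating.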
  3. Hi, due to an issue accessing my Plex docker, I logged into my Unraid server and saw my parity & disk 2 were experiencing massive read/write errors (see screenshot). I immediately tried to stop the array, but the system would not respond; the array tried to stop for about ten minutes, and I was forced to do a hard shutdown since nothing was responding. I then rebooted so I could run extended SMART tests on the parity drive and disk 2. Everything mounted and acted normally, and dockers ran fine. I left the system on overnight to run the extended SMART tests and forgot that my parity check was scheduled to run at midnight. The check ran for eight hours (about 40% done) and found 427 sync errors on disk 2 (I know anything more than zero is a lot, but I do not know if 427 is catastrophic or just a single file). Fast forward 12 hours, and the parity is still showing odd R/W numbers but no errors (see second screenshot). This leads me to think the parity drive is the issue, or I have an Unraid issue. Unraid also appears to be confused about whether the drive is spun up or down: the Main tab shows it as spun up, while the SMART section in the disk's settings shows it as spun down. The dashboard shows all disks as healthy per SMART. (Note: in screenshot 1 the spin up/down arrows are missing; in screenshot 2 the parity drive shows as spun up but no temp is registered.) My questions:
     1. Why was the parity or disk 2 not disabled by Unraid when the R/W errors occurred? I thought that was the default behavior.
     2. What is the best way for me to test the health of my parity drive and disk 2? Based on the SMART tests, both appear fine.
     3. Is there a way to know what files were involved in the 427 sync errors?
     I have also attached my diagnostics output. Thank you for any help. server-diagnostics-20191003-0236.zip
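On question 3: a parity check logs the sector of each sync error to the syslog, so the first step is pulling those sector numbers out. A sketch, using an invented syslog excerpt (the exact message format varies by Unraid version, so treat the pattern as an assumption and check the real log):

```shell
# Illustrative syslog excerpt; NOT from the attached diagnostics.
cat > /tmp/syslog_sample.txt <<'EOF'
Oct  3 00:00:01 server kernel: md: recovery thread: check ...
Oct  3 01:12:45 server kernel: md: recovery thread: P incorrect, sector=104857600
Oct  3 01:12:45 server kernel: md: recovery thread: P incorrect, sector=104857608
EOF

# Pull out just the sectors the parity check flagged.
grep -o 'sector=[0-9]*' /tmp/syslog_sample.txt | cut -d= -f2
```

Note that sync errors corrected on the parity drive itself do not map to files (parity holds no filesystem); mapping sectors on a data disk back to files needs filesystem-level tools on top of this.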
  4. Sorry for the slow update, but I wanted to give closure to the issue. I swapped the SSD to the onboard controller and it is recognized in 6.4.1. The HDD that I swapped to the expansion SATA controller is recognized as well. For whatever reason, the combination of my expansion controller, my Samsung SSD and 6.4.1 creates issues. The previous IRQ errors in my log are gone as well.
  5. Attached. I thought that issue was due to using the old version of the Unassigned Devices plugin (which I have never used), and I also thought the problem would appear on 6.4.0 as well (which it doesn't) - happy to be wrong, as it seems like an easy fix. How do I know if I have an incompatible partition? server-diagnostics-6.4.0.zip
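One way to answer "how do I know if I have an incompatible partition" is to look at the partition's starting sector, which is what the upgrade notes' partition check is about. A sketch using an invented fdisk listing (on the real system you would run `fdisk -l` against the cache device; the device name here is hypothetical):

```shell
# Illustrative fdisk output; NOT from this system.
cat > /tmp/fdisk_sample.txt <<'EOF'
Device     Boot Start       End   Sectors   Size Type
/dev/sdb1        2048 488397167 488395120 232.9G Linux filesystem
EOF

# Show where the partition starts; compare the sector against what the
# upgrade notes thread describes as a compatible layout.
awk '$1 ~ /^\/dev\// {print $1, "starts at sector", $2}' /tmp/fdisk_sample.txt
```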
  6. I upgraded from 6.4.0 -> 6.4.1 and my SSD cache disk disappeared. I have it connected to a PCI SATA add-in card, although the issue appears specific to the cache drive, as the HDD attached to the same add-in card is reported correctly in the Unraid GUI. I downgraded to 6.4.0 and the cache drive appears again. My logs from 6.4.1 (attached) show new errors, but I am not knowledgeable enough to decipher them. To be clear, the cache drive does not appear at all (even when the array is stopped) in 6.4.1. Given I can see and mount it in 6.4.0, I do not think this is related to the partition issue identified in the upgrade notes thread. The drive is a Samsung_SSD_850_EVO_250GB. Was something new added in 6.4.1 that would impact the system's ability to see my cache drive? Thanks for the help. Computer locking up after 6.4.0 server-diagnostics-20180206-1900.zip
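A quick way to tell whether the drive is invisible below the filesystem layer is to check whether the kernel ever brought the SATA link up. A sketch using an invented `dmesg` excerpt (the line format is an assumption; check the real output with `dmesg` on the server):

```shell
# Illustrative dmesg excerpt; NOT from the attached logs.
cat > /tmp/dmesg_sample.txt <<'EOF'
[    2.101] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.105] ata5.00: ATA-9: Samsung SSD 850 EVO 250GB, EMT02B6Q, max UDMA/133
[    2.210] ata6: SATA link down (SStatus 0 SControl 300)
EOF

# A "link down" line for the port the SSD sits on means the controller
# never saw the drive, i.e. the problem is driver/controller level,
# not an Unraid partition or mount issue.
grep -E 'SATA link (up|down)' /tmp/dmesg_sample.txt
```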
  7. Thanks Johnnie. I'll keep an eye on the memory usage and see if I can find anything. I will also remove some plugins to see if they are contributing to the issue. A quick fix is probably buying more memory, but I'm not sure if 8GB vs. 4GB (3.5GB) is overkill or standard these days....
  8. Well at least it seems random... Since upgrading from 6.3.5 to 6.4, I am having issues accessing my docker apps and several tabs in the Unraid GUI, mainly the Docker tab and the Dashboard. "Main" always loads fine, but trying to switch to other pages will often result in the page taking a long time to load or timing out. The same behavior is going on with my dockers - sometimes they are responsive and sometimes it's like the server is not even on. I am normally able to reestablish a connection to a docker app if I restart it (assuming I can access the Docker page in the Unraid GUI). I did not have this problem on 6.3.5 and I have made no hardware changes. Dockers I am running: Duckdns, Plex, Plexpy, sabnzb, sonarr. Attached is my diagnostics file. Any troubleshooting advice would be welcome. Thanks. server-diagnostics-20180127-2025.zip
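If memory pressure is behind the GUI timeouts, the simplest check is to compare total RAM against what is actually still available; persistently low MemAvailable (plus OOM-killer lines in the syslog) would point at RAM as the bottleneck. A minimal sketch reading `/proc/meminfo` directly:

```shell
# Snapshot of total vs. currently available memory, in GiB.
awk '/^MemTotal:|^MemAvailable:/ {printf "%s %.1f GiB\n", $1, $2/1048576}' /proc/meminfo
```

Watching this while opening the Docker tab (or simply running `free -m` a few times) shows whether available memory collapses when the GUI hangs.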
  9. Thanks for the help everyone, and sorry for the slow reply on getting back to this thread. I checked through all my mappings and could not find any incorrect mappings. I decided to go for broke, deleted my docker image, and started from scratch. Everything now works and my data usage looks like this:
     Label: none  uuid: 83fdbf07-310a-4331-b6ca-41461ab16872
     Total devices 1 FS bytes used 1.94GiB
     devid 1 size 10.00GiB used 4.04GiB path /dev/loop0
     btrfs-progs v4.0.1
     Much better than the 10GB/10GB and 8.3GB file system usage from my original post. I wish I knew what caused this, but I guess after 6 months of continual docker usage maybe this is to be expected... I am attaching a screenshot of my mappings if someone wants to take a quick look to see if something stands out. Thanks
  10. So I woke up to my Sonarr (NEEDO) docker no longer working. The Sonarr webpage loads, but nothing from the database is pulled (basically a blank screen with the Sonarr background). Given the errors I am seeing, I figure this is not related to the container and Sonarr was just the first casualty. My first attempt was to restart Sonarr. This did not help, and the Sonarr log is showing SQL "disk is full" errors. Log (errors at the bottom): http://pastebin.com/smPYAbv4 I then tried reinstalling the docker and pointing to the same mappings as my existing setup to keep my configuration. This gave more disk is full errors: http://pastebin.com/VgHQYfHS This also appears at the bottom of the update page:
     Warning: file_put_contents(): Only 0 of 4 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix.docker.manager/dockerClient.php on line 297
     Warning: file_put_contents(): Only 0 of 24 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix.docker.manager/dockerClient.php on line 453
     I then thought that my docker image, which is set at 10GB, was full. Assuming I am reading the docker settings page correctly, I still have plenty of space:
     Label: none  uuid: 7216ff55-ac0b-4da2-86ed-f17424d2b442
     Total devices 1 FS bytes used 8.30GiB
     devid 1 size 10.00GiB used 10.00GiB path /dev/loop0
     btrfs-progs v4.0.1
     I am using my cache disk for the docker image and it currently has 43GB free. Last night it was down to 1.2GB, but I'm assuming that should not have caused the issue. I also updated from 6.0 -> 6.01, rebooted, and the problem continues. The container's name now says "berserk_curie", which is mildly entertaining (see attachment). Does anyone have an idea on how to fix my berserk curie?
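The btrfs output above is actually the smoking gun: in `btrfs fi show`, "used" means space *allocated* to chunks, not live data. With size 10.00GiB and used 10.00GiB, every byte of the image has been claimed by chunks even though only 8.30GiB holds data, and btrfs returns ENOSPC ("disk is full") when it cannot allocate a new chunk. A small sketch of the arithmetic using the exact numbers from the post:

```shell
# Numbers quoted in the post: image size, chunk-allocated space, live data.
size=10.00; allocated=10.00; live=8.30

awk -v s="$size" -v a="$allocated" -v l="$live" 'BEGIN {
  # unallocated = room left for new chunks; slack = dead space inside chunks
  printf "unallocated: %.2f GiB  slack inside chunks: %.2f GiB\n", s - a, a - l
}'

# On the live system, the usual fix is a balance to repack partly-used
# chunks and free up unallocated space (mount path assumed; adjust to
# wherever the docker loop image is mounted):
#   btrfs balance start -dusage=50 /var/lib/docker
```

Zero unallocated space with 1.7GiB of slack inside chunks is exactly the state where writes fail while the settings page still looks like there is room.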
  11. Here is how I solved it (sort of). Steps:
     1. Turn both dockers off (they should both be set to host mode).
     2. Turn SabNZB on.
     3. Go into the SabNZB settings (in the app itself) and change the port. I use 8081.
     4. Save and restart SabNZB.
     5. Start PlexWatch.
     Both should now work side by side. The only issue with this configuration is that SabNZB's port will reset each time Unraid is restarted. I am unsure why all other SabNZB settings stick after a reboot but the port always reverts. Just a quirk I guess.
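Before or after applying the steps above, it helps to confirm which process actually owns port 8080, since two host-mode containers cannot share it. A sketch using an invented `ss -tlnp` listing (run the real command on the server; process names and PIDs here are hypothetical):

```shell
# Illustrative `ss -tlnp` output; NOT from this server.
cat > /tmp/ss_sample.txt <<'EOF'
LISTEN 0 128 0.0.0.0:8080 0.0.0.0:* users:(("python",pid=1234,fd=5))
LISTEN 0 128 0.0.0.0:8081 0.0.0.0:* users:(("python",pid=5678,fd=5))
EOF

# Whichever process appears here is the one holding 8080.
grep ':8080 ' /tmp/ss_sample.txt

# An alternative that survives reboots: run one container in bridge mode
# and remap at the docker level (e.g. host 8081 -> container 8080), so
# the port lives in the container config instead of the app settings.
```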
  12. aptalca: Thank you for the DuckDNS docker. Works perfectly.
  13. It works!! Needo needs to update his instructions. https://github.com/needo37/plexWatch :-) So the final piece of the puzzle is how to get SabNZB and PlexWatch to play nicely. Both use port 8080. EDIT: yeah, I am slow. SabNZB and Sonarr both have options to change the port to anything I want. All is good and 100% working.
  14. Hmmm, I am using bridge mode so I can remap the ports, since SabNZB uses 8080. I'll try host mode tonight with SAB turned off and see if that works.
  15. Thanks Dase. I still seem to have the issue (I did confirm plexWatch.pl is now executable). Can you check what your plexWatch user/group permissions are? This is how mine look:
     -rw-rw-rw- 1 root root    597 Apr 21 21:43 config.php
     -rw-rw-rw- 1 root root  13565 Apr 21 21:42 config.pl
     drwxrwxrwx 2 root root   4096 Apr 21 21:43 db_backups
     -rw-r--r-- 1 root root   5272 Apr 12 17:27 debug.log
     -rw-rw-rw- 1 99   users 73728 Apr 21 17:35 plexWatch.db
     -rw-r--r-- 1 root root   2096 Apr 12 17:27 plexWatch.log
     -rwxr-xr-x 1 root root 181361 Apr 12 17:26 plexWatch.pl
     Wondering if I should assign it to root (which is how everything else is set).
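The mixed ownership in the listing above (most files root:root, only plexWatch.db owned by uid 99) is the kind of mismatch that produces this symptom when the container process cannot write to its own files. A minimal sketch of inspecting permissions the same way the listing does, on a throwaway file (uids, path, and the suggested fix are assumptions, not from the thread):

```shell
# Create a throwaway file with the same mode as the db file in the listing.
d=$(mktemp -d)
touch "$d/plexWatch.db"
chmod 666 "$d/plexWatch.db"

# Same mode/owner view as `ls -l` gives.
stat -c '%A %U:%G %n' "$d/plexWatch.db"

# Unraid containers conventionally run as uid 99 (nobody) / gid 100 (users),
# so rather than assigning everything to root, the usual fix is the reverse
# (appdata path assumed):
#   chown -R 99:100 /mnt/cache/appdata/plexwatch
```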