Giggity_Grant

Everything posted by Giggity_Grant

  1. @fatgeek I also ran into this exact issue. Reverting to binhex/arch-sabnzbdvpn:3.5.3-1-03 fixed it for me as well. No clue what the cause might be, but I thought I'd let you know you're not the only one seeing these symptoms.
  2. @JorgeB I went ahead and re-wired the entire server, putting the Seagate drives on the motherboard SATA ports. I also replaced the SATA power cable that fed these drives. This seems to have resolved the problem for now. Unfortunately, I had already done a hard restart on the server by the time I saw your message, but I've attached a diagnostics report that I just pulled in case it might be helpful. tower-diagnostics-20210308-2040.zip
  3. Well, I finished the Disk 2 rebuild, but in the process of starting the Disk 3 re-rebuild, the Web GUI dropped out and cannot be reached. After the Disk 2 rebuild, I began the process of rebuilding Disk 3 (again, for the 2nd time in 5 days):
     - Stopped array
     - De-selected the HDD for Disk 3
     - Started array
     - Stopped array
     - Assigned the HDD to Disk 3
     As soon as I hit the "Start" button to start the array, the Web GUI lost all connectivity.
     Additional tests / symptoms:
     - Unraid server no longer appears as an active client on my network
     - Tried pinging the Unraid server IP. This was unsuccessful; the server could not be reached
     - Tried to SSH into the Unraid server - "connection could not be established, network is unreachable"
     - Tried reaching various -arr Docker containers via their IP address & port number in a web browser, but this was not successful. No Docker containers could be reached.
     Not sure what to do now, other than force an unsafe shutdown (cut power), reboot, and try to start the Disk 3 rebuild process again.
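     For reference, roughly what I'd still be able to check from a keyboard and monitor on the box itself before cutting power. This is only a sketch for my setup: the br0 bridge name and the 192.168.1.1 gateway address are assumptions, not values pulled from my diagnostics.
         # is the bridge still up and does it still hold its IP?
         ip addr show br0
         # can the gateway be reached from the server side?
         ping -c 3 192.168.1.1
         # write a diagnostics zip to the flash drive (/boot/logs) while the box is still responsive
         diagnostics
         # attempt a clean shutdown from the console instead of just pulling power
         powerdown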
  4. I will try moving the Seagates over to the mobo SATA ports after the rebuild completes. Unfortunately, I don't have a spare power supply. After rebuilding Disk 3, I started rebuilding Disk 2. About 10% into the Disk 2 rebuild, Disk 3 failed again. Looks like it will be a few days until I can try using the mobo SATA ports.
  5. Hey all, I'm running into an issue with my array regarding failed disks. Disks 1-4 will fail one at a time. I've occasionally had this happen with other disks, but it is almost always Disks 1-4. Typically this happens as follows:
     - A disk (Disk 3 in today's case) will randomly fail
     - I start rebuilding Disk 3
     - Disk 2 will fail during the rebuild
     - Rinse & repeat with another 1-2 disk failures
     Hardware details are in my signature, but my server has a Corsair 850W PSU, Threadripper 2950X, EVGA 2080 Super (undervolted), 10 HDDs, 3 NVMe drives, and 1 SSD. I am also using an LSI 9207-8i with the recommended firmware for Unraid. I don't think it's a power supply issue based on a power draw calculation.
     This was happening once every 1-2 months for about 6-12 months, then stopped about 6 months ago. Sometimes only one disk would fail, other times multiple disks would fail in series. So far, I've tried two different LSI 9207-8i cards (same firmware on both), reseating the SATA & power cables, and even swapping the SATA cables. None of these had an effect. Then, about 3 months after the last attempted fix, the failures randomly stopped.
     After about 6 months of no issues, the disk failures have restarted as of today. Today, I had Disk 3 fail, then during the Disk 3 rebuild, Disk 2 failed. I'm still rebuilding Disk 3 currently. Any idea what may be causing this, or any recommendations on what to do / where to look to fix it? tower-diagnostics-20210226-1534.zip
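     In case it helps anyone looking at the diagnostics, the per-disk SMART data can also be pulled manually from the console. A rough sketch only: the /dev/sdb name is a placeholder, and drives behind the LSI 9207-8i may need "-d sat" added on some setups.
         # full SMART attributes and self-test log for one of the failing disks
         smartctl -a /dev/sdb
         # kick off a short self-test, then re-check the self-test log afterwards
         smartctl -t short /dev/sdb
         smartctl -l selftest /dev/sdb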
  6. Thank you @johnnie.black!! What are your thoughts on the total TBW over one year?
  7. Also, these NVMe drives have been in service for about 1 year. They were in a RAID 0 config for about 6 months; I moved back to RAID 1 a few weeks ago. Granted, I've downloaded a lot of Linux ISOs (download and media folder cache use set to "Yes"), but 127TB of writes seems excessive. I currently only have 39.8TB total stored on my array.
  8. Hi all, I'm running into some significant cache disk IO errors, and am trying to determine if one (or both) cache drives are failing and need replacement. I rebooted my server this morning, only to find that my two cache NVMe drives (2 x Inland Premium 1TB NVMe in a RAID 1 cache pool) were not recognized at all. After rebooting again, the system was able to identify and mount both cache drives. However, after array startup, a significant number of cache disk IO errors kept appearing in the syslog. Additionally, Docker would not start. I balanced and scrubbed the cache drives (with "repair corrupted blocks" enabled) via the GUI and rebooted, which appears to have stabilized things. Everything appears to be working just fine as of now. That said, the SMART data for the two cache drives shows a high error count, and running "btrfs device stats /mnt/cache" shows a huge quantity of IO errors on one of the cache disks. I've tried to run a SMART check, but neither the quick nor the extended SMART check will run on either cache drive. Does the high number of disk IO errors indicate cache corruption, or is at least one of the cache disks failing and in need of replacement? Edit: Small update for clarity. tower-diagnostics-20200527-0935.zip tower-diagnostics-20200527-0834.zip
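     For anyone following along from the console, the GUI scrub I ran is roughly equivalent to the commands below. A sketch only: the nvme0/nvme1 device names are assumptions and may be swapped on another system.
         # per-device read/write/corruption error counters for the cache pool
         btrfs device stats /mnt/cache
         # run a scrub and print per-device statistics when it finishes
         btrfs scrub start -Bd /mnt/cache
         # overall SMART/health output for each NVMe device
         smartctl -a /dev/nvme0
         smartctl -a /dev/nvme1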
  9. Has anyone been able to host a Mac Photos library on their Unraid server with good stability? My fiancée would like to get rid of her MacBook and just move to a tablet for everyday use. She will still need to back up her iPhone photos and, if possible, would like to keep using the Mac Photos system. I don't have a ton of space on my MacBook's SSD, so I would like to avoid backing up her photos to my MacBook. In an ideal world, we would both want to host separate photo libraries on the Unraid server, with an external HDD backup and a cloud backup.
     I migrated my MacBook photo library to a user share as an experiment a few months ago. While I was able to migrate the library to the Unraid server and get my Mac to recognize the library file via SMB, I was never able to import photos from my iPhone to the server library via my Mac. I've read some posts on the forums which mention possible file system incompatibilities that could lead to corruption of the library, which is a concern. I'm not sure if this is a real issue or what the solution might be. I'm open to other ways of accomplishing the goal of course, but so far I've thought of two possibilities:
     1. Host the Photos Library file on the Unraid server, then connect my MacBook via SMB. As I mentioned, I tried this method before and had consistent issues importing photos to the library from my iPhone via my MacBook.
     2. Run an OSX VM on Unraid which would host the Photos Library. The VM OS would be on cache, but would use an array user share for storage (i.e. store the library on the array). Given that I would still be utilizing a user share for library file storage, would this present the same issues as option 1, or the same risk of file incompatibility/corruption? Would an unassigned drive (formatted with the Apple file system) need to be passed through and used for file storage instead of an array user share?
     Has anyone had any luck hosting their Mac Photos Library files with either of these methods, or any other method for that matter? Thank you in advance for your help!!!
  10. @Squid I was the person having the issues with FCP in Safari via mobile. The problem is actually still occurring in both Safari and Chrome on iOS, but I just avoid FCP until I have access to a computer.
  11. @bastl Thanks for your reply! I originally tried disabling CSM, which seemed to work - but then I ran into the same issues that other folks in this forum mentioned. After seeing your comment, I re-enabled CSM, and stubbed the pci-ids of the 1st slot card. It's back up and running at least. I'm still running through the thread and trying a few of the mentioned solutions. Is there anything in particular that I would have to add in order to get unRaid to pick up a basic GPU in the 3rd slot, or is this something that happens automatically (since the 1st slot GPU is stubbed)?
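      For context, the stubbing itself is just an extra parameter on the append line in /boot/syslinux/syslinux.cfg. A rough sketch of the relevant block; the two 10de IDs below are placeholders, and the real ones for the 1st-slot card and its audio function come from lspci -nn.
          label Unraid OS
            menu default
            kernel /bzimage
            append vfio-pci.ids=10de:1e81,10de:10f8 initrd=/bzroot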
  12. @bastl how were you able to get Unraid to utilize your 3rd-slot GPU instead of the 1st-slot GPU (1080 Ti)? I haven't been able to find a way to force Unraid to utilize my 3rd-slot GPU (GT 710) and pass through my 1st-slot GPU (2080 Super), so I just have the passthrough GPU in slot 3 (which is assigned to Node 0) and my Unraid GPU in slot 1 (assigned to Node 1).
  13. @Squid running FCP on chrome works just fine. It appears to just have issues with Safari in my instance.
  14. @Squid Just confirmed that I am running v2019.10.06 (see attached screenshot). Nothing has shown up under "...this may take a minute" on any of the attempts that I can recall. I just ran another attempt to verify, and I didn't see anything show up. I'm using Safari on iOS at the moment. I had to VPN into my network from my phone, since I can't do so from my work computer. I'll give Chrome and Firefox a shot when I get home later tonight.
  15. Running Unraid 6.7.2. I had an issue with the hanging FCP scan screen a couple of weeks ago that I was able to resolve. After updating FCP today, I am getting the hanging-screen error again. The scan never completes, and the screen has not closed for over 30 minutes. I've made multiple attempts, uninstalled/reinstalled FCP, and rebooted Unraid. So far I haven't been able to find a solution to this issue. Here is a thread where I discuss what solved the issue last time.
  16. When testing my https://subdomain.duckdns.org, it just redirects me to the Google homepage. Is anyone else having this issue? I followed the steps up until the final Nextcloud steps, then stopped, since I figured out that neither my generic subdomain nor my sonarr subdomain was functioning properly.
  17. @trurl Based on the thread, I disabled Global C-States in my BIOS settings. I didn't notice any immediate changes, but I wasn't able to edit the \\tower\flash\config\go script, as I do not have any Windows devices currently and flash only has an SMB/Linux share. What did make a difference was turning off the automatic update notifications for plugins. Performing this step dropped the wait time down to only about 3 seconds.
      I also noticed a major issue in my Unraid network settings. For some reason, I had listed the server's own IP as the IPv4 #1 DNS server and had 8.8.8.8 as the #2 DNS server. I changed these settings to list 8.8.8.8 as the only DNS server and increased my MTU to 9000 (to match the jumbo frames setting that's enabled on my network). After making those network changes, Apps load instantly and Fix Common Problems scans complete successfully.
      I'm still seeing the same number of error information log entries on the NVMe cache drive, but I'm not sure what to do about that, outside of buying an SSD and moving the cache over to it. tower-diagnostics-20190927-2035.zip
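      Side note for anyone else without a Windows machine: the flash share lives at /boot on the server itself, so the go script can be edited straight from the web terminal or SSH instead of over \\tower\flash. A sketch only; what actually gets added to the file depends on the fix from the linked thread.
          # same file as \\tower\flash\config\go, edited on the server
          nano /boot/config/go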
  18. @trurl Thank you for your reply. I'm working on implementing the fixes from the thread that you linked. If that doesn't work, would you recommend replacing the existing NVMe storage with a better-known brand (e.g., a Samsung 960 EVO) or with standard SSDs?
  19. I'm using 3 x 140mm fans in addition to the 3 x 120mm CPU radiator fans in a fairly large mid-tower case (Define R6), so there's quite a bit of airflow. Unraid is reporting that the NVMe cache drive is at 29C, so it doesn't appear to be overheating. That assumes Unraid is pulling the correct temperature, though. Thank you for sending that link over; I'll read it and implement those tweaks when I get home from work tonight. In the event that this doesn't solve the issues, would it make sense to completely wipe the OS/cache/array and start fresh, or do you have any other recommendations before moving to that? I'm not quite sure how I would perform a full wipe and fresh install, but I assume the Unraid v6 manual has information on it. If not, I'm sure there's a guide or thread on this forum that details how to do it. I apologize for my ignorance; it's my first foray into Unraid, or anything outside of Windows or macOS for that matter.
  20. @trurl I just read the "Need help? Read me first" link in your signature. I apologize for not searching out that post first. Here is my hardware info and the add-ons/plugins loaded:
      Hardware
      - Motherboard: ASRock X399 Professional Gaming
      - BIOS: American Megatrends Inc., version P3.70 (BIOS revision 5.14), release date 08/27/2019, address 0xF0000, runtime size 64 kB, ROM size 16 MB
      - BIOS characteristics: PCI supported; BIOS upgradeable; BIOS shadowing allowed; boot from CD; selectable boot; socketed BIOS ROM; EDD; 5.25"/1.2 MB, 3.5"/720 kB, and 3.5"/2.88 MB floppy services (int 13h); print screen service (int 5h); 8042 keyboard services (int 9h); serial services (int 14h); printer services (int 17h); ACPI; USB legacy; BIOS boot specification; targeted content distribution; UEFI
      - CPU: AMD Ryzen Threadripper 2950X
      - GPU: MSI GT 710 2GB D3 PCIe LP
      - PSU: Corsair 850W HXi 80+ Platinum fully modular ATX PSU
      - Memory: 4 x G.SKILL Ripjaws V Series 16GB 288-pin DDR4 3200 (PC4 25600), model F4-3200C16D-32GVK
      - NVMe: 2 x Inland Premium 1TB NVMe SSD
      - HDDs: 8 x WD Red (shucked from WD EasyStore: WD80EMAZ)
      - CPU cooler: Enermax TR4 II 360 AIO
      Plugins
      - ca.backup2.plg - 2019.03.23
      - ca.cleanup.appdata.plg - 2019.09.15
      - ca.update.applications.plg - 2019.09.09
      - community.applications.plg - 2019.09.22
      - dynamix.active.streams.plg - 2019.01.03
      - dynamix.day.night.plg - 2018.08.03i
      - dynamix.s3.sleep.plg - 2018.02.04
      - dynamix.ssd.trim.plg - 2017.04.23a
      - dynamix.system.autofan.plg - 2017.10.15
      - dynamix.system.info.plg - 2017.11.18b
      - dynamix.system.stats.plg - 2019.01.31c
      - dynamix.system.temp.plg - 2019.01.12a
      - fix.common.problems.plg - 2019.09.23
      - NerdPack.plg - 2019.01.25
      - unassigned.devices.plg - 2019.09.14
      - unbalance.plg - v2019.09.07
      - unRAIDServer.plg - 6.7.2
      Docker containers
      - duckdns
      - binhex-emby
      I've also attached the .txt file of the syslog rather than a PDF. syslog.txt
  21. You aren't kidding... 1,098 NVMe Error Information Log Entries. I see the following error types:
      - Bluetooth: hci0: failed to open Intel firmware file: intel/ibt-hw-
      - nvme nvme1: missing or invalid SUBNQN field.
      - nvme nvme0: missing or invalid SUBNQN field.
      - EDAC amd64: Node 0: DRAM ECC disabled.
      - EDAC amd64: ECC disabled in the BIOS or no ECC capability, module will not load. Either enable ECC checking or force module loading by setting 'ecc_enable_override'. (Note that use of the override may cause unknown side effects.)
      - kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
      - print_req_error: I/O error, dev nvme0n1
      - AMD-Vi: Event logged [IO_PAGE_FAULT
      - BTRFS warning (device nvme0n1p1): failed to trim
      - Tower ntpd[2524]: unable to create socket on br0 (3) for fe80::f2:c0ff:fe87:2c1a%15#123
      - Tower ntpd[2524]: failed to init interface for address fe80::f2:c0ff:fe87:2c1a%15
      - SSL connection & authorization errors
      - HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
      - Tower kernel: ccp 0000:08:00.2: psp initialization failed
      It looks like these are a combination of BIOS (ECC disabled) and OS configuration issues, right? Do you think there are any issues regarding hardware incompatibility?
      Edit: I'm using Inland Premium NVMe drives, if that makes any difference. Unraid Syslog.pdf
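      For reference, the "Error Information Log Entries" counter and the log itself can be pulled per-drive from the console. A sketch only; the device names are assumptions and nvme0/nvme1 may be swapped on another box.
          # SMART/health summary for each NVMe drive, including the error log entry count
          smartctl -a /dev/nvme0
          smartctl -a /dev/nvme1
          # dump the NVMe error information log for the drive throwing I/O errors
          smartctl -l error /dev/nvme0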
  22. Hi everyone, I've been using Unraid 6.7.2 for about a week now, and I've noticed that the web GUI takes a substantially long time to load when I select the Apps or Plugins main menu headers. Another thing I have noticed is that I cannot get "Fix Common Problems" to successfully complete a scan; it never stops scanning (even after 30+ minutes).
      - Plugins: 2-3 min load time
      - Apps: ~30 sec load time
      These times are consistent whether I am on the local network or accessing the GUI through my network VPN. Everything else on the web GUI loads instantly. I'm connected to a gigabit Ubiquiti network via Cat 7 (USG -> 8-port switch -> server), and my ISP subscription is 1 Gbps up/down.
      I'm not quite sure what to do to improve the load times. I haven't moved any files over to the server yet, as I want to get all Dockers, VMs, etc. installed and running smoothly before I do so (currently I only have a few Dockers and no VMs installed). I could do a full wipe and reinstall, but that seems like killing a fly with a sledgehammer. Does anyone have any ideas as far as where to look for a guide on this issue or how to potentially solve it? I've attached diagnostics reports for reference. tower-diagnostics-20190926-1648.zip