Everything posted by Grrrreg

  1. If anyone has a moment to poke at the diagnostics, or can point me towards a good starting point in the logs, I'd appreciate it. The logs were captured just after a hard reset of the server; it was unresponsive and I couldn't SSH in to a terminal. Cheers and thank you, Greg seine-diagnostics-20231229-2103.zip
  2. Thank you trurl and JorgeB for reviewing. It did finally stabilize at normal speeds after a while, now at 13 hours remaining. I've just never seen it take so long before. @trurl Thank you for pointing out the docker.img and libvirt.img sizes. I had about 35GB of orphaned images I hadn't paid attention to; deleting those brought docker.img down to the low 50GBs. I still need to do some additional cleanup of other duplicate/test containers. Thank you @JorgeB. I did stop several dockers thinking they might be adding to it. Next time I'll stop all but critical containers. Thank you
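     In case it helps anyone else, here's roughly how I found and cleared the orphaned images (standard Docker CLI from the Unraid terminal, nothing Unraid-specific, so adjust to your setup):
         # Show how image/container/volume space breaks down inside docker.img
         docker system df
         # List dangling images (untagged leftovers from old pulls/updates)
         docker image ls -f dangling=true
         # Remove every image not referenced by an existing container (asks to confirm)
         docker image prune -a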
  3. Hello All, I swapped an old 4TB WD Blue for a shucked WD easystore 12TB. Normally it takes between 2-3 days to complete an array rebuild or parity check. The estimated time for this rebuild is bouncing between 3 and 16 days, mostly staying towards the 16-day end of the spectrum at about 15MB/s rebuild speed. I'm not overly concerned, more curious whether there is anything I should dig into, or if it might be something harmless like a large number of small files slowing the rebuild. Dual E5-2660 v2 @ 2.20GHz [CPU utilization averaging 8%] with 64GB ECC RAM [25% used]. I've attached a fresh diagnostics download. Thank You PS. 22 data disks + 2 parity disks, 201TB array, with mixed 12TB and smaller disks. Supermicro X9DRi-LN4+/X9DR3-LN4+, Version REV:1.20A seine-diagnostics-20221016-1831.zip
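     (For anyone who finds this later, the check I know of for a single slow disk dragging things down, since a rebuild runs at the pace of the slowest member. iostat needs the sysstat package, and /dev/sdX is a placeholder:)
         # Watch per-device throughput during the rebuild; one device far
         # below the others is the likely bottleneck
         iostat -mx 5
         # Quick read-only sequential benchmark of a single suspect disk
         hdparm -t /dev/sdX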
  4. [RESOLVED] Loving the application, great progress. I'm having issues saving backups as root.
     [2022-01-24T01:59:25.274Z] ERROR: [BackupManager] Invalid user attempting to create backup
     There was one initial backup that worked; once I deleted it, I was able to start creating additional backups.
  5. ***Update*** It seems /var/log was full; after expanding /var/log and a reboot, preclear is displaying correctly. We'll see if that also corrects the issue with the preclear status once it completes.
     _____________________________________________________________________
     I precleared a drive last week, which appeared to finish without error, but it was unable to upload the stats, and afterward the drive did not show as precleared. I've updated to 6.9.2 with version 2021.03.18 of preclear. I decided to go ahead and run another preclear on the drive. Now the preview and log are acting oddly, scrolling continuously. Let me know if you'd like a diagnostics post or anything else.
         tput: unknown terminal "screen" (repeated 4x)
         unRAID Server Preclear of disk 35504A5642525646
         Cycle 1 of 1, partition start on sector 64.
         Step 1 of 4 - Zeroing in progress: (1% Done)
         ** Time elapsed: 0:11:33 | Write speed: 206 MB/s | Average speed: 208 MB/s
         Cycle elapsed time: 0:11:35 | Total elapsed time: 0:11:35
         S.M.A.R.T. Status (device type: default)
         ATTRIBUTE               INITIAL  STATUS
         Reallocated_Sector_Ct   0        -
         Power_On_Hours          142      -
         Temperature_Celsius     27       -
         Reallocated_Event_Count 0        -
         Current_Pending_Sector  0        -
         Offline_Uncorrectable   0        -
         UDMA_CRC_Error_Count    0        -
         SMART overall-health self-assessment test result: PASSED
         tput: unknown terminal "screen" (repeated 5x)
     Thanks for your time! seine-diagnostics-20210408-1852.zip
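     For anyone hitting the same scrolling, this is roughly how I confirmed /var/log was the problem (plain coreutils; /var/log on Unraid is a small tmpfs, so it fills fast):
         # Check whether the log filesystem is at 100%
         df -h /var/log
         # See which log files are consuming the space
         du -sh /var/log/* | sort -h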
  6. Hi ich777, Thank you for taking the time to review my issue. I did force the container update as suggested on the first page of the thread. It seems that after switching from the linuxserver container to binhex-plexpass, the transcoder was having permission issues. Ultimately, I deleted the transcode variable from the container and replaced it with a path variable pointing to /tmp. Hardware-accelerated encoding is now working. Thanks again for your great support and hard work on the plug-in!
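     In docker run terms, the change amounts to something like this (a sketch only: /transcode is just the container-side path I point Plex's 'Transcoder temporary directory' setting at, and the rest of the container config is unchanged):
         # Replace the old transcode variable with a plain host-path mapping,
         # then set Plex's Transcoder temporary directory to /transcode
         docker run -d --name=plex \
           -v /tmp:/transcode \
           binhex/arch-plexpass   # other ports/volumes omitted here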
  7. Good Afternoon, A thousand thanks for all the hard work on this plug-in! I just upgraded to 6.9 from 6.8.3 following the plug-in instructions: plug-in removed and re-installed, docker disabled/enabled, with multiple reboots. Hardware encoding is now broken, playback stopping after 2-3 seconds and erroring out: An unknown error occurred (4294967283). Hardware decoding is working, and I've disabled HW encoding for the time being. It was working without issue for the last year. I did remove the container and re-install it. What information can I supply, and what troubleshooting steps would you recommend? I've also tried switching to binhex's plexpass container, which has the same behavior of stopping after a few seconds. Should I post this issue somewhere else?
     repo: linuxserver/plex
     Plex Version: 1.21.4.4079
     X9DRi-LN4+/X9DR3-LN4
     2x Nvidia GP107GL [Quadro P400]
     seine-diagnostics-20210305-1319.zip
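     (In the meantime, the one check I know to run while a stream plays, to see whether the transcoder is actually landing on the GPU; nvidia-smi comes with the Nvidia driver:)
         # Refresh every second while starting a hardware-transcoded stream;
         # a Plex Transcoder process should appear under Processes if the GPU is in use
         watch -n 1 nvidia-smi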
  8. Thanks Squid, I have two replacement DIMMs on order; I just need to figure out which DIMM to replace. I haven't had any new errors logged since the original event. Once the replacements arrive, I'll run a memtest or the Supermicro offline memtest and hope they identify which DIMM to replace. I think I've figured out the SM recommended memory config and what would happen if I just tried to upgrade the memory, with regard to the reduced speed as DIMMs per channel increase, etc. I love my older server config, the number of cores, etc; overall it's worked well. Things have changed a lot since I bought my first 20MB SCSI hard drive in 1990. I think there's a lot of great hardware out there that has lots of life left in it, but at times some of us don't fully understand how best to keep it running. I really appreciate all the advice and help from the forums.
  9. Hello Everyone, I was noticing some instability and rebooted my server the other day. Today I saw the notice and instructions in Fix Common Problems to install the mcelog plugin. It looks like there are some memory errors. I've got about an hour left on the parity check, but am wondering what my next steps should be. I could definitely use some recommendations on best practice with server memory. From reading the Supermicro guide, it seems it would not be recommended to just remove the faulty module, but maybe I'm reading that wrong. I was thinking I'd shut down the array, reboot, and run a memory check from the BIOS. SEL logging was turned off in the BIOS; it's now enabled. Also, in replacing the bad DIMM, I was thinking of getting 4 new DIMMs of the same spec but larger capacity, 8 or 16GB, and replacing the other three in the same channel/rank etc. for a small upgrade. I'm still trying to make sense of all the complexity of server memory, so all advice is welcome and appreciated. Thanks in advance for your help! -Greg
     Server: UNRAID 6.8.3
     SuperMicro - SuperStorage 6047R-E1R24N
     MB: Super X9DRi-LN4F+
     Processor 1: Intel Xeon E5-2660 v2 2.2GHz 10 Core 25MB Cache
     Processor 2: Intel Xeon E5-2660 v2 2.2GHz 10 Core 25MB Cache
     Memory: 64GB (16x4GB) PC3-10600R 1333MHz DDR3 ECC
     Errors:
         Dec 15 16:05:47 Seine kernel: mce: [Hardware Error]: Machine check events logged
         Dec 15 16:05:47 Seine kernel: EDAC sbridge MC1: HANDLING MCE MEMORY ERROR
         Dec 15 16:05:47 Seine kernel: EDAC sbridge MC1: CPU 10: Machine Check Event: 0 Bank 7: 8c00004000010090
         Dec 15 16:05:47 Seine kernel: EDAC sbridge MC1: TSC d7f4672046556
         Dec 15 16:05:47 Seine kernel: EDAC sbridge MC1: ADDR bce285600
         Dec 15 16:05:47 Seine kernel: EDAC sbridge MC1: MISC 207e5286
         Dec 15 16:05:47 Seine kernel: EDAC sbridge MC1: PROCESSOR 0:306e4 TIME 1608077147 SOCKET 1 APIC 20
         Dec 15 16:05:47 Seine kernel: EDAC MC1: 1 CE memory read error on CPU_SrcID#1_Ha#0_Chan#0_DIMM#0 (channel:0 slot:0 page:0xbce285 offset:0x600 grain:32 syndrome:0x0 - area:DRAM err_code:0001:0090 socket:1 ha:0 channel_mask:1 rank:0)
         Dec 15 20:01:16 Seine kernel: mce: [Hardware Error]: Machine check events logged
         Dec 15 20:01:16 Seine kernel: EDAC sbridge MC1: HANDLING MCE MEMORY ERROR
         Dec 15 20:01:16 Seine kernel: EDAC sbridge MC1: CPU 10: Machine Check Event: 0 Bank 7: 8c00004000010090
         Dec 15 20:01:16 Seine kernel: EDAC sbridge MC1: TSC d9b8bf06f72be
         Dec 15 20:01:16 Seine kernel: EDAC sbridge MC1: ADDR bce285600
         Dec 15 20:01:16 Seine kernel: EDAC sbridge MC1: MISC 407c0086
         Dec 15 20:01:16 Seine kernel: EDAC sbridge MC1: PROCESSOR 0:306e4 TIME 1608091276 SOCKET 1 APIC 20
         Dec 15 20:01:16 Seine kernel: EDAC MC1: 1 CE memory read error on CPU_SrcID#1_Ha#0_Chan#0_DIMM#0 (channel:0 slot:0 page:0xbce285 offset:0x600 grain:32 syndrome:0x0 - area:DRAM err_code:0001:0090 socket:1 ha:0 channel_mask:1 rank:0)
     seine-diagnostics-20201216-1138.zip
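     (Those entries came straight out of the syslog; for anyone else chasing this, they can be pulled with the line below. The EDAC line's CPU_SrcID/Chan/DIMM fields map to a physical slot via the board manual:)
         # Surface machine-check / EDAC memory events from the live syslog
         grep -Ei 'mce|edac' /var/log/syslog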
  10. Hopefully this works. sdq is not part of the array. After reading the manual on my backplane, it only uses one 8087. I've got a new parity check running now, after some issues caused by trying to pass an entire USB PCIe card to my VM. Estimated time: 2 days, 9 hours, 52 minutes for the 132TB array. I had to replace one 4TB WD Red that was failing since the last post. seine_diskspeed.7z
  11. What's the best way to share the DiskSpeed results?
  12. @Benson - 2 8087 cables from the backplane to the 9211-8i; originally the SM server came with those connected to a MegaRAID 9265-8i. The 2 12TB parity drives and the other 22 data drives are all on that backplane; image of array attached along with current diagnostics. Running DiskSpeed now - it looks like the parity drive is possibly the bottleneck, though with a parity check running I don't know if that's valid. I'll run another once the parity sync ends. seine-diagnostics-20200302-1348.zip
  13. I've got a 126TB array with dual 12TB parity, a mix of WD Red and White/shucked easystores. My parity syncs are taking an average of 2 1/2 days to complete; currently it's running at about 35-55MB/s. Supermicro X9DRi-F with a SAS2 backplane connected to a single SAS9211-8i (PCIe 2.0). I have one PCIe 3.0 x16 slot available, and an additional 3.0 x8. (Two of the three x16 slots are populated with Nvidia P400 GPUs for Plex and a VM.) Would I expect much performance gain by splitting the backplane across two SAS9211-8i cards, or by upgrading to a PCIe 3.0 controller? I have a spare LSI MegaRAID 9265-8i, but I don't know if it's worth trying to flash to IT mode, as I can't find consistent information that it will work. Additional consideration: it's running about 15% CPU during parity checks with a normal docker/VM load.
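     My napkin math on the controller question, for anyone who wants to check my assumptions (theoretical link maxima, ignoring protocol overhead):
         # SAS9211-8i in a PCIe 2.0 x8 slot: ~4 GB/s raw host bandwidth
         echo $(( 4000 / 24 ))   # ~166 MB/s ceiling per drive with 24 drives reading
         # A similar HBA on PCIe 3.0 x8 (e.g. a 9207-8i): ~7.9 GB/s
         echo $(( 7900 / 24 ))   # ~329 MB/s ceiling per drive
     At the 35-55MB/s I'm actually seeing, 24 drives only add up to roughly 1.3 GB/s, well under the 2.0 x8 ceiling, so maybe the link isn't my bottleneck at all.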
  14. Thanks, using USB 2.0 for the flash. It seems stable now. I noticed one of my unassigned devices, an EVO SSD, seems to have dropped offline. I'll try to get into the server and check cables tonight; it's attached directly to the mobo.
  15. The attached diagnostics is post-boot, with shares visible again. I'll grab a new copy of diagnostics if/when they disappear again.
  16. ***Update: The system's been stable since removing a failing unassigned-devices drive. Thanks for the responses and aid.*** My shares have disappeared. I uninstalled the Network Statistics plugin, rebooted, and my shares were back, but after less than an hour they've disappeared again. I'm rebooting now. What logs should I be looking at to start troubleshooting this? Thanks, Greg seine-diagnostics-20200224-1302.zip
  17. I'm having the same issue with it hanging at the download of https://lsio.ams3.digitaloceanspaces.com/unraid-nvidia/6-8-2/nvidia/bzimage. I've not had issues before with Pi-hole enabled; I tried with it disabled and it's still hanging at the download.
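     (One quick way to check whether the file is reachable at all outside the plug-in, using the same URL:)
         # Fetch the stalled image and discard it; tests reachability and throughput
         wget -O /dev/null https://lsio.ams3.digitaloceanspaces.com/unraid-nvidia/6-8-2/nvidia/bzimage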
  18. That's the disk: the WD 4TB that was disabled for errors. It's been pulled and is rebuilding onto a replacement drive, a WD 12TB white. Should I wait for the rebuild to finish and then check the file system on the rebuilt drive? It's currently estimating 2 days to finish the parity sync/rebuild. Or should I stop the rebuild, re-install the original drive, check/repair the file system, sync parity, and swap the drive after that? So much to learn.... Thanks!
  19. Thanks for all your help, I'll mark this as solved. Now that the array is up, I've started a separate post about some empty shares; I'm thinking the file systems are corrupt. Thanks again!
  20. Hi, My USB died yesterday. I had to manually create a new USB and transfer the license. The first parity sync finished, and I've replaced a failing drive; a new parity sync/rebuild is in process. I started to redownload docker containers, and I noticed my Downloads share appears empty, but when I use Krusader to try to create the missing subfolders, it says they exist. Upon further checking, there are other shares that appear to be empty now. I've attached the current syslog file and diagnostic logs. I think I need to check the XFS file system on all the data disks as my next step, but the array is currently parity syncing and rebuilding the replacement disk. I'm assuming I need to wait for the parity sync and rebuild to finish, then stop and start the array in maintenance mode and check each disk individually, as sketched below. Does that sound correct? Any and all help appreciated! Also, any advice on the best way to re-install containers and other features after rebuilding without a valid flash config backup. Thank you, and hope everyone is having a good weekend. -Greg seine-syslog-20200112-2205.zip seine-diagnostics-20200112-1417.zip
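     (The per-disk check I have in mind, run with the array started in Maintenance mode; -n is check-only, and the md numbers follow Unraid's disk numbering:)
         # Read-only XFS check of data disk 1; repeat for md2, md3, ...
         xfs_repair -n /dev/md1
         # If problems are reported, re-run without -n to actually repair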
  21. Hi, My USB died - it's corrupt and shows as read-only in Windows when I try to run a chkdsk. The last offline backup I can find is from 12/19/2019, and I have since added drives to the array. Suggestions on next steps? Any and all help greatly appreciated. 22 drive array, 2 parity disks, 1 SSD cache drive. P.S. I had planned on setting up automated offline backups for it this weekend. C'est la vie. Thanks, Greg
  22. I love how easy it has been to transition all my old drives into one new array that I can check from any browser in the house. The interface is user friendly, as is the community support. This first build has been very educational. I would love to see something like Disk Location or Server Layout that could automatically populate the layout from backplanes from manufacturers such as Supermicro.
  23. Hi Folks, I'm a long-time datahoarder and total UnRaid noob, working on completing my migration to UnRaid 6.8. If all goes to plan, I should have about 124TB in the array with dual parity once completed. I'm migrating from a ragtag collection of external enclosures running under DriveBender in Windows 10 to a proper 24-bay SuperMicro server. I'm extremely happy with the decision to move to UnRaid, and wanted to shout out a general thanks to the community for all the great posts. You've answered just about every question I've had at some point somewhere in the forums. A special thanks to SpaceInvaderOne for his wonderful videos and content. I'm looking forward to opportunities to give back to the community. Regards, Greg