Everything posted by blurb2m
-
Hey @ich777, have you given this any further look lately?
-
Done and will update when it happens again. Thanks.
-
Hey everyone. Until 6.9, I could easily go 90 days or more without a hiccup or cause for a reboot. Since upgrading to 6.9, and now onto 6.9.2, it lasts anywhere from 24 hours to a week before becoming completely unresponsive to the web UI, SSH, etc., and requiring a reboot via the hardware button. I usually forget my diagnostics, but I'm including them this time from the start! All help appreciated. Thanks. tower-diagnostics-20210412-0742.zip
-
I haven't. I don't think I realized that it was a plugin; I must have had it for a while. Uninstalled it and trying to run now. That did it! It is now moving off cachepooltwo. Awesome, thanks for the help!
-
tower-diagnostics-20210330-2304.zip
-
I have a cache pool set up with 2 SSDs for docker appdata, etc., and that one is working perfectly. I set up a second pool, "Cachepooltwo", and configured a few shares to use it, but the data does not move off the cache onto the array when I invoke the mover. These are the shares I have set up:

all_media - Use cache pool: "Yes" - Select cache pool: "Cachepooltwo" - Included disk(s): "All"
downloads - same settings as above

Cachepooltwo is a 2TB NVMe drive and it is sitting at 1.5TB full of all_media and downloads folders. Downloads write here, and completed downloads are then transferred to Plex's all_media folder on the cache drive. Kicking off the mover never shows any disk activity on Cachepooltwo. I thought I remembered this working soon after upgrading to 6.9, but I'm not sure if it has since 6.9.1. Any help would be greatly appreciated.
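In case it helps narrow things down, those GUI settings should correspond to entries like the following in the share config files on the flash drive (e.g. /boot/config/shares/all_media.cfg). The key names here are my assumption from Unraid 6.9 conventions, so verify against your own files; a pool name in shareCachePool that doesn't exactly match the pool's name would be one thing to rule out:

```
# /boot/config/shares/all_media.cfg (illustrative fragment, key names assumed)
shareUseCache="yes"
shareCachePool="cachepooltwo"
```

The mover can also be kicked off manually from a terminal (the script is /usr/local/sbin/mover on 6.9, if memory serves) while tailing /var/log/syslog to see whether it even looks at the second pool.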
-
Updated to the latest this morning and I cannot load the history page. Getting these errors in the container logs:

2020-03-09 09:24:00 - ERROR :: CP Server Thread-13 : WebUI :: /home : Uncaught ReferenceError: page is not defined. (home:1016)
2020-03-09 09:24:00 - ERROR :: CP Server Thread-13 : WebUI :: /home : Uncaught ReferenceError: page is not defined. (home:1016)
2020-03-09 09:24:02 - ERROR :: CP Server Thread-13 : WebUI :: /history : Uncaught ReferenceError: page is not defined. (history_table.js:84)
-
I'm sorry, but I don't follow? I'm stable at 1.35V and 3066 MHz. I think if I wanted to try and achieve 3200, I would need to learn more about overclocking RAM.
-
I have been slowly increasing the clock speed and got up to 3066 stable on the 4 sticks. If I try going up to 3200, it freezes up on me after about 10 seconds into unRAID boot. I'll stick with 3066 for the time being until I can learn more about clocking these monsters.
-
I might try messing with the timings tomorrow to see if I can get higher speeds. It's back at 4 sticks @ 2133 for the time being. I was going by this B-Die list for TR builds: https://benzhaomin.github.io/bdiefinder/ This is the thread that led me there: https://www.reddit.com/r/Amd/comments/8clf15/bdie_finder/dxgd1d9/
-
Changed the Load XMP setting to XMP 2.0 Profile 1 (3200) and voltage to 1.35V (the voltage auto-set when I changed to XMP 2.0 Profile 1). Unraid froze about halfway through bootup, stopping at:

sev command 0x4 timed out, disabling PSP
SEV: failed to get status. Error: 0x0

So I think when I changed to Auto the other day, it had not yet updated to show 2133 and 1.20V. It has been stable for a day at those lower clock settings. Not sure what else to change to get it to run 4 sticks at 3200. What my limited knowledge has gathered so far:

2 sticks @ 3200 and 1.35V = stable
4 sticks @ 3200 and 1.35V = unstable
4 sticks @ 2133 and 1.20V = stable
-
No I'm running 4 sticks. Let me take some new pictures from BIOS. That second picture might have been prior to switching profiles now that I think about it.
-
Would being quad channel make it report 2133? Should I move 2nd set to different slots?
-
BLUF: How do I read dmidecode output to see if my RAM is running at full clock speed? (output of dmidecode --type 17 attached)

The server has been running smoothly with 2x16GB sticks of G.Skill Ripjaws (F4-3200C15D-32GVR) in dual channel mode with XMP Profile 1.
ASRock X399 Taichi | TR4 2950X

A Windows VM was eating up a ton of RAM for the Blue Iris NVR, so I decided to buy 2 more sticks. Popped them into the slots according to the X399 Taichi motherboard chart (D2, C2, B2, A2). The BIOS is showing Quad Channel now, and it was freezing like crazy at POST, so I went in and switched the memory profile to "Auto"; that seems stable and shows DDR4-3200 for the speed. Does the attached information (txt) support what the BIOS is reporting? Is there a better setting I should be using? Any help is greatly appreciated! tr4-ram-dmidecode.txt
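For anyone reading the dmidecode output: the two fields to compare for each "Memory Device" are "Speed" (the module's rated maximum) and "Configured Memory Speed" (what the board actually trained it to). Field names vary a little between dmidecode versions (older releases print "Configured Clock Speed" and MHz instead of MT/s). A small sketch with illustrative sample output, not the attached file:

```python
# Parse `dmidecode --type 17` text and report per-DIMM configured speeds.
# The sample below is illustrative; on a live system capture the real thing with:
#   dmidecode --type 17 > dimms.txt
sample = """\
Memory Device
        Size: 16 GB
        Locator: DIMM 0
        Speed: 3200 MT/s
        Configured Memory Speed: 2133 MT/s

Memory Device
        Size: 16 GB
        Locator: DIMM 1
        Speed: 3200 MT/s
        Configured Memory Speed: 2133 MT/s
"""

def dimm_speeds(text):
    """Return (locator, rated speed, configured speed) for each DIMM block."""
    dimms, current = [], {}
    for line in text.splitlines():
        line = line.strip()
        if line == "Memory Device":      # each DIMM starts a new block
            current = {}
            dimms.append(current)
        elif ": " in line:               # "Key: Value" detail lines
            key, _, val = line.partition(": ")
            current[key] = val
    return [(d.get("Locator"), d.get("Speed"), d.get("Configured Memory Speed"))
            for d in dimms]

for loc, rated, configured in dimm_speeds(sample):
    print(f"{loc}: rated {rated}, running at {configured}")
```

If "Configured Memory Speed" reads 2133 while "Speed" reads 3200, the modules are rated for 3200 but the board is running them at JEDEC defaults, which matches the "Auto profile = stable at lower clocks" behavior described above.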
-
I think I was on P20 20.00.07(?). It was 0.5 - 2.9% into a disk rebuild that it would kill the disk and disable it. A couple of hours later, the other 3 disks on that SAS port would get bazillions of read/write errors; they just stopped communicating through the card. Since moving to the 9207-8i, it has been flawless. Works out of the box with no flashing and uses PCIe 3.0 instead of 2.0 (not that it matters bandwidth-wise, but it seems more compatible with my X399 Taichi).
-
Seems stable after changing the setting in the BIOS.
-
I knowwww I should start a new thread... but @johnnie.black, thoughts on the Fix Common Problems suggestion of: "You have a Ryzen CPU, but Zenstates not installed"?
-
Mounted! You sir are a round candy with a hole in the middle.
-
Phase 1 - find and verify superblock...
        - block cache size set to 1512192 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 2356505 tail block 2356499
ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
sb_fdblocks 489295541, counted 490033385
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 1
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (2:2356527) is ahead of log (1:2).
Format log to cycle 5.

        XFS_REPAIR Summary    Wed Sep 26 11:52:28 2018

Phase           Start           End             Duration
Phase 1:        09/26 11:48:25  09/26 11:48:25
Phase 2:        09/26 11:48:25  09/26 11:49:42  1 minute, 17 seconds
Phase 3:        09/26 11:49:42  09/26 11:49:46  4 seconds
Phase 4:        09/26 11:49:46  09/26 11:49:46
Phase 5:        09/26 11:49:46  09/26 11:49:46
Phase 6:        09/26 11:49:46  09/26 11:49:50  4 seconds
Phase 7:        09/26 11:49:50  09/26 11:49:50

Total run time: 1 minute, 25 seconds
done
-
@johnnie.black So, system has been stable. Just updated again to 6.6.0 and everything seems ok except for Disk4 status on "Main" shows 'Unmountable: No file system'. This is after the successful rebuild. Thoughts?
-
Happy to report, 7.5 hours later, that it completed with zero errors! Disk4 is fully operational and it's the old 3TB disk. The speed picked up a bit and went over 100MB/s, but averaged around 80 - 85MB/s. Not sure why History shows over 280MB/s... (I think that would have finished in 3 hours).
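As a quick sanity check on those numbers (a sketch assuming the 3TB capacity from the post and decimal units, 1 TB = 1,000,000 MB):

```python
# Rough rebuild-time math: time = capacity / sustained speed.
# Capacity in TB (decimal), speed in MB/s.
def rebuild_hours(capacity_tb, speed_mb_s):
    return capacity_tb * 1_000_000 / speed_mb_s / 3600

print(f"{rebuild_hours(3, 280):.1f} h at 280 MB/s")  # the History figure
print(f"{rebuild_hours(3, 85):.1f} h at 85 MB/s")    # the quoted average

# Implied average speed from the actual 7.5-hour run:
print(f"{3_000_000 / (7.5 * 3600):.0f} MB/s")
```

So 280MB/s really would have been roughly a 3-hour rebuild, while the actual 7.5-hour run works out to an average near 110MB/s, a bit above the 80 - 85MB/s eyeballed from the dashboard.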
-
@johnnie.black got the new HBA card in and it is rebuilding. I did change the PCIe Mode to Auto instead of Gen2 (options are Auto, Gen1, Gen2) in mobo BIOS since this card is PCIe 3.0. Rebuilding at 2.9 - 41MB/sec though.... (avg about 30MB/sec)
-
Sorry to hear that but glad you are stable again. I'm really hoping that it is just the HBA card. New one comes Tuesday and I was still running into issues rolling back to 6.5.3.
-
@johnnie.black thanks for all the help. I'll update Tuesday when the LSI Logic SAS 9207-8i Storage Controller LSI00301 comes in.
-
Which is the better solution: stay down two drives until a new one can come in, or run with one array drive down and one cache drive? I only have 8 mobo ports until the new one can arrive.