minuzle

Everything posted by minuzle

  1. Thank you for the help. I now have a drive rebuilding that is mountable. Very much appreciated.
  2. Ok, I'll do that. I did just reboot about 45 minutes ago, but I'll do it again and post new diagnostics. One other thing to note: my last few monthly parity checks have resulted in corrected errors. I assume that is because disk11 and disk19 are starting to produce errors, which was the reason I purchased all these drives originally. Thanks
  3. I have also since downgraded to 6.11.1 and the issue persisted.
  4. I rebooted before I saw your msg; I do have diagnostics from last night I can post. Looking through dmesg I found this, so now I'm leaning toward maybe having some bad RAM: Fix Common Problems is logging machine check errors.

      [ 850.788000] XFS (md9): Mounting V5 Filesystem
      [ 851.497628] XFS (md9): Metadata CRC error detected at xfs_agi_read_verify+0x85/0xfa [xfs], xfs_agi block 0x1b4a4cc3a
      [ 851.498140] XFS (md9): Unmount and run xfs_repair
      [ 851.498575] XFS (md9): First 128 bytes of corrupted metadata buffer:
      [ 851.499072] 00000000: 58 41 47 49 00 00 00 01 00 00 00 05 0d 26 27 2c  XAGI.........&',
      [ 851.499534] 00000010: 00 00 00 40 00 00 00 01 00 00 00 01 00 00 00 1c  ...@............
      [ 851.500074] 00000020: 00 00 00 60 ff ff ff ff ff ff ff ff ff ff ff ff  ...`............
      [ 851.500542] 00000030: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff  ................
      [ 851.501072] 00000040: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff  ................
      [ 851.501506] 00000050: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff  ................
      [ 851.502016] 00000060: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff  ................
      [ 851.502472] 00000070: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff  ................
      [ 851.502999] XFS (md9): metadata I/O error in "xfs_read_agi+0xce/0x120 [xfs]" at daddr 0x1b4a4cc3a len 1 error 74
      [ 851.503586] XFS (md9): Error -117 reserving per-AG metadata reserve pool.
      [ 851.503593] XFS (md9): Corruption of in-memory data (0x8) detected at xfs_fs_reserve_ag_blocks+0xa7/0xb8 [xfs] (fs/xfs/xfs_fsops.c:569). Shutting down filesystem.
      [ 851.504641] XFS (md9): Please unmount the filesystem and rectify the problem(s)
      [ 851.505289] XFS (md9): Ending clean mount
      [ 851.505312] XFS (md9): Error -5 reserving per-AG metadata reserve pool.

    mediasrv-diagnostics-20221108-1618.zip
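    For anyone landing here with the same "Unmount and run xfs_repair" message: a minimal sketch of that repair step, assuming the array is started in maintenance mode so md9 is unmounted. The device name comes from the log above; the flags are standard xfs_repair options, not advice specific to this disk.

      # Dry run: report problems without modifying the filesystem
      xfs_repair -n /dev/md9

      # If the dry-run output looks sane, run the actual repair
      xfs_repair /dev/md9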
  5. If I use Unassigned Devices to format it as anything but XFS I can mount it; if I select XFS it fails to mount.

      Nov 9 07:03:17 MediaSrv kernel: XFS (sdw1): Corruption warning: Metadata has LSN (1:915971) ahead of current LSN (1:915854). Please unmount and run xfs_repair (>= v4.3) to resolve.
      Nov 9 07:03:17 MediaSrv kernel: XFS (sdw1): log mount/recovery failed: error -22
      Nov 9 07:03:17 MediaSrv kernel: XFS (sdw1): log mount failed
      Nov 9 07:03:17 MediaSrv unassigned.devices: Mount of 'sdw1' failed: 'mount: /mnt/disks/VGK4J81K: wrong fs type, bad option, bad superblock on /dev/sdw1, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call.'

    I did try running xfs_repair, but it just seems to run forever looking for a secondary superblock.
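    The LSN warning above is the situation xfs_repair's log-zeroing option exists for. A minimal sketch against the partition named in the log, assuming it stays unmounted while repairing (zeroing the log can discard the newest transactions, so it is a last resort, not a guaranteed fix for this disk):

      # The kernel asks for xfs_repair >= v4.3; check the version first
      xfs_repair -V

      # Zero the dirty log if replay is what blocks the mount, then repair
      xfs_repair -L /dev/sdw1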
  6. I am trying to upgrade 4TB drives to 8TB drives; I purchased 8. Two were done on 6.11.1 and the others were all tried on 6.11.2. I'm now on 6.11.3 and they all still show the same error when trying to rebuild, even after I've put my original drive in, created a new config, validated parity, etc. The original drives mount just fine; as soon as I put one of these other drives in, I get the issue. I may just go back down to 6.11.1 and see what happens.
  7. It appears I still have to preclear all these disks as they come up unmountable even on 6.11.3
  8. After days of fighting this issue while trying to upgrade 8 different drives, I came across this thread and noticed 6.11.3 was released today. I can now stop banging my head on my desk!

      Version 6.11.3 2022-11-08
      This release is focused on bug fixes and minor improvements. In particular, we need to revert a base library due to a bug which prevents formatting devices >2TB in size.
      Management
      Reverted 'libpopt.so.0.0.1' to workaround 'sgdisk' bug used to format devices larger than 2TB.
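    For context on the bug being reverted: drives over 2TB need a GPT partition table, and per the release note above Unraid writes it with sgdisk when formatting. A rough illustration of that kind of call, using a hypothetical device path (Unraid's exact partition layout and alignment may differ):

      # Create a fresh GPT, add one partition spanning the disk, print the result
      sgdisk -o /dev/sdX
      sgdisk -n 1:0:0 /dev/sdX
      sgdisk -p /dev/sdX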
  9. I'm no specialist, but the bus speed is faster on the Xeon, plus more cores/threads and more memory bandwidth (quad channel vs dual channel). If you added a second CPU I would imagine it would be better overall. I just went to dual E5-2690 v2s and I'm completely happy with the performance.
  10. For anyone wondering, I did get 2 of these installed on my board. Very tight fit, but overall no change in temperature and extremely quiet. Board: Supermicro X9DRL-iF
  11. Man these are awesome! I wish I knew more about css and js to be able to make my own.
  12. I just purchased this a couple weeks ago and I couldn't be happier with the performance. I did purchase different RAM though, as I wanted more than 16GB. Supermicro X9DRL-iF ATX, 2x E5-2690 V2, 16GB, IPMI, 3x PCIe 3.0 x8, w/ Cooler Fan
  13. So I have a dual-CPU E5-2690 V2 setup running Supermicro SNK-P0048AP4 CPU coolers. I must say they work fabulously, but they are a little noisy for my taste. Has anyone tried the Noctua NH-D9DX i4? I'm skeptical, as its max fan speed is 3000 RPM, while the fans in my current setup run at 5250 RPM with the IPMI Fan Speed setting set to Optimal. The Supermicro fan is 60mm and the Noctua is 92mm. Thank you in advance, minuzle
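    For anyone comparing coolers, fan RPMs like the ones quoted above can be read from the BMC with ipmitool; a small sketch, assuming local access and that the board reports its fans as standard IPMI fan sensors:

      # List all fan sensors reported by the BMC
      ipmitool sdr type Fan

      # Or filter the full sensor dump
      ipmitool sensor | grep -i fan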
  14. Ram question

    Thanks for the help! I will probably just use the 4 I just purchased for now, then. I would have purchased more, but he only had 4 available. Do you feel like 2 per CPU will be enough?
  15. Ram question

    It is 2x 8GB. I did just order some 1866 off eBay (4x 8GB). I'm not sure what the clock is on the RAM coming with the board, as the listing description didn't say; I was more in it for the motherboard and CPUs. My guess is that if I mix the RAM it will be throttled down to whatever the slowest speed is.
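    One way to confirm what mixed DIMMs actually clock down to once installed is dmidecode; a quick sketch (the field names vary slightly between versions):

      # Show size, rated speed, and configured speed for each DIMM slot
      dmidecode --type memory | grep -iE 'size|speed|part number'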
  16. Ram question

    I just purchased a used Supermicro X9DRL-iF that came with two E5-2690 v2's. It has 16GB of RAM (they didn't say the speed; I assume DDR3 1866). I have a 20-drive array with 2x 512GB SSDs in a cache pool. Will 16GB be enough, or should I look at adding some more? I don't run any VMs, though I do have a few dockers (medusa, pihole, plex, sab, tautulli). Thank you in advance, minuzle
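    A quick way to sanity-check whether 16GB holds up once everything is running is to look at actual usage; a small sketch using standard tools rather than anything Unraid-specific:

      # Overall memory use on the host
      free -h

      # Per-container memory use for the dockers listed above
      docker stats --no-stream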
  17. I have an older MSI board with the Z170A chipset and an i5-6600k CPU that I want to replace in my server. I have 20 drives in my server currently and am looking to upgrade from my older Opteron 2346 HE setup running 3x AOC-SASLP-MV8. Would those Dell cards you listed work? The reason for my upgrade: if someone is watching a 4K stream from Plex remotely, I have the PMS dumb it down a bit so that there aren't buffering issues, but it almost won't play at all, so I assume the CPU is the bottleneck, since I have gigabit internet. Streaming to a 4K TV remotely works fine because there is no transcoding happening. I assume going from SATA 3Gbps to 6Gbps and a faster CPU would help, if not fix, the problem? I've searched eBay for 3 days trying to find another server board and CPU combo, but tbh I have no idea what I'm looking at.
  18. Well, I'm not sure what changed, but I tried changing the Extra Parameters to what yours says, because mine defaulted to --cap-add=NET_ADMIN. It didn't work, so I upgraded from rc2 to rc3 on 6.5.1, did a clean boot, and when I went to log in to the PiHole admin page it listed domains in the blocklist. I'm not sure what it was, as I rebooted my server many times yesterday, but I appreciate your effort. Thanks
  19. I get connection refused every time I run the block lists update. I'm running Unraid 6.5 and have PiHole running on its own IP. I've pointed my DHCP server's DNS at the PiHole IP. Everything seems to be functioning except it isn't blocking anything. Thoughts?
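    For comparison with the Unraid template settings, a plain docker-run equivalent of the setup described above, i.e. Pi-hole on its own IP on a custom br0/macvlan network. The network name, address, timezone, and password are placeholders rather than values from this post, and the environment variables follow the Pi-hole image docs of that era; --cap-add=NET_ADMIN is the default Extra Parameter mentioned in the neighbouring post.

      # Assumes a macvlan network named br0 (Unraid's custom bridge) already exists
      docker run -d --name pihole \
        --network br0 --ip 192.168.1.53 \
        --cap-add=NET_ADMIN \
        -e TZ=America/Chicago \
        -e WEBPASSWORD=changeme \
        pihole/pihole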
  20. This is what it says now: I've included the diagnostics if that will help. Thanks, minuzle mediasrv-diagnostics-20170204-1354.zip
  21. Ok, my parity check ends in about 45 minutes. I'll upgrade at that point and see if the error is still there. Thanks minuzle
  22. 6.2.4 currently but I will upgrade to 6.3 later today.
  23. I've recently moved all my drives into a server case. I noticed that I'm getting an error on the Docker page under "Docker Images". I've tried to search for an answer but I haven't had any luck. Any ideas? Thanks, minuzle
  24. I ended up just using Pydio