Homerr

Members
  • Posts

    75

  1. As an update, I got a "Genuine LSI 9207-8e SAS HBA 6Gbps PCI-E 3.0 P20 IT mode for ZFS FreeNAS unRAID" from ArtofServer. The upgraded fans are also great, and this has all been running for a few months now without issue.
  2. Update - So, I guess I was just looking for a JBOD case. It took a bit more looking at what was out there, and from browsing Supermicro manuals it looked possible to do what I wanted. I ended up getting a "SuperMicro CSE-826 12x LFF 2U Barebone W/ Trays 2X PWS-920P-SQ" off of eBay. It came with a SAS826A backplane, and per the video above on Supermicro backplanes I also got a BPN-SAS2-826EL1, and that is now in the server instead. It took a couple of weeks to get the CSE-PTJBOD-CB2 power board. The case came with a couple of SFF-8087 cables; those go to a "CableDeconn Dual Mini SAS SFF-8088 to SAS36P SFF-8087 Adapter in PCI Bracket" (Amazon) and then a 2m SFF-8088 to SFF-8088 cable. I also got a Supermicro MCP-290-00053-0N rail kit for my rack. I have fired up the case and the 3 case fans absolutely HOWL. The power supply is actually very quiet, as advertised. I have ordered a FAN-0104L4 replacement fan, which seems to be the go-to replacement; if I like it I'll order 2 more to quiet the case down. So, I built a JBOD case. I had an Adaptec card I had hoped to use, but it just errors and won't boot to its BIOS when in the ASRock X370M Pro4 in the Norco. It was in the x16 (x4 electrical) slot that is highlighted in the picture below; I removed the ASUS video card in that pic. What I could still use help on (from my last post) is picking out the HBA card - 3. The HBA itself, with at least one SFF-8088 external connector. Sounds like it needs to be at least PCIe 2.0, SAS2008. (Could this be a newer PCIe 3.0 spec x8 card instead? Would it just run at the slower 16 Gbit/s speed? Thinking a couple years down the road if I replace the motherboard/CPU.) Any suggestions for a card? I do like Art of Server and would buy something from him on eBay, but am open to other sources. I just would like an opinion or two on what to get.
CSE-826 case: the CSE-826 on top of my Norco RPC-4020 (excuse the dust, I cleaned it later). Inside the Norco case, ASUS video card removed; this is the x16 (x4) slot.
  3. Based on what you all have said I did some revised searching and think I've found more of what I'm looking for - a Supermicro 826 chassis (12 bays) with a BPN-SAS2-826EL1 or BPN-SAS3-826EL1 backplane. There may be other chassis, but this one seems fairly common, parts are available, and they are on eBay for around $250-300. Art of Server has done a nice explainer on this. Using the second video above, it seems a single 8087 cable would be enough using his 2-5-9 rule with 3.5" SATA drives. With a SAS2 backplane, 12 × 2 Gbit/s = 24 Gbit/s. (At the 15 min mark in the 2nd video.) Getting from a PCIe HBA to my motherboard - my motherboard has one PCI Express 2.0 x16 slot (PCIE3: x4 mode) available, so x4 electrical. (I'm using the primary PCI Express 3.0 x16 slot (PCIE2: x16 mode) for my LSI 9201-16i card for the array in the Norco RPC-4020 chassis. Motherboard is an ASRock X370M Pro4 with a Ryzen 5 3600 CPU.) Continuing with his math (15:45 in the video): PCIe 2.0 = 500 MB/s per lane × 4 lanes = 2,000 MB/s, or ×8 to get gigabits = 16 Gbit/s total bandwidth. This is a bottleneck, but I'll also never have 12 drives going at max other than parity checks, so for Plex this seems like it should be fine. Does this sound okay? So, to not make this too hacky, it seems I would need, in order, starting from the backplane - 1. An SFF-8087 cable to a ?? (unknown) rear-panel PCI slot connector. (Is this cable directional?) 2. The ?? PCI slot connector cable to the HBA in the unRAID server. (I would prefer this intermediate bit so I can unplug the SM 826 case for maintenance, instead of just a long 8087 cable from the backplane to the HBA.) 3. The HBA itself, with at least one external connector. Sounds like it needs to be at least PCIe 2.0, SAS2008. (Could this be a newer PCIe 3.0 spec x8 card instead? Would it just run at the slower 16 Gbit/s speed? Thinking a couple years down the road if I replace the motherboard/CPU.)
If this all looks correct so far, can anyone help me with part numbers/links for the 3 parts listed above?
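The bandwidth math in the post above can be sketched as a quick back-of-envelope check. The figures are the post's own assumptions (500 MB/s per PCIe 2.0 lane, roughly 2 Gbit/s of real throughput per spinning 3.5" drive from the 2-5-9 rule of thumb):

```python
# Back-of-envelope check: will a PCIe 2.0 x4 HBA bottleneck 12 HDDs?
PCIE2_MBPS_PER_LANE = 500      # MB/s per PCIe 2.0 lane (post's figure)
LANES = 4                      # x16 physical slot, x4 electrical

hba_mbps = PCIE2_MBPS_PER_LANE * LANES    # 2,000 MB/s
hba_gbit = hba_mbps * 8 / 1000            # 16 Gbit/s

DRIVES = 12
GBIT_PER_DRIVE = 2             # assumed realistic HDD streaming rate

drives_gbit = DRIVES * GBIT_PER_DRIVE     # 24 Gbit/s peak from the backplane

print(f"HBA link: {hba_gbit:.0f} Gbit/s, drives peak: {drives_gbit} Gbit/s")
print("HBA is the bottleneck" if drives_gbit > hba_gbit else "backplane-limited")
```

Which matches the post's conclusion: 24 Gbit/s of drives behind a 16 Gbit/s link is only a bottleneck when everything spins at once, e.g. during a parity check.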
  4. I searched a bit here and on the web and don't quite see if this is a thing - a SAS expander case with a backplane. Maybe I'm using the wrong terminology for my searches. I'm looking to expand my server with more bays - 20 of 20 are full now. I have an old Norco RPC-4020 which is in a rack and I'm happy with it in general. I have an x16 slot open (x4 electrical) and am wondering if there is a setup with an expander card and a cable to a second case (rackmount if possible) with 8 +/- drive bays (hot swap/front accessible a plus). A sub-question would be: can x4 electrical on the PCIe slot support 4 or 8 drives? Plex is the main use, 98% one user connected. I have a mix of 10/12/16tb drives with 2x16tb parity drives, but would like more storage space without ditching the 10tb drives. Does this exist, and can it be done for hundreds of $ instead of thousands $$$?
  5. I just upgraded my two parity drives to Toshiba MG08's and did some testing. I previously had the parity and cache on the onboard SATA ports. I tried other combos and now have everything on a single cable (channel) on the 9201. I was surprised that the staggered arrangement wasn't the quickest. I took the rates at 10 minutes in since it seemed like the system had settled into a steady state more or less by then. Motherboard is an ASRock X370M Pro4 and cpu is a Ryzen 5 3600.
  6. Thanks Squid! I ran it with no errors, restarted in normal array mode, and everything looks to be as it should.
  7. Disk is XFS. I restarted in Maintenance mode and ran the check on the disk; result below. I'm at step 2 of Repairing a File System - "Add any additional parameters to the Options field required that are suggested from the check phase. If not sure then ask in the forum. The Help built into the GUI can provide guidance on what options might be applicable." Is it recommended to add any parameters? (Doing this via the GUI, btw.) Step 4 notes a -L option.

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
block (7,74328029-74353290) multiply claimed by cnt space tree, state - 2
block (7,74169449-74170249) multiply claimed by cnt space tree, state - 2
block (7,74169449-74170249) multiply claimed by cnt space tree, state - 2
invalid start block 0 in record 51 of cnt btree block 7/2
invalid start block 0 in record 52 of cnt btree block 7/2
invalid start block 0 in record 53 of cnt btree block 7/2
invalid start block 0 in record 54 of cnt btree block 7/2
invalid start block 0 in record 55 of cnt btree block 7/2
invalid start block 0 in record 56 of cnt btree block 7/2
invalid start block 0 in record 57 of cnt btree block 7/2
invalid start block 0 in record 58 of cnt btree block 7/2
invalid start block 0 in record 59 of cnt btree block 7/2
agf_freeblks 207015525, counted 194421230 in ag 7
agi unlinked bucket 39 is 130200423 in ag 7 (inode=15162585959)
sb_ifree 8809, counted 8862
sb_fdblocks 576670704, counted 604659013
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
data fork in ino 15040730579 claims free block 1953203817
data fork in ino 15162585949 claims free block 1953077194
        - agno = 8
        - agno = 9
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 6
        - agno = 1
        - agno = 7
        - agno = 5
        - agno = 4
        - agno = 8
        - agno = 9
        - agno = 2
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 15162585959, would move to lost+found
Phase 7 - verify link counts...
would have reset inode 15162585959 nlinks from 0 to 1
No modify flag set, skipping filesystem flush and exiting.
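For anyone following along, the sequence the wiki steps describe usually looks like this from a console in Maintenance mode (a sketch only; /dev/md2 is an assumption standing in for this disk - substitute your own disk number, and use the mdN device so parity stays in sync):

```shell
# Dry run first (-n): report problems, modify nothing (this is what the GUI check runs)
xfs_repair -n /dev/md2

# Actual repair: replays the metadata log first when it can
xfs_repair /dev/md2

# Only if the repair aborts because the log cannot be replayed:
# -L zeroes the log - a last resort that can lose the most recent writes
xfs_repair -L /dev/md2
```

The GUI path is the same operation: removing -n from the Options field runs the real repair, and -L is only added if xfs_repair itself says the log must be zeroed.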
  8. We had a power outage in our neighborhood Friday. I restarted the server that night and it ran a parity check, which ended last night; everything seemed okay. I was then in the case late last night replacing a case fan (routine, not related to the power outage) and booted the server back up. It was late and I thought everything was okay initially. This morning I noticed that Disk 2 is unmountable. I did shut down, reseat Disk 2's drive cage, and unplug/replug the SATA cable that goes to its backplane. I have not replaced the cable as of yet. This drive is one of the 4 in my system connected to the motherboard via SATA; the others are on an LSI 9201-16i card. I do have a 12tb disk that I can use if needed, and I have one spare port on the 16i setup. I did some searching on this topic and there were several paths taken, so I'm not sure exactly what I should do here. I don't really know the commands in Linux but can follow directions. Can anyone help me out here? unraid-diagnostics-20211205-0800.zip
  9. I think I finally figured the increased free space issue out. I am running the Recycle Bin app and I think it dumped all the deleted files. I'm 99% sure this was the ~12tb difference that happened.
  10. I did end up swapping the second 12tb Parity 2 drive in. Then I had a friend who has more experience with Linux-based stuff help me out getting all 3 dockers back up and running. The settings were all still in place and fine. We also moved the errant files off the cache drive. So everything is back as it was. The 10-12tb of extra free space after doing the first parity drive swap is still a bit of a mystery to me. Does parity somehow have this much overhead on a 136tb pool?
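On the overhead question in the post above: parity drives don't count toward usable or free space at all in unRAID, so swapping a parity drive by itself can't change the free-space number. A quick sketch of the arithmetic (the drive mix below is hypothetical, just to illustrate the accounting):

```python
# unRAID capacity accounting: usable space = sum of data drives only.
# Parity drives are excluded, so upgrading Parity (10tb -> 12tb)
# leaves free space untouched. Drive sizes here are made up.
data_drives_tb = [16] * 2 + [12] * 5 + [10] * 8   # hypothetical 15-drive mix
parity_tb = [12, 12]                              # not counted toward capacity

usable_tb = sum(data_drives_tb)   # parity excluded from this sum
print(f"usable: {usable_tb} TB")  # free = usable - used; a parity swap changes neither
```

So a ~12tb jump in free space has to come from the data side, which fits the Recycle Bin explanation in the later update rather than any parity overhead.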
  11. Welp, I was reading a thread on reddit about transferring the parity data vs. just yanking the parity drive and rebuilding. I chose the latter, but I now see that missing from that discussion was turning off dockers. 1. Are the dockers completely borked? It's not the end of the world, but if they can be recovered that would be nice. 2. Any insights on why the free space on the server went up roughly 12tb, which happens to be the replacement parity drive size? Besides the docker issue I plan to upgrade the Parity 2 drive to a 12tb one. It seems like I should do that before setting the dockers back up - assuming they are not recoverable. TIA for any help on direction.
  12. I run two parity drives (10tb) and swapped out 'Parity' (not 'Parity 2') for a 12tb drive and let it rebuild. I also have 15 data drives and an SSD cache. It has just finished and several things have happened. Free drive space went from ~8tb to 20tb. Alarmingly, three Dockers have disappeared - Crashplan, Krusader, and Plex. Fix Common Problems has sent a notice: I have only tried one reboot since this has happened. Have I borked my Docker setup, or can anything be recovered? unraid-syslog-20211018-0128.zip
  13. I'm having similar issues to jasonmav and scud133b above with CRASHPLAN_SRV_MAX_MEM, see image below of pop-ups. I tried editing the docker settings for memory to 2048M and 4096M to no avail (I have 16gb ram fwiw). I only get the red X on the webui header now (second image is butted up under the first image with the 3 memory warning boxes). In doing the above I also broke the memory setting and I hope one of you can post a screenshot/fix. The last time I edited the docker memory settings I selected 'Remove' thinking it was to remove the value, not the actual tool. I see how to add back in the variable, I just need to see what the memory tool settings are.
  14. Thanks so much johnnie for noting this. I think this was the real culprit all along. In 25 years of computer building I've never had a standard SATA go bad. Everything is back in the array and a parity check showed no errors and it's been running a week now.
  15. Thanks! I just finished consolidating some backup files to free up a 10tb drive and swapped that in as a replacement for Disk 8. It's rebuilding now. I'll do as you suggested and check it against the files, and then that drive will replace Disk 13.