IndianaJoe1216

Members
  • Posts

    44
About IndianaJoe1216

  • Birthday December 16

  • Gender
    Male

IndianaJoe1216's Achievements

Rookie (2/14)

Reputation (4)
  1. This solved it. I had a SATA power splitter on those SSDs. They came in a two-pack, so I swapped to the second one and haven't had issues since. Thanks!
  2. Recently I have been having issues with my cache disks. All of my Docker services stop, and then I see a large number of btrfs errors in the log for one of the disks. For the disk to be seen again, I have to shut down the server, open the lid, and disconnect and reconnect the SATA cables; it is then fine for a random amount of time (usually at least a day) before it happens again. I have attached my diagnostics file. I am running a Dell R720 with all 8 bays filled with 20TB Exos drives. My cache disks are connected to the onboard SATA ports (they are only 3Gbps). I tried using an extra mini-SAS port, but it was causing issues with my disk in slot 0 for some reason, and I am wondering if this is the same issue. It appears that the SSD is losing power and thus losing connectivity, but I am not skilled enough at reading the diagnostic reports to tell. Thank you! diagnostics-20230412-1516.zip
  3. So I think the UDMA CRC errors on both disks are due to me swapping the disks between the slots to check whether it was an issue with that specific disk. The mini-SAS cable does not break out to SATA on those drives. There are 8 slots total, 4 on each SAS cable, but those 2 SAS cables come together into a single connector and plug into the motherboard through an odd double SAS connector. I really don't want to replace the backplane on this bad boy, but I am starting to think that might be the issue.
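For anyone landing here later: the UDMA CRC counter mentioned above is SMART attribute 199 and can be watched directly with smartctl. A minimal sketch (the device name /dev/sdX and the sample output line are placeholders); it is demonstrated against a canned line of smartctl output so the parsing is visible without real hardware:

```shell
# On a live system you would run, against the suspect disk:
#   smartctl -A /dev/sdX | awk '$2 ~ /UDMA_CRC/ {print $NF}'
# Attribute 199 counts link-level transfer errors and only ever goes up,
# so a count that keeps rising after reseating points at the cable,
# backplane, or power delivery rather than the disk itself.
# Demonstrated here against a canned smartctl output line:
echo "199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 12" \
  | awk '$2 ~ /UDMA_CRC/ {print $NF}'   # prints the raw error count: 12
```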
  4. I recently swapped out the drives in my R720 for Exos 20TB drives and redid the configuration. While rebuilding parity, my parity disk 1 (slot 0 on the server) goes into a disabled state and shows 1024 errors in the errors column. It looks like the device is disconnecting randomly. I have reseated the PERC H310 (flashed to IT mode) as well as the mini-SAS cables, to no avail. I have attached my anonymized diagnostics file for reference. Any suggestions would be appreciated. nas-diagnostics-20230227-1300.zip
  5. I have been using Lancache for quite a while now and it's been working great for me. I even have it running through Pi-hole, and that is working well too. I am having one issue that I wanted to check on, though: my initial download speeds are terrible, typically less than 30 Mbps. My server has a 10G NIC to my main switch and I have a 1Gbps line. With Lancache disabled, I can typically run in the 700+ Mbps range. The Lancache documentation says to add a second IP to your Docker container, but I have been unable to find any documentation on how to do that within Unraid. If someone could point me in the right direction I would really appreciate it.
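For reference, the "second IP" the Lancache docs describe is usually achieved by putting the container on its own macvlan network with a static LAN address, so it answers on an IP separate from the host's. A rough sketch using plain Docker commands, not Unraid's UI; the subnet, gateway, parent interface, IP, and image invocation below are all placeholders for your own LAN:

```shell
# Assumption: your LAN is 192.168.1.0/24 behind 192.168.1.1, and the
# host's physical NIC is eth0 -- substitute your real values.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan-macvlan

# Attach the container to that network with its own static address,
# so Lancache traffic no longer shares the host's IP:
docker run -d --name lancache \
  --network lan-macvlan --ip 192.168.1.250 \
  lancachenet/monolithic
```

In Unraid's Docker settings the equivalent is enabling a custom network on the bridge/interface and giving the container a fixed IP in the template, but the exact UI steps vary by Unraid version.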
  6. I am getting the same errors as well, and I have tried both specifying the UUID and leaving it set to "All".
  7. I am having the same issue. It's been pegging my server at 100% CPU usage since the update. It doesn't seem to be CPU-specific, as I am running a Ryzen 1800X while you're on a Xeon. @binhex Wondering if this is widespread and most haven't auto-updated yet? Edit: @jademonkee I just did a force update of the container and it appears to have settled down.
  8. Want to bump this again to see if anyone else is seeing this. Binhex's Docker container is by far the best one out there, but for some reason I CANNOT get autotools to work consistently. Anyone have any ideas on places to check?
  9. Going on a couple years with my Unraid build and it just keeps getting better. Keep up the good work!
  10. Parity rebuild in progress! Thanks for all your help! 25TB free is a BEAUTIFUL sight!
  11. There are 3 shares that are written to most often (which are the only 3 on those drives), and when I started the first copy I went ahead and adjusted the shares to exclude the two 1.5TB disks. Yeah, good call on adding the other drives during the new config process, but I have 2 Exos enterprise drives on the way and I want to swap my parity drives out for those after I rebuild, which unfortunately means rebuilding several more times, so I am looking at ways to minimize the damage there, but that's still TBD. Wow, I am an idiot. Since I am rebuilding anyway, I could just do a new config and pull the parity drives at the same time, since they are going to be rebuilt anyway. That would let the new drives be built as parity, and I could add the current ones to the array no problem.
  12. Does using Unbalance matter vs. the command line, especially if I can verify? Also, once the copy is completed and validated, what's the process for removing the disks? At this point the data on those disks exists both on the 1.5TB drives and on the 10TB in the array, and parity reflects that, so I can't just pull the disks and leave those slots empty. Does that mean I would have to do a new config and then rebuild parity?
  13. Unbalance actually has a verify stage after the copy finishes that runs rsync -rcvPR -X. I was looking at the command syntax you sent over, and since the 2nd line is -narcv, wouldn't that just be a dry run because of the -n? Thanks for all the tips. It's showing me that I need to read up on the rsync syntax, and I'm wanting to get more familiar with the Linux command line now as well. Without this I would have tanked my server somehow. :)
  14. I am actually copying one of the 1.5TB drives over to the 10TB right now using the Unbalance plugin, as I've moved directories that way before and was successful. I was going to see how that goes before moving to the command line; that way I don't have to take the Dockers down. Unbalance uses rsync -avPR -X, so I'm doing some more digging into rsync to learn a little about the different flags available. That is extremely helpful. Common sense tells me that a move is faster than a copy, but that makes perfect sense when you throw in the extra drive activity.
  15. Good call on the copy! I'm not incredibly experienced with the Linux command line. The data on the drive is all technically "live" because it's mostly media for my Plex container, and part of my CA Appdata Backup lives there as well.