dave_m

Members
  • Posts: 99
  • Joined
  • Last visited
  • Gender: Undisclosed


dave_m's Achievements

Apprentice (3/14)

Reputation: 4

  1. I started getting these errors after upgrading to Windows 11, or at least that's when I noticed them. Changing the SMB settings to enable multi channel and disable enhanced macOS interoperability seems to have resolved it for me.
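For reference, the multichannel half of that change corresponds to a single Samba parameter; on unRAID it can also be set by hand under Settings → SMB → SMB Extras (the GUI labels here are assumptions, and the "enhanced macOS interoperability" toggle maps to the vfs_fruit module, which this fix leaves off):

```ini
; [global] section addition -- a sketch, worth verifying against your Samba
; version ("server multi channel support" exists in Samba 4.4 and later)
server multi channel support = yes
```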
  2. Now that I'm finally upgrading all my drives after seeing a catastrophic server failure at work (5 out of 8 drives across two servers), here are the first drives removed from mine. They all held data and were in use until this last week:
     • Samsung HD103SJ, 8/2010
     • Samsung HD103SI, undated
     • 2× Seagate Barracuda Green, 8/2011
     • 2× Hitachi HDS7230, 8/2011 and refurbished 2/2011
     • Samsung HD204UI, 3/2011
     • 3× WD15EARX, recertified from 2012
     • 3× WD20EZRX, recertified from 2013
     A bunch of other random (and some recertified) drives will go after the preclearing finishes. I'm impressed with how long it all lasted: most were the cheapest green drives I could find, and they also survived being taken out and put in storage for half a year during a move in 2016. (After adding the drive dates, I realized they were all older than I thought; I expected to see more dates from 2013/2014.)
  3. I upgraded from 6.6.7 to 6.7 at the same time I made some other changes, and the system would reliably stop responding within 1 to 8 hours. One of the other changes was replacing a failing drive and rebuilding the array, so I eventually backed out every change but that one. Each time I brought up the server, it would try to rebuild the replaced drive but stop responding before it completed. It wasn't crashing, as the lights were still on, but there was no disk activity. I tried to rebuild the drive at least 8 times on 6.7, but it never completed. I finally rolled back to 6.6.7 last night; the rebuild completed and the server is running normally. There were never any errors reported on 6.7, and one of the rebuilds was with all plugins disabled. The system passed Memtest multiple times, and the VM and docker apps were not running during the rebuild. Here are the build details; the hardware isn't especially new:
     M/B: ASRock 970 Extreme4
     CPU: AMD FX-8320E
     RAM: 16GB DDR3 1600
     Case: Norco 4224
     Controllers: LSI SAS1068E & SAS2008
     Drives: 16 data + dual parity, cache + 2 outside the array for docker / VM
     Apps: MythTV VM and Plex docker
     NICs: onboard Realtek RTL8111E + PCIe BCM5721 (bonded), and PCI Intel PRO/1000 (VM)
     I waited before rolling back because I had initially added another SAS1068E that might be bad and had accidentally reset the BIOS settings, but the system hangs continued after correcting both of those.
  4. I am running 6.3.5 with dual parity and have an empty disk that is assigned to the array and already formatted with RFS... what's the easiest way to switch it to XFS? The disk was being used, but it was trivial enough to move the files off it.
  5. I see this same behavior as well, regardless of which browser I use. However, it might be related to the array disks being spun down. If the majority of the disks in my system are spun down, it's sometimes impossible to get the preclear plugin popup to appear. If the disks are spun up, then it's usually only one or two clicks to get the popup.
  6. It will work on 6.0 if you comment out the "ulimit -v 5000" line. Use this suggestion at your own risk, there's probably a better solution than completely commenting the line out.
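A minimal sketch of commenting the line out rather than deleting it, with a backup first; the path /tmp/script.sh is a stand-in for illustration, so point the commands at wherever the plugin actually installs its script:

```shell
# Demo stand-in for the plugin script -- substitute the real script path.
printf 'ulimit -v 5000\necho "rest of script"\n' > /tmp/script.sh

cp /tmp/script.sh /tmp/script.sh.bak             # keep a backup before editing
sed -i 's/^ulimit -v 5000$/#&/' /tmp/script.sh   # prefix the matching line with '#'
grep 'ulimit' /tmp/script.sh                     # → #ulimit -v 5000
```

Commenting instead of deleting keeps the original limit visible, so it's easy to restore once a proper fix for 6.0 appears.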
  7. Yes, most people use molex splitters.
  8. That sounds like the drive spin-up problem all the LSI owners had with betas 13 & 14, which is why beta12a is recommended. Which version did you have problems with?
  9. I'm pretty sure the BR10i doesn't support anything larger than 2TB. I think the BR10i 3TB support refers to allowing multiple smaller drives to appear as one 3TB drive, which isn't really applicable to most unraid usage. That said, I have the BR10i and it works great with my 2TB drives and beta12a.
  10. Just realized I'm probably pushing the limits of my CX430 with 9 green drives and 2 7200RPM drives, especially as I have 1 more of each ready to be added. The CX500 that was supposed to replace it may be DOA; the main server does absolutely nothing when it's attached, and the test server seems quite unreliable with it. Luckily the server doesn't have anything else drawing a lot of power, just a Sempron CPU, a PCI NIC, and an IBM BR10i controller. Assuming it's still a while until I can get the new PSU, is it a problem to keep using the CX430? As long as the server stays running and I don't try to reboot it too many times, it shouldn't be near its max capacity, right?
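As a rough sanity check on that power budget, here's the worst-case arithmetic. The per-drive figures are assumptions, not measurements (~25 W spin-up for a green drive, ~30 W for a 7200 RPM drive), and the worst case assumes every drive spins up at once, i.e. no staggered spin-up:

```shell
green=9; fast=2
green_w=25   # assumed worst-case spin-up draw per green drive, watts
fast_w=30    # assumed worst-case spin-up draw per 7200 RPM drive, watts

spinup=$(( green * green_w + fast * fast_w ))
echo "worst-case drive spin-up draw: ${spinup} W"   # 285 W
```

285 W fits under the CX430's 430 W total, but spin-up load lands almost entirely on the 12 V rail, which is where a PSU runs tight; the rail's amp rating is printed on the unit's label and is worth checking against that number.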
  11. I've kept a flash drive around with SystemRescueCD on it for a while, it's useful for little tasks like this or disk cloning / partitioning.
  12. As far as the "Failed to initialize PAL" errors go, it may be worth trying both the DOS and Linux versions on the same motherboard. I was able to flash yesterday with the DOS version (1.28), but today it gave me a "Failed to initialize PAL" error, while the Linux version (1.24) still worked.
  13. Successfully flashed a BR10i with the provided batch files. I had the same issues running it from unraid as sacretagent did here, but the DOS option worked. Now to test it out running one of the 5.0 betas; 12a looks like the better option, as 13 seems to have problems with the LSI controllers. Edit: Running sasflash -listall from a different Linux distribution (System Rescue CD) did work, though.
  14. After you split the 8pin apart, if you look at the two sides of the 4+4 connector, they aren't exactly the same. I'd make sure you had the correct one connected to the motherboard, as I've had problems when the wrong one was connected.
  15. This helped rescue over 500GB of PVR recordings after an errant "rm ..." command. The reiserfsck took about a day to finish on a 2TB drive with 1GB of RAM. I'd say I was able to recover over 90% of what I deleted, and most of it was even titled correctly.
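For anyone landing here later, the usual ReiserFS undelete procedure looks like the sketch below. /dev/sdX1 is a placeholder for the affected partition; this rewrites the filesystem tree, so it is worth imaging the disk first, and stopping all writes to the partition the moment the deletion is noticed:

```shell
# 1. Stop anything writing to the filesystem, then unmount it.
umount /dev/sdX1

# 2. Rebuild the tree, scanning the whole partition so that blocks
#    belonging to deleted files are picked back up into the filesystem.
#    This is the long step -- about a day for a 2TB drive in my case.
reiserfsck --rebuild-tree --scan-whole-partition /dev/sdX1

# 3. Remount and look for the recovered files (some may land in lost+found).
mount /dev/sdX1 /mnt/recover
```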