dave_m

Everything posted by dave_m

  1. I started getting these errors after upgrading to Windows 11, or at least that's when I noticed them. Changing the SMB settings to enable multi channel and disable enhanced macOS interoperability seems to have resolved it for me.
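      For reference, a rough sketch of what those two toggles map to at the Samba level. The parameter names below are standard smb.conf options; where exactly Unraid writes them is my assumption, and the supported way to change them is still the Settings > SMB page:

          [global]
              # "Enable SMB Multi Channel" turns on this Samba parameter
              server multi channel support = yes
              # Enhanced macOS interoperability loads the Apple-compatibility
              # VFS modules; with it disabled, a line like this is absent:
              # vfs objects = catia fruit streams_xattr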
  2. Now that I'm finally upgrading all my drives after seeing a catastrophic server failure at work (5 out of 8 drives in two servers), here are the first drives removed from mine. They had data and were being used until this last week:

      Samsung HD103SJ - 8/2010
      Samsung HD103SI - (undated)
      2 Seagate Barracuda Green - 8/2011
      2 Hitachi HDS7230 - 8/2011 and Refurbished 2/2011
      Samsung HD204UI - 3/2011
      3 WD15EARX - Recertified from 2012
      3 WD20EZRX - Recertified from 2013

      Bunch of other random (and some recertified) drives to go after the preclearing finishes. I'm impressed with how long it all lasted; most were the cheapest green drives I could find, and they survived being taken out and put in storage for half a year during a move in 2016 as well. (After adding some drive dates, I realized they were all older than I thought, as I expected to see more dates from 2013/2014.)
  3. I upgraded from 6.6.7 to 6.7 at the same time I made some other changes, and the system would reliably stop responding within 1 to 8 hours. One of the other changes was replacing a failing drive and rebuilding the array, so I eventually backed out every other change but that one. Each time I brought up the server, it would try to rebuild the replaced drive but stop responding before it completed. It wasn't crashing, as the lights were still on, but there was no disk activity. I tried to rebuild the drive at least 8 times on 6.7, but it never completed. I finally rolled back to 6.6.7 last night, the rebuild completed, and the server is running normally. There were never any errors reported on 6.7, and one of the rebuilds was with all plugins disabled. It passed Memtest multiple times, and the VM and docker apps were not running during the rebuild.

      Here are the build details; the hardware isn't especially new:
      M/B: ASRock 970 Extreme4
      CPU: AMD FX-8320E
      RAM: 16GB DDR3 1600
      Case: Norco 4224
      Controllers: LSI SAS1068E & SAS2008
      Drives: 16 data + dual parity, cache + 2 outside array for docker / vm
      Apps: MythTV VM and Plex docker
      NICs: onboard Realtek RTL8111E + PCIe BCM5721 (bonded), and PCI Intel PRO/1000 (vm)

      I waited before rolling back because I had initially added another SAS1068E that might be bad and had accidentally reset the BIOS settings, but the system hangs continued after correcting both of those.
  4. I am running 6.3.5 with dual parity and have an empty disk that is assigned to the array and already formatted with RFS... what's the easiest way to switch it to XFS? The disk was being used, but it was trivial enough to move the files off of it.
  5. I just wanted to post up a success story about how well Unraid worked for me again. I’ve been an Unraid user since 2011, and rebuilt in 2013 with the current Norco RPC-4224 case. I moved in September of 2016, taking the drives out of the server and placing it all in a storage unit for 8 months. Then it was moved to the new house but not reassembled until yesterday. A few minor hiccups with some cheap (but not needed) PCIe controllers, but I was able to get it up and running on 6.1.9 without any real drama. Today it completed the parity check with no errors, I’m listening to music on Plex again, upgraded to 6.3.5, and am currently adding the second parity disk, without any hiccups. Still a happy customer :->
  6. I found the updated Global SMART Settings section, but unchecking the 188 Command time-out box and then clicking Apply doesn't result in actually saving the new setting. When the page reloads the box is checked again. The per disk setting has the same behavior.
  7. I see this same behavior as well, regardless of which browser I use. However, it might be related to the array disks being spun down. If the majority of the disks in my system are spun down, it's sometimes impossible to get the preclear plugin popup to appear. If the disks are spun up, then it's usually only one or two clicks to get the popup.
  8. Thanks for the update, sounds like Maintenance Mode is the way to go.
  9. That all makes sense, but I'm wondering if the system being in Maintenance mode is a suggestion or a requirement. With two drives to clear it would be nice to keep it running, and a cache drive will keep it in basically read-only mode if the mover is temporarily disabled...
  10. Stock unraid, no plugins at all. New motherboard is an ASRock 970 Extreme4 with 16GB of memory. Both use a Realtek onboard NIC, so it could be that driver as well. I should have a PCI NIC around somewhere; maybe I'll test that out.
  11. I'm not sure if anyone else has run into this issue, but I swapped out the motherboard and memory and the 5.0.x series still has the slower write speeds. The 6.0 beta2 and beta5a builds have normal write speeds.
  12. It will work on 6.0 if you comment out the "ulimit -v 5000" line. Use this suggestion at your own risk; there's probably a better solution than completely commenting the line out.
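      A minimal illustration of that edit, assuming a generic bash script around the line (the actual script and its surrounding code may differ):

          #!/bin/bash
          # The original line capped the script's virtual memory at 5000 KB,
          # which is too restrictive under 6.0, so it is commented out:
          # ulimit -v 5000
          # ... the rest of the script now runs with the default memory limit ...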
  13. On the shares page, the free space includes the cache drive if that share has files on the cache drive. So in my case, several shares appear to have 2TB more free than they really do.
  14. And the speed is back to normal in 6.0 beta 1. Not sure what to do here. Should I assume 5.0.x is just a bad match for my hardware and use 5.0RC10 or 6.0? syslog-6-beta1.zip
  15. I tried 5.0.4 with the mem=4095M option, and no change. I've also tried swapping out the cache drive for a different one, and switching the controller it was attached to, and no change after either of those.
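      For anyone else trying this, the mem= option goes on the kernel append line in syslinux.cfg on the flash drive. A sketch of the entry as I understand it (label and file names may differ slightly between installs):

          label unRAID OS
            kernel bzimage
            append mem=4095M initrd=bzroot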
  16. My unraid server is mostly a set-it-up-and-let-it-run machine; I don't upgrade it very often. I used 5.0b12a for quite a while and then went to 5.0RC10 last year, both of which were relatively stable. I finally upgraded to 5.0.4, and while everything seemed fine initially, the write speed went down quite a bit:

      Write target                      5.0RC10     5.0.2 & 5.0.4
      Cache drive (direct)              >100MB/s    just over 30MB/s
      Cache drive (via user share)      60MB/s      20MB/s
      7200RPM parity-protected drive    40MB/s      under 15MB/s

      Download speeds from the unraid server were not affected. Write speeds to the unraid server were tested over SMB from different Windows 7 and Ubuntu Linux machines. Changing the CPU scaling governor did not make a difference on 5.0.x, and nothing else is running on the server other than unraid. I disabled unmenu for this testing and was copying various 1GB VOB files for the test.

      The motherboard is an ECS A885GM-A2 with 4GB of memory, BR10i & M1015 PCIe cards, and an AMD 250u CPU. The CPU is low powered, but it's not showing any unusual load in 5.0.x. The network card is the onboard Realtek 8111DL.

      EDIT: If I go back to 5.0RC10, the speeds are normal. The slowdown is only on my main server; my secondary "basic" server is fine on 5.0.4, even though it's using older and slower hardware. syslog-RC10.zip syslog-504.zip
  17. No trouble downloading, and I upgraded both servers from 5.0 RC10 to 5.0.4. One server was fine, but the write performance dropped on the other.
  18. I've been using that TP-Link switch for a year with no problems.
  19. Copied from dgaschk's signature:

      Revert to stock system:
      1. Rename the /boot(flash)/config/plugins directory.
      2. Rename boot(flash)/extra/.
      3. Use the stock go file (boot(flash)/config/go).

      Stock go file:

      #!/bin/bash
      # Start the Management Utility
      /usr/local/sbin/emhttp &
  20. Try running memtest to rule out a memory problem. Also, is your server accessible from the public Internet? It appears you're getting a public IP address through DHCP. The syslog doesn't have any info on the error that's causing the array to go down. Could you try one of these steps to see if you can capture a little more information.
  21. PCI video card off eBay; I got an ATI Rage but anything should work. I'm not planning on leaving it in the ESXi box after it's all set up.
  22. I'm using an Athlon 250u (2 x 1.6GHz) and it's overkill for simply streaming HD content. I do use the -b -r -w options when preclearing a drive though, to make sure it doesn't impact other processes.
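      For context, an example invocation with those flags; the numeric values and the device name are placeholders rather than the exact ones I use, so check the script's built-in help before copying them:

          # -b = blocks read per cycle, -r = read block size in bytes,
          # -w = write block size in bytes; turning these down keeps
          # preclear from starving other processes
          preclear_disk.sh -b 200 -r 65536 -w 65536 /dev/sdX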
  23. Watch out for HPA (Host Protected Area) issues with any Gigabyte motherboards. Also, the Intel NIC may be optional if the motherboard's onboard NIC works.
  24. The syslog covers several days and shows some drive spin down entries. In the most recent portion of the log the drives would not have spun down because a parity check was running. I would reboot the server, then check back after an hour of no activity to see if the drives are spun down; that syslog might be more useful.
  25. The LSI cards worked in beta12a, but then had problems until one of the middle release candidates. There are no problems with the LSI controllers on either RC10 or RC11.