phbigred

Members
  • Posts: 265
  • Joined
  • Last visited
  • Days Won: 1
Everything posted by phbigred

  1. Both are going to take time. Turn off anything that could be writing to the array. Pop out the parity drive and get it swapped. Order another drive to have on standby; worst case, you have a parity drive you can use for recovery should things go belly up. That's the route I would suggest.
  2. If you have a prompt, log in and type diagnostics just to capture a log. First step: shut down and reseat all cables. The drive dropping off may have had a similar problem. You mention an LSI card; is it seated properly in the mini-SAS connection? It may be worth finding out if your motherboard has Marvell controllers. Having this much weirdness at once makes me think whatever the data connections are being driven from is having a problem with Unraid. Re-run or swap the data cables. I'll leave it for the more experienced to give more ideas. Also, to be sure, you aren't having any brownouts, correct? Hopefully you're using a UPS of some sort.
  3. https://wiki.unraid.net/Shrink_array You can shrink your array; I've done it a few times. If you are looking to reduce the count of disks, go with a single parity. Remember it has to be equal to or greater than the largest drive. Word to the wise... if you are thinking of encrypting later on, you may want to do it from the beginning, as in-place encryption isn't available yet. Also confirm your NVMe drives aren't using a Marvell controller, a known issue with Linux.
  4. Try the commands in XML mode listed by testdasi for x16 PCIe. But overall, bump to 2 vCPUs minimum if you can afford to (a minimal vCPU sketch follows this list).
  5. Maybe related, maybe not. Had Plex DB issues as well on the cache drive. Flipped the cache to XFS from btrfs as I wasn't using the pool feature, and haven't had issues since. Noticed cache corruption with my VMs too, which became unable to back up. Just my 2 cents.
  6. Sounds similar to a problem SpaceInvader One had. Might be worth attempting a move to the Q35 machine type instead of i440fx (see the machine-type sketch after this list). I had a similar problem with my RX580 passthrough.
  7. Been a user since 2014 I believe. Got started with RCing 6.0. The original plan was to create a data location that could house my family's vast picture collection off my main rig. It has morphed into so much more: virtualization, Docker, etc. Own 2 licenses and recommend it to all my coworkers.
  8. Agreed. If you have coax and don't want to run Ethernet, MoCA is the most solid way to go; if you have satellite, go DECA, as it uses the other end of the frequency range, flipped from MoCA. It's as solid as an Ethernet cable, and I have been using them since the MoCA 1.0 days. Solid 970 Mb/s with these 2.0 devices; recommend the Actiontec or Motorola ones.
  9. It's likely a Marvell controller like mine. I quit using it with this 6.7 branch due to the flakiness.
  10. This isn't VM related; this is a general UI bug. I pin my CPUs and it happens to ones assigned to Unraid outside of VM isolation.
  11. Anyone in this thread having issues with RC8? My issue causing the phantom CPU spikes in the GUI still hasn't tripped.
  12. Are we just seeing a theme with Ryzen? Might be an unidentified bug.
  13. It's a good point but this is new with the 6.7 branch for me. What you recommend is probably a good standard process to do in general.
  14. Rebooted fine through SSH. Waiting for it to show up again. It triggered a parity check, though. Any idea which process/task is causing this to trigger? Stinks having to reboot to stop my array just to make a disk-setting-level adjustment, especially when the btrfs is on one single cache drive and a drive that is in Unassigned Devices. Thoughts or ideas on how to stop this from occurring? This issue is new in the 6.7 branch for me.
  15. Automatically, after running for a few hours, this popped up, and now over the past 24 hours the button has been greyed out. Trying to understand what is causing it.
  16. Updated and the array did a parity rebuild. Everything appears to be functioning except for being unable to stop my array. Reported as a greyed-out box with the notation "Stop will take the array off-line. Disabled -- BTRFS operation is running". Attaching diagnostics. These issues didn't occur in 6.6.7. unraid-box-diagnostics-20190330-1703.zip
  17. Experienced the same issue, phantom CPU in the dashboard. Running RC6.
  18. +1. I've been using it since CrashPlan changed models/pricing.
  19. I believe your next step is to go into Tools and do a New Config, of course noting which drive is parity. Assign slots and assign the parity drive. When you start the array, it will rebuild parity.
  20. Yup, saw something similar before the reversion. Any adjustments to the VM cause that auto setting to get set, including through the XML.
  21. I believe my issue with NVMe on Marvell is corrected in the latest RC. Had another random issue that forced a reboot, but no NVMe going missing with the Unassigned Devices plugin.
  22. You were right, my system tripped a parity check after reboot. System is again responsive. It was throwing random 502 errors on the plugin update page when this was all occurring. Snagged diagnostics too while it was occurring; if you guys think it's helpful I'll post them. I do use XFS encryption but I don't think it's related. I had 100 MB/s write access to my shares with cache and 60-ish with straight array targets.
  23. SMB user access. It's not tied to one specific share either; all shares do a similar thing in RC3 for my setup. I hit Apply and it just sits there. Tested on multiple browsers: IE, Edge, Firefox, Chrome, and mobile Safari.
  24. Anyone else have issues where changing share security from read-only to read/write makes the WebUI unresponsive? I have to change tabs to VM or Dashboard to get a response again, and it never changes the access on the share.
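
For item 4 above, here is a minimal sketch of the kind of libvirt XML edit I mean, just showing the bump to 2 vCPUs; the cpuset pinning numbers are only placeholders, and your own VM's XML will use different host cores:

    <vcpu placement='static'>2</vcpu>
    <cputune>
      <!-- pin each virtual CPU to a host core; the cpuset values here are only examples -->
      <vcpupin vcpu='0' cpuset='2'/>
      <vcpupin vcpu='1' cpuset='3'/>
    </cputune>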
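
For item 6, switching to Q35 is normally done from the VM settings page, but in the XML it comes down to the machine attribute on the os/type element; a minimal sketch, assuming a Q35 version your QEMU build actually offers (the '3.0' here is just an example):

    <os>
      <!-- change the machine type from i440fx (pc-i440fx-*) to Q35; pick a version your build lists -->
      <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
    </os>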