EGOvoruhk

Members
  • Content Count: 91
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About EGOvoruhk
  • Rank: Advanced Member
  • Gender: Undisclosed

  1. Those drives had been sitting physically untouched in a rackmount server for days (Disk0 was never physically touched at all, as that was the 8TB parity drive that was left intact), and they went through two full checks. I'm curious why both would fail at exactly the same time (within 3 seconds, per the log). Could it be indicative of a different issue? They're connected via SFF-8087 fanout cables, and the controller/SFF-8087 end was never unplugged, and hasn't been for over a year, so it shouldn't be a seating issue. They also passed SMART tests after the failure without ever being touched, so it's obviously not a cabling issue. Just wondering where I should be focusing my attention (see the SMART/syslog sketch after this list); one drive failing would make sense, but both simultaneously throws me for a loop.
  2. Upgraded to 6.8, and about 8 hours later both my parity drives popped up as disabled simultaneously after some normal usage. Curious if there's anything in the logs that may hint at why they would both drop at the same time. Note: prior to the 6.8 upgrade I had run a full parity check with zero errors, then I upgraded my 2x8TB dual parity with a new 10TB drive (now 1x8TB, 1x10TB) and the parity sync passed with zero errors, then I upgraded a 4TB data drive with the retired 8TB parity drive, and that sync also passed with zero errors. Then I made a backup of my flash and upgraded to 6.8 without issues. All my VMs and Dockers were running, and everything seemed normal until later, when both parity drives popped up with red Xs. I shut down all VMs/Dockers, ran a SMART test on both parity drives (they came back fine), grabbed my diagnostics, stopped the array, and powered down. void-diagnostics-20191228-1756.zip
  3. I came home this weekend to find my 6.3.5 dual-parity system had lost one of its parity drives and a data drive. I immediately overnighted some replacement drives from Amazon, along with a third, which should get here in a few hours. I was curious: what is the proper method for replacing them? Should I do both at the same time? The data drive first and then the parity drive? Or vice versa?
  4. This would only solve the LSI card passthrough, correct? Is there a way to disable MSI-X for other drivers (see the syslinux.cfg sketch after this list)? I pass through my NIC as well.
  5. If you're referring to running unRAID as a VM guest, there is currently a passthrough bug: any hardware passed through to the unRAID VM will no longer be seen in 6.2. You might be okay if you RDM your drives (see the vmkfstools sketch after this list), but my servers have their controllers passed through, so I can't confirm.
  6. Confirmed issue: http://lime-technology.com/forum/index.php?topic=48374 Can limetech or jonp chime in? Hopefully something that can be fixed, but it's been around for all 6.2 betas
  7. Seems to be a passthrough issue in 6.2. It's affecting my RAID card and the NIC I pass through as well (no network, no disks showing; see the lspci/dmesg sketch after this list).
  8. > So it doesn't boot at all? Do you get a local console prompt? If so, log in and run the diagnostics script at the local console, then post the resulting zip file. Just because the network doesn't appear to be working doesn't mean the system isn't booting.

     It boots, and looks totally fine, apart from the fact that the network (and presumably the other passed-through devices) isn't being picked up. I can get into the console, but have no way of downloading the diagnostic tool, so the best I can do is copy the normal unRAID log to the flash drive so I can access and share it.

     > from the console, diagnostics will be saved to the flash drive.

     Oh, whoops. I was looking at the wiki, and it mentioned I had to download something. I ran it and attached.

     > What would happen if you set /boot/config/network.cfg and BONDING="NO"

     There's only one NIC on the VM, so there's no bonding available (see the network.cfg sketch after this list). But it's not a network configuration issue, it's an issue with the passthrough. It's affecting other passed-through devices too: 6.2 doesn't see the drives on my RAID card either, though they're picked up just fine on 6.1.9. The network was just the first issue I noticed, because I couldn't even get into the system right off the bat.
  9. > So it doesn't boot at all? Do you get a local console prompt? If so, log in and run the diagnostics script at the local console, then post the resulting zip file. Just because the network doesn't appear to be working doesn't mean the system isn't booting.

     It boots, and looks totally fine, apart from the fact that the network (and presumably the other passed-through devices) isn't being picked up. I can get into the console, but have no way of downloading the diagnostic tool, so the best I can do is copy the normal unRAID log to the flash drive so I can access and share it.

     > from the console, diagnostics will be saved to the flash drive.

     Oh, whoops. I was looking at the wiki, and it mentioned I had to download something. I ran it and attached. tower-diagnostics-20160419-1149.zip
  10. > So it doesn't boot at all? Do you get a local console prompt? If so, log in and run the diagnostics script at the local console, then post the resulting zip file. Just because the network doesn't appear to be working doesn't mean the system isn't booting.

      It boots, and looks totally fine, apart from the fact that the network (and presumably the other passed-through devices) isn't being picked up. I can get into the console, but have no way of downloading the diagnostic tool, so the best I can do is copy the normal unRAID log to the flash drive so I can access and share it. syslog.txt
  11. This is the first 6.2 beta I've tested on my server, but the 6.2 releases seem to have problems with ESXi and hardware passthrough. I've got a fresh install of ESXi 6.0.0.3620759 and a fresh USB install of 6.2 Beta 21, with my Intel 82574L Ethernet card and LSI 2308 RAID card passed through. Upon booting, unRAID fails to identify the network card or appear on my network, and ifconfig returns no IP. Wiping the USB drive, installing 6.1.9, and booting the same VM with the same settings, the card is identified fine and I can browse to it from the network immediately. Obviously I can't share any logs since I can't get into the system, but... thoughts? Edit: Never mind, I didn't know the diagnostics script was built in now (see the diagnostics sketch after this list). Attached zip: tower-diagnostics-20160419-1149.zip
  12. Throw the SSDs onto a RAID card as RAID 5, install ESXi onto another USB stick, run unRAID as a VM, and use your SSD array as a datastore.
  13. > Ohhh, good question. I think the question(s) has to be expanded even further. Does it provide single-bit error detection as to the true location of the fault (i.e., the disk with the failure on it), and does it provide correction of said error? In the case of two failures, does it allow the ability to rebuild both, and what is the level of error identification? What is the rebuild procedure in the case of two devices failing? (Replace one disk at a time with two rebuilds, or replace both disks with a single rebuild?) Remember that the more failures there are in an array (thinking of disks here), the harder figuring out which ones are bad is going to be.

      Curious as well (see the P/Q parity sketch after this list). That was one of the things Btrfs brought to the table that I was hoping would eventually be implemented, along with deduplication.
  14. Definitely not for me. I use unRAID as just a NAS; I don't use any of the other features. I have zero use for it to have outside network access, so to protect my data I keep it accessible only internally. There's no reason I should have to change that now, and I'm not going to reconfigure the firewall every time it stops working. It should work like it's been working.
  15. > Is this a one-time validation, OR is this new version crippled unless it phones home at EVERY reboot???? HOPEFULLY this will only apply to the beta and evaluation versions. If I can't start my server because my internet happens to be down...

      Has there been any word on this? I prevent outside network access on my unRAID box at the firewall (and won't be changing it). I thought our keys were tied to the USB; what's there to authenticate?
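
A minimal sketch for posts 1 and 2, assuming a console on the affected server: cross-check SMART health against the syslog timeline when two drives are disabled within seconds of each other. /dev/sdf and /dev/sdg are hypothetical placeholders for the two parity devices; substitute the real identifiers from the Main page.

    #!/bin/bash
    # Check each suspect drive's own verdict first.
    for dev in /dev/sdf /dev/sdg; do
        echo "=== $dev ==="
        smartctl -H "$dev"          # overall health self-assessment
        smartctl -l error "$dev"    # errors the drive itself logged, if any
    done
    # Then pull the disable events out of the log to compare timestamps.
    # Live system: /var/log/syslog; inside a diagnostics zip: logs/syslog.txt.
    grep -iE "write error|disabled" /var/log/syslog | tail -n 40

If both drives drop within seconds while SMART stays clean, a shared path (one SFF-8087 fanout lane group, the HBA port, a backplane or power rail) is a more plausible suspect than two independent disk failures.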
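For post 4: I'm not aware of a per-driver MSI-X toggle in unRAID, but the kernel accepts a global pci=nomsi parameter. A sketch of the edit on the flash drive, assuming the stock 6.x syslinux layout:

    # /boot/syslinux/syslinux.cfg -- add pci=nomsi to the append line.
    # This disables MSI/MSI-X for *all* devices, forcing legacy INTx
    # interrupts: a blunt instrument, not a per-driver switch.
    label unRAID OS
      menu default
      kernel /bzimage
      append pci=nomsi initrd=/bzroot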
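For post 5: a sketch of the RDM workaround, using ESXi's stock vmkfstools; the device identifier and datastore path are placeholders.

    # Create a physical-mode (-z) RDM pointer file for one disk, then attach
    # the resulting .vmdk to the unRAID VM as an existing disk. Physical mode
    # passes SCSI commands through, so the guest sees serials and SMART data.
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE__DISK__SERIAL \
        /vmfs/volumes/datastore1/unraid/parity1-rdm.vmdk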
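For posts 7-10: a quick triage from the unRAID guest's console that separates "the device never reached the guest" from "a driver failed to bind", which is the distinction those posts circle around. The driver names are assumptions based on the hardware mentioned (82574L -> e1000e, LSI 2308 -> mpt2sas).

    lspci -nnk        # is the passed-through card on the guest's PCI bus,
                      # and which kernel driver, if any, claimed it?
    dmesg | grep -iE "e1000e|mpt2sas|msi"    # driver probe errors at boot
    ifconfig -a       # interfaces the kernel actually created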
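For post 8: the BONDING="NO" suggestion refers to the network file on the flash drive. A minimal sketch, assuming the unRAID 6.x key names; the values are illustrative.

    # /boot/config/network.cfg
    USE_DHCP="yes"     # the single virtual NIC picks up an address via DHCP
    BONDING="no"       # only one NIC in the VM, so nothing to bond
    BRIDGING="no"      # plain eth0, no br0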
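For post 11: the built-in collector the edit refers to, run from the local console.

    # Bundles syslog, SMART reports, and config into one zip on the flash
    # drive, so it can be retrieved even when the network is down:
    diagnostics
    # -> /boot/logs/<hostname>-diagnostics-YYYYMMDD-HHMM.zip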
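For post 13: the locate-the-fault question has a concrete answer if dual parity uses the standard RAID-6 construction; that it does is an assumption here (see H. P. Anvin, "The mathematics of RAID-6"). With data bytes D_0, ..., D_{n-1}:

    P = D_0 \oplus D_1 \oplus \cdots \oplus D_{n-1}
    Q = g^0 D_0 \oplus g^1 D_1 \oplus \cdots \oplus g^{n-1} D_{n-1},
    \quad g \text{ a generator of } \mathrm{GF}(2^8)

    % Two known failed disks: two independent equations in two unknowns,
    % so both can be rebuilt in a single pass.
    % One silently corrupt disk: recompute P' and Q' from what the disks
    % return; then g^z = (Q \oplus Q')(P \oplus P')^{-1} in GF(2^8), and
    % z = \log_g\bigl((Q \oplus Q')(P \oplus P')^{-1}\bigr)
    % identifies the bad disk, giving both detection and location.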