ufopinball

Members

  • Content Count: 229
  • Joined
  • Last visited
  • Days Won: 1

ufopinball last won the day on December 18 2018

ufopinball had the most liked content!

Community Reputation: 17 Good

1 Follower

About ufopinball

  • Rank: Advanced Member

Converted

  • Gender: Undisclosed
  • URL: http://www.ufopinball.com/


  1. I'm sure you're right. I did buy an AOC-SAS2LP-MV8 at one point; I must have lost track of which card is in the machine. Thanks for the sharp eye!
  2. Hmmm, well, I don't have the card in hand, but that's what I thought I bought off eBay some years ago. System Devices reads as follows: "RAID bus controller: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller (rev 03)". So, dunno?
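     For what it's worth, I believe the "System Devices" page is more or less lspci output, so if anyone wants to double-check from the console which controller they actually have, something along these lines should show the same information (the exact wording and PCI address will vary per system):

        # list RAID/SAS controllers with their vendor:device IDs
        lspci -nn | grep -iE 'raid|sas'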
  3. Upgraded from 6.6.7 to 6.7, haven't had any problems. Previous uptime: 75d, 19h, 58m. Current uptime: 9d, 1h, 21m. According to "System Devices", my Dell HV52W PERC H310 controller has a Marvell chipset (88SE9485). As far as I know, people on this chipset are not seeing the missing drives issue? I have not had any issues on my system so far. My next step is to try "amd_iommu=pt", but for the moment, things are running smoothly.
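     In case it helps anyone, my understanding is that "amd_iommu=pt" is just an extra kernel parameter added to the append line in the flash drive's syslinux config. Roughly what I expect the stock entry to look like with the parameter added (this is a sketch, not verbatim; back up the file first, and keep whatever options are already on your own append line):

        # /boot/syslinux/syslinux.cfg : default unRAID boot entry (sketch)
        label unRAID OS
          menu default
          kernel /bzimage
          append amd_iommu=pt initrd=/bzroot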
  4. Upgraded from 6.6.6 to 6.6.7, no problems with Dockers (Plex, Sickbeard) or VMs (LAMP, Win10, Win7). Uptime for 6.6.6: 67 days, 9 hours, 30 minutes. Uptime for 6.6.7: 15 hours, 4 minutes and counting.
  5. Looks like you have quite the variety of drives, so that complicates things. Here's how I would proceed (there's a rough command-line sketch of the copy step right after this post).

     1) Build a new array; ultimately your goal will be a solid, reliable array. Don't reuse any of your old drives, since we are going to try to extract data from them. Note that with this method of recovery, I don't think you can rely on any drive giving you back 100%, so if you have to do a rebuild on any given drive (assuming you fully recover that many drives), I don't know how reliable your rebuilt drive would be, either. You're welcome to try it. If not, maybe start with 1 Parity and 1 Data and work your way up from there. I'm assuming you know which of the old drives were data and which were parity. This method recovers the data by treating the old array drives as JBOD.

     2) Take, say, the STBV5000100 and buy another drive of the exact same model. Last time, I bought a used working drive off eBay. Test the newer drive to make sure it works and is reliable. Replace the bad drive's board with the new drive's board. Plug the bad drive into the server, use something like Unassigned Devices to mount it, then see how much data you can copy off of it. Once you have extracted as much data as you can, unmount and remove the bad drive. Swap the controller boards back. The bad drive goes on the shelf in case you need it for further recovery. The newer drive can be pre-cleared and added to the array. Repeat this step for all drives.

     Something I heard was that reallocated sectors are recorded somewhere on the controller board. I heard it quite a long time ago, so I don't know if it is/was true. If it is, your recovery may involve accessing some incorrect sectors, which is why I think the data isn't guaranteed to be 100%, but again, anything is better than 0%. This should also be non-destructive, so you could still use other methods to recover your data if you like.

     I have not heard of the diode fix, nor have I ever attempted to alter a controller board in any way. All I have done is a straight board swap, hoping that any data losses are livable. Thankfully, this isn't something that I have had to do regularly, but it has worked once or twice.

     PS: Dunno about the warranty, but I'd skip the soldering iron if you intend to go this route.
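     To give a rough idea of the copy part of step 2, here's a sketch of the manual equivalent of what Unassigned Devices does, assuming the board-swapped drive shows up as /dev/sdX and you copy into a share called "recovered" (the device name, mount point, and share name are all placeholders):

        mkdir -p /mnt/disks/old_disk1                  # temporary mount point, placeholder name
        mount -o ro /dev/sdX1 /mnt/disks/old_disk1     # mount the old data partition read-only
        rsync -avP /mnt/disks/old_disk1/ /mnt/user/recovered/   # copy whatever is readable to the new array
        umount /mnt/disks/old_disk1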
  6. If I understand the proposed setup, the SSDs are passed through to the VMs and are not governed by unRAID. The OS to worry about would be the target OS on each VM. Is that Windows 10?
  7. Are these drives of the same make and model? Do you have a list of the drives? I have swapped working logic boards onto otherwise dead drives in order to recover data, so it can be done. It's not a 100% guarantee, but some recovered data is better than no recovered data. You'll still have to replace the drives with (new) known-working drives, so this is going to be expensive and time-consuming, FYI.
  8. To begin, are your M.2 drives the SATA variety or the PCIe x4 variety? The former will run at roughly the speed of your other SATA SSDs; the latter should run much, much faster. If you have PCIe x4 M.2 drives, you could try a mirrored cache and run all four gaming VMs off of it. Samsung's SATA SSDs advertise "Up to 540 MB/s", whereas the PCIe x4 M.2 SSDs offer "Up to 3500 MB/s". Even with four VMs running at a time, you should still have a lot of speed headroom? It may depend on what else (if anything) you're using your cache drive for, though. The alternative is to have 4 VMs and 4 NVMe-type SSDs: pass through 1 drive to each VM, and each VM should enjoy dedicated performance from its assigned drive. If performance is an absolute must, maybe this is the way to go?
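     If you want to sanity-check the SATA vs. NVMe gap on your own hardware before deciding, a quick (if crude) sequential read test gives a ballpark; swap in your actual device names:

        hdparm -t /dev/sdb        # SATA SSD: I'd expect something in the ~500 MB/s range
        hdparm -t /dev/nvme0n1    # PCIe x4 NVMe: typically several times faster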
  9. Oh, okay. I have not pushed Cache beyond two drives mirrored. I'll keep it in mind for future reference. The most I see is that people would like the option to have multiple cache pools. Dunno what priority that has on the wish list.
  10. Noted, but I already have a RAID1 cache pool (see signature). SSD capacities are going up and prices are coming down (relatively speaking). My needs are not so great that I'm out there buying 12TB drives, so someday I'd like to switch over to SSDs. This may be years in the future, but it may also be a slow migration from HDD to SSD, depending on how often I access data on any given drive. I'm not going to RAID1 my 40TB of existing space as SSD; I rather like the current setup with two Parity drives and ten Data drives. I mean, if you never want to add an SSD to your array, that's fine. I understand the options available via cache, but I want to do this specifically as an array drive. Since this configuration appears to be supported, I'll start with a small 1TB SSD and see how things go.
  11. Good to know, thanks for the info!
  12. Thanks, I will run a parity check here and there to make sure. The drive was one of my old cache pool drives and has not given me issues. It's still fairly young, as far as SSDs go. It's also a 1TB drive, and I don't really know that I'll be eating up so much of it; I guess it will come down to usage once I have it installed. As I mentioned in the other post, this is mostly for quick access to Read existing data. It's not going to be a heavy R/W sort of drive (like a cache drive), so hopefully TRIM won't be such a big issue.
  13. I have a cache drive. An array drive means I get Parity protection in case of a drive failure, and cache or UD doesn't offer that. The performance boost I'm looking for is on the Read end of things. Writing to this drive will likely be relatively rare; it's just data I don't want to wait for a HDD to spin up for.
  14. I upgraded some stuff via Black Friday, and now I have a 1TB SATA SSD that I'd like to use as an array drive. I understand there is no TRIM for array drives. Otherwise, is this configuration supported? Is anyone else here doing this? Also, in order to add this to an existing array, presumably I have to at least do a pre-clear (zero the drive) and have a pre-clear signature written. Anything else I should be aware of? Thanks in advance!
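     For anyone following along later: as far as I know, unRAID will clear a new disk itself when you add it to a parity-protected array, and the preclear plugin is what writes the pre-clear signature so that step can be skipped. The manual "zero the drive" part boils down to something like this (the device name is a placeholder, and this wipes the disk, so triple-check it):

        # WARNING: destroys everything on /dev/sdX
        dd if=/dev/zero of=/dev/sdX bs=1M status=progress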
  15. Upgraded from 6.6.5 to 6.6.6 a few days ago ... no smoke, no fires.