ufopinball

Members

  • Posts: 237
  • Joined
  • Last visited
  • Days Won: 1

ufopinball last won the day on December 18, 2018

ufopinball had the most liked content!

1 Follower

Converted

  • Gender: Undisclosed
  • URL: http://www.ufopinball.com/


ufopinball's Achievements

Explorer (4/14)

Reputation: 17

  1. unRAID identifies drives by serial number, so motherboard SATA ports should definitely be fine. Add-in SATA PCIe cards should also be fine, but I don't believe USB connections are supported. Not sure about the M.2 slot idea, as I haven't tried it, but since it's newer technology, hopefully they've thought to support things like this. Just make sure all your drives show up and are properly accounted for before you start the array; one quick way to check from the command line is sketched below.
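     One way to confirm every drive is visible with its serial number, assuming you have shell access to the server (a minimal sketch; device names will differ on your system):

         # List all drives with model and serial number.
         # unRAID tracks array members by serial, so every disk should appear here.
         lsblk -o NAME,MODEL,SERIAL,SIZE

         # Alternatively, the /dev/disk/by-id/ links embed the serial in the link name.
         ls -l /dev/disk/by-id/ | grep -v part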
  2. Hmmm, they’re no longer running as my primary system, but they seemed okay before. Any idea how to get the ASRock x399 Taichi to play nice with LSI controllers? Or what’s a good option outside of LSI and Marvell?
  3. Sadly, the ASRock x399 Taichi MB just doesn't play well with LSI controller cards. Here's a conversation on Reddit that goes into greater detail: https://www.reddit.com/r/unRAID/comments/98kdyp/lsi_920116i_and_asrock_taichi_x399/ I've got the same setup, and I've never gotten it to work. I ended up going with dual AOC-SAS2LP-MV8 (Marvell-based) controller cards. That was many years ago, so there may be newer alternatives, but I know they work with this MB and a Threadripper 1950X. The two Dell PERC H310 (LSI-based) controllers that the ASRock didn't like work just fine in an ASUS MB, so I know they're functionally okay.
  4. Just found this thread. I've been having the same problem on 6.9.2, in that I cannot change the custom temperature thresholds for my three NVMe drives. When I went to look at /boot/config/smart-one.cfg, it was actually empty, a zero-byte file. What I ended up doing was editing it, adding a single space character, and saving it so that it wasn't totally empty. Once I did that, updating the temperatures from the GUI worked properly, with no further manual editing needed. Something to try if you're seeing the same thing; a command-line version of the workaround is sketched below.
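     A minimal command-line version of that workaround, assuming the same file path as above. Check the size first, and only touch the file if it really is empty:

         # Check whether smart-one.cfg is a zero-byte file.
         ls -l /boot/config/smart-one.cfg

         # If the size is 0, write a single space so the file is no longer empty.
         printf ' ' > /boot/config/smart-one.cfg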
  5. Updated smoothly from 6.8.3 --> 6.9.1 --> 6.9.2. No issues to report, everything seems to be running as it should. Thanks LT!
  6. I upgraded my server (AMD Ryzen Threadripper 1950X) and had no problems whatsoever. Uptime is a little over 2 days, have not had issues with booting, dockers, VMs, etc. I'm not sure what you mean by "cpu insulation"? My MB is the ASRock x399 Taichi, if it matters. Full specs in signature line.
  7. I upgraded my server, and everything seems to be going smoothly. No issues with updates, Dockers, transfer speeds, etc. I did bump into the noVNC bug once, but it didn't persist. So far my uptime is 6+ days, so I guess I should have posted this sooner. Really enjoying being able to copy files on the server while the family is watching something on Plex. Thanks for all the efforts, LT!!
  8. Updated my server from 6.7.1 to 6.7.2 last week, and have had no problems on my Threadripper 1950X build. Previous uptime: 38d, 14m. Current uptime: 8d, 5h, 48m. Thanks for all the hard work!!
  9. I'm sure you're right. I did buy an AOC-SAS2LP-MV8 at one point in time, I must have lost track of which one I have in the machine. Thanks for the sharp eye!
  10. Hmmm, well I don't have the card in hand, but that's what I thought I bought off eBay some years ago. System Devices reads as follows:

      RAID bus controller: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller (rev 03)

      So, dunno?
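      For anyone who wants to double-check the same thing from the shell: System Devices is essentially lspci output, so something like this should show the same controller line (a minimal sketch; the grep pattern is just a convenience filter):

          # List storage controllers with vendor/device IDs; the chipset
          # (e.g. 88SE9485) identifies the actual chip on the card.
          lspci -nn | grep -i -E 'raid|sas|sata'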
  11. Upgraded from 6.6.7 to 6.7, haven't had any problems. Previous uptime: 75d, 19h, 58m. Current uptime: 9d, 1h, 21m. According to "System Devices", my Dell HV52W PERC H310 controller has a Marvell chipset (88SE9485). As far as I know, people on this chipset are not seeing the missing-drives issue? I have not had any issues on my system so far. My next step is to try "amd_iommu=pt" (see the sketch below), but for the moment, things are running smoothly.
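      For reference, a sketch of where that kernel flag would go, assuming the stock Unraid boot config at /boot/syslinux/syslinux.cfg (the label and initrd line below are the usual defaults, not taken from this particular system; edit the append line, then reboot):

          # /boot/syslinux/syslinux.cfg (excerpt)
          label Unraid OS
            menu default
            kernel /bzimage
            # amd_iommu=pt added ahead of the existing initrd parameter
            append amd_iommu=pt initrd=/bzroot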
  12. Upgraded from 6.6.6 to 6.6.7, no problems with Dockers (Plex, Sickbeard) or VMs (LAMP, Win10, Win7). Uptime for 6.6.6: 67 days, 9 hours, 30 minutes. Uptime for 6.6.7: 15 hours, 4 minutes and counting.
  13. Looks like you have quite the variety of drives, so that complicates things. Here's how I would proceed.

      1) Build a new array; ultimately your goal is a solid, reliable array. Don't reuse any of your old drives, since we are going to try to extract data from them. Note that with this method of recovery, I don't think you can rely on any drive giving you back 100%, so if you have to do a rebuild on any given drive (assuming you fully recover that many drives), I don't know how reliable the rebuilt drive would be, either. You're welcome to try it. If not, maybe start with 1 parity and 1 data drive and work your way up from there. I'm assuming you know which of the old drives were data and which were parity. This method recovers the data by treating the old array drives as JBOD.

      2) Take, say, the STBV5000100 and buy another drive of the exact same model. Last time, I bought a used working drive off eBay. Test the newer drive and make sure it works and is reliable. Replace the bad drive's board with the new drive's board. Plug the bad drive into the server, use something like Unassigned Devices to mount it, and see how much data you can copy off of it (a sketch of the copy step follows this post). Once you have extracted as much data as you can, unmount and remove the bad drive. Swap the controller boards back. The bad drive goes on the shelf in case you need it for further recovery. The newer drive can be pre-cleared and added to the array. Repeat this step for all drives.

      Something I heard was that reallocated sectors are recorded somewhere on the controller board. I heard it quite a long time ago, so I don't know if it is/was true. If it is, your recovery may involve accessing some incorrect sectors, which is why I think the data isn't guaranteed to be 100%, but again, anything is better than 0%. This should also be non-destructive, so you could still use other methods to recover your data if you like.

      I have not heard of the diode fix, nor have I ever attempted to alter a controller board in any way. All I have done is a straight board swap, hoping that any data losses are livable. Thankfully, this isn't something that I have had to do regularly, but it has worked once or twice.

      PS: Dunno about the warranty, but I'd skip the soldering iron if you intend to go this route.
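      A minimal sketch of that copy-off step, assuming the revived drive shows up as /dev/sdX (a placeholder; substitute your actual device). On Unraid, Unassigned Devices handles the mounting for you, but the copy itself is the same idea:

          # Mount the recovered drive read-only so nothing gets written to it.
          mkdir -p /mnt/recovery
          mount -o ro /dev/sdX1 /mnt/recovery

          # Copy everything to an array share; rsync keeps going past
          # individual read errors and lists the failures at the end.
          rsync -av /mnt/recovery/ /mnt/user/recovered/

          umount /mnt/recovery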
  14. If I understand the proposed setup, the SSDs are passed through to the VMs and are not governed by unRAID. The OS to worry about would be the guest OS on each VM. Is that Windows 10?
  15. Are these drives of the same make and model? Do you have a list of the drives? I have swapped working logic boards onto otherwise dead drives in order to recover data, so it can be done. This is not a 100% guarantee, but some recovered data is better than no recovered data. You'll still have to replace the drives with (new) known-working drives, so this is going to be expensive and time-consuming, FYI.