ufopinball

Members

  • Content Count: 234
  • Joined
  • Last visited
  • Days Won: 1

ufopinball last won the day on December 18 2018

ufopinball had the most liked content!

Community Reputation: 17 Good

1 Follower

About ufopinball

  • Rank: Member

Converted

  • Gender: Undisclosed
  • URL: http://www.ufopinball.com/


  1. Just found this thread. I've been having the same problem on 6.9.2: I cannot change the custom temperature thresholds for my three NVMe drives. When I went to look at /boot/config/smart-one.cfg, it was actually empty, a zero-byte file. What I ended up doing was editing it, adding a single space character, and saving it so that it wasn't totally empty. Once I did that, updating the temperatures from the GUI works properly, with no manual editing needed afterward. Something to try if you're seeing the same thing; a rough sketch of the steps is below.
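     A minimal sketch of that workaround from the console, assuming the stock unRAID path (the single space is just to make the file non-empty):

        # Confirm the config file is zero bytes
        ls -l /boot/config/smart-one.cfg

        # Write a single space so the file is no longer empty
        printf ' ' > /boot/config/smart-one.cfg

     After that, set the custom temperature thresholds from the GUI as usual, and unRAID should rewrite the file with the real settings.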
  2. Updated smoothly from 6.8.3 --> 6.9.1 --> 6.9.2. No issues to report; everything seems to be running as it should. Thanks LT!
  3. I upgraded my server (AMD Ryzen Threadripper 1950X) and had no problems whatsoever. Uptime is a little over two days, and I have not had issues with booting, dockers, VMs, etc. I'm not sure what you mean by "cpu insulation"? My MB is the ASRock X399 Taichi, if it matters. Full specs in signature line.
  4. I upgraded my server, and everything seems to be going smoothly. No issues with updates, Dockers, transfer speeds, etc. I did bump into the noVNC bug once, but it didn't persist. My uptime is now 6+ days, so I guess I should have posted this earlier. Really enjoying being able to copy files on the server while the family is watching something on Plex. Thanks for all the efforts, LT!!
  5. Updated my server from 6.7.1 to 6.7.2 last week, and have had no problems on my Threadripper 1950X build. Previous uptime: 38d, 14m. Current uptime: 8d, 5h, 48m. Thanks for all the hard work!!
  6. I'm sure you're right. I did buy an AOC-SAS2LP-MV8 at one point in time, so I must have lost track of which one I have in the machine. Thanks for the sharp eye!
  7. Hmmm, well I don't have the card in hand, but that's what I thought I bought off eBay some years ago. System Devices reads as follows:

        RAID bus controller: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller (rev 03)

     So, dunno?
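     For anyone who wants to double-check their own card from the console, the same information System Devices shows can be pulled with a plain lspci call (nothing unRAID-specific here):

        # List RAID/SAS storage controllers; the Marvell line above shows up in this output
        lspci | grep -i -e raid -e sas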
  8. Upgraded from 6.6.7 to 6.7, haven't had any problems. Previous uptime: 75d, 19h, 58m. Current uptime: 9d, 1h, 21m. According to "System Devices", my Dell HV52W PERC H310 controller has a Marvell chipset (88SE9485). As far as I know, people on this chipset are not seeing the missing-drives issue, and I have not had any issues on my system so far. My next step is to add "amd_iommu=pt" (sketch below), but for the moment, things are running smoothly.
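     In case it's useful to anyone planning the same change, here's a rough sketch of where "amd_iommu=pt" goes, assuming the stock syslinux config on the flash drive (your label and append line may differ):

        # /boot/syslinux/syslinux.cfg
        label unRAID OS
          menu default
          kernel /bzimage
          append amd_iommu=pt initrd=/bzroot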
  9. Upgraded from 6.6.6 to 6.6.7, no problems with Dockers (Plex, Sickbeard) or VMs (LAMP, Win10, Win7). Uptime for 6.6.6: 67 days, 9 hours, 30 minutes. Uptime for 6.6.7: 15 hours, 4 minutes and counting.
  10. Looks like you have quite the variety of drives, so that complicates things. Here's how I would proceed. 1) Build a new array; ultimately your goal will be a solid, reliable array. Don't reuse any of your old drives, since we are going to try to extract data from them. Note that with this method of recovery, I don't think you can rely on any drive giving you back 100%, so if you have to do a rebuild on any given drive (assuming you fully recover that many drives), I don't know how reliable the rebuilt drive would be, either. You're welcome to try it. If not, maybe start with 1 Parity a
  11. If I understand the proposed setup, the SSDs are passed through to the VMs, and are not governed by unRAID. The OS to worry about would be the target OS on each VM. Is that Windows 10?
  12. Are these drives of the same make and model? Do you have a list of the drives? I have swapped working logic boards onto otherwise dead drives in order to recover data, so it can be done. This is not a 100% guarantee, but some recovered data is better than no recovered data. You'll still have to replace the dead drives with (new) known-working drives, so this is going to be expensive and time-consuming, FYI.
  13. To begin, are your m.2 drives the SATA variety, or the PCIe x4 variety? The former will run at roughly the speed of your other SATA SSDs; the latter should run much, much faster. If you have PCIe x4 m.2 drives, you could try a mirrored cache and run all four gaming VMs off the drive. Samsung's SATA SSDs advertise "Up to 540 MBps", whereas the PCIe x4 m.2 SSDs offer "Up to 3500 MBps". Even with four VMs running at a time, you should still have a lot of headroom (rough math below). It may depend on what else (if anything) you're using your cache drive for, though. The alternative is you ha
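     Rough math behind that headroom comment, using only the advertised sequential numbers above:

        # Split one PCIe x4 m.2 drive's advertised throughput across four VMs
        echo $(( 3500 / 4 ))   # 875 MB/s per VM, vs. ~540 MB/s total from one SATA SSD

     So even a four-way split of a single NVMe drive beats a dedicated SATA SSD on paper, before any other cache traffic is factored in.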
  14. Oh, okay. I have not pushed Cache beyond two drives mirrored, but I'll keep it in mind for future reference. The most I see is that people would like the option to have multiple cache pools. Dunno what priority that has on the wish list.
  15. Noted, but I already have a RAID1 cache pool (see signature). SSD capacities are going up and prices are coming down (relatively speaking). My needs are not so great that I'm out there buying 12TB drives, so someday I'd like to switch over to SSDs. This may be years in the future, and it may be a slow migration from HDD to SSD, depending on how often I access the data on any given drive. I'm not going to RAID1 my 40TB of existing space as SSD; I rather like the current setup with two Parity drives and ten Data drives. I mean, if you never want to add an SSD to your array, that's