DarkHorse

Members
  • Content Count: 52
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About DarkHorse
  • Rank: Advanced Member


  1. A short follow-up: after making sure I had things backed up, I stopped the array. Previously, this would just hang while unmounting the shares. This time, however, the array stopped cleanly. I then reassigned my drive back into the array, and it rebuilt the drive from parity. Everything appears to be fine now.
  2. So, the drive has been unassigned, it is being emulated, and a parity check is currently in progress, with about 3 hours to go. I'm still able to access my shares and data. To the best of my memory, here is exactly what I did:
     1. Stopped the array.
     2. Unassigned the drive.
     3. Started the array. The system immediately began a parity check upon starting.
     4. Used the GUI button to "Cancel" the parity check; until it was cancelled, the "Stop" array button was not active.
     5. After cancelling the parity check, I was able to click the "Stop" button to stop the array. Unfortunately, it hung trying to unmount the shares. I waited and waited with no progress, and eventually had to hit the physical reset button to reboot the server.
     6. After the reboot, the array came back up with the unassigned drive still emulated, services and data running fine, and the parity check started again; it is currently 80% done.
     I really appreciate any advice you guys have. My UnRAID server has been rock solid, except for the "cable" issue that happens every 4 months or so; I really need to invest in some better gold-plated SATA cables. Anyway, as requested, here is a zip of my diagnostics and some screenshots of my main page. Until I figure out what is going on, I've cancelled the current parity check before it finished. Things are still running fine with the drive in an emulated state. Going to do some backups to a spare drive. Thanks. brownbear-diagnostics-20190529-1304.zip
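     When a stop hangs at unmounting shares like that, one way to see what is still holding them open is lsof. A minimal sketch, assuming the standard /mnt/user user-share mount point; the share name is hypothetical:
         # Anything listed here has files open on the user shares and can
         # prevent the unmount from completing:
         lsof /mnt/user 2>/dev/null
         # Narrow it down to a single share (share name is hypothetical):
         lsof /mnt/user/Media 2>/dev/null
         # fuser gives a compact view of the owning PIDs:
         fuser -vm /mnt/user 2>/dev/null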
  3. Is the technique discussed here no longer valid in UnRAID 6.7?
     1. Stop the array.
     2. Set the disk to "not installed".
     3. Start the array.
     4. Stop the array.
     5. Set the disk back to the appropriate disk.
     6. Start the array.
  4. So, I've had a red X on a drive before while running with dual parity drives. It's usually just a cabling issue: wiggle the SATA cables and you're back in business. I would simply stop the array, unassign the drive, start the array, stop it again, reassign the drive, and start it. However, is doing this different now in 6.7? I stopped the array, unassigned the drive, and started the array again, and now it is rebuilding parity, which will take several days. Do I now need to wait before adding the drive back in? I don't recall older versions of UnRAID behaving this way; before, I was able to stop the array, assign the drive back in, and start the array again. Unfortunately, I find the online manual pretty much useless...
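     While a rebuild or parity sync is running, its progress can also be watched from a shell. A minimal sketch, assuming UnRAID's md driver, which reports in its own format rather than the stock Linux mdstat one:
         # UnRAID's md driver exposes sync state here (fields such as
         # mdResyncPos track how far the current pass has gotten):
         cat /proc/mdstat
         # The same counters via UnRAID's md control tool:
         /usr/local/sbin/mdcmd status | grep -i resync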
  5. Sorry, no... I haven't looked into it any further.
  6. Well, after reading this thread... I knew I had a Marvell controller on my board, but couldn't recall if any of my drives were using it. So I did:
     ls -al /sys/block/sd*
     lrwxrwxrwx 1 root root 0 May 21 12:17 /sys/block/sda -> ../devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0/host0/target0:0:0/0:0:0:0/block/sda/
     lrwxrwxrwx 1 root root 0 May 21 12:17 /sys/block/sdb -> ../devices/pci0000:00/0000:00:1f.2/ata1/host1/target1:0:0/1:0:0:0/block/sdb/
     lrwxrwxrwx 1 root root 0 May 21 12:17 /sys/block/sdc -> ../devices/pci0000:00/0000:00:1f.2/ata2/host2/target2:0:0/2:0:0:0/block/sdc/
     lrwxrwxrwx 1 root root 0 May 21 12:17 /sys/block/sdd -> ../devices/pci0000:00/0000:00:1f.2/ata3/host3/target3:0:0/3:0:0:0/block/sdd/
     lrwxrwxrwx 1 root root 0 May 21 12:17 /sys/block/sde -> ../devices/pci0000:00/0000:00:1f.2/ata4/host4/target4:0:0/4:0:0:0/block/sde/
     lrwxrwxrwx 1 root root 0 May 21 12:17 /sys/block/sdf -> ../devices/pci0000:00/0000:00:1f.2/ata5/host5/target5:0:0/5:0:0:0/block/sdf/
     lrwxrwxrwx 1 root root 0 May 21 12:17 /sys/block/sdg -> ../devices/pci0000:00/0000:00:1f.2/ata6/host6/target6:0:0/6:0:0:0/block/sdg/
     and then I checked which controller they were on:
     lspci | grep 1f.2
     00:1f.2 SATA controller: Intel Corporation C600/X79 series chipset 6-Port SATA AHCI Controller (rev 06)
     Looks like I must have known about the Marvell issues before, as all my drives are on the Intel controller. I then proceeded with the upgrade from 6.6.7 to 6.7.0 and everything went smoothly. Very nice interface enhancements. Thank you, LimeTech.
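     The same check can be done in one pass by mapping every disk to its controller. A minimal sketch, assuming the sysfs layout shown above:
         for dev in /sys/block/sd*; do
             # Resolve the sysfs symlink and keep the last PCI address in
             # the path, which is the controller the disk hangs off:
             pci=$(readlink -f "$dev" | grep -oE '[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]' | tail -n1)
             echo "$(basename "$dev"): $(lspci -s "$pci")"
         done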
  7. bastl's solution described above solved my audio issue... thanks.
  8. What's the best way to have a common VM disk image that I can sometimes run with GPU passthrough and sometimes without the GPU? Is the best / only way to edit the VM XML definition every time? Or can I set up two XML definitions (one with the GPU, one without) that share a common VM disk image? Curious how / if others are doing this. Thanks!
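     One way to do the two-definition approach is to keep two libvirt domains that point at the same vdisk. A minimal sketch, assuming virsh is run from the UnRAID shell; the VM name and paths are hypothetical:
         # Dump the existing GPU-passthrough definition to a file:
         virsh dumpxml "Win10-GPU" > /tmp/win10-nogpu.xml
         # Edit the copy: give it a new <name> and <uuid>, and remove the
         # <hostdev> entries for the GPU and its HDMI audio function. Leave
         # the <disk> source (e.g. /mnt/user/domains/Win10/vdisk1.img) alone
         # so both definitions boot the same image.
         nano /tmp/win10-nogpu.xml
         # Register the second definition:
         virsh define /tmp/win10-nogpu.xml
         # Only ever start one of the two at a time, since they share a disk.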
  9. I got it working... I had a previous instance that I had deleted, and I forgot to delete the /mnt/user/appdata/nextcloud directory from the previous install. It's now working fine. Thanks!
  10. Hmm... I did a clean install of the nextcloud docker app and I can't get the initial login / configuration screen to appear. When I launch the WebUI, I get: "Internal Server Error. The server encountered an internal error and was unable to complete your request. Please contact the server administrator if this error reappears multiple times, please include the technical details below in your report. More details can be found in the server log." There are no errors that I can see in the logs. Anyone else seeing this? I was following SpaceInvader's setup video on YouTube, and in the comments I see some others running into the same problem.
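     For this kind of error, the two places worth checking are the container log and Nextcloud's own application log. A minimal sketch; the container name and appdata path are assumptions based on a default install:
         # Live output from the container (container name is an assumption):
         docker logs -f nextcloud
         # Nextcloud's application log, assuming the usual appdata layout:
         tail -f /mnt/user/appdata/nextcloud/data/nextcloud.log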
  11. I'm having the same problem when using the standard Ubuntu template (OVMF-based): a black screen at boot. I tried different boot parameters as per the Linux Mint website, to no avail. I have an Nvidia 1070 card.
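     For reference, the parameter the Mint guides usually suggest is nomodeset (sometimes nouveau.modeset=0) on the guest's kernel command line. A minimal sketch of where it goes, assuming a standard GRUB setup inside the guest:
         # One-off: at the GRUB menu press 'e', append nomodeset to the
         # line starting with "linux", then boot with Ctrl-X.
         # Permanent, once the guest boots: edit /etc/default/grub so that
         #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
         # and then regenerate the config:
         sudo update-grub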
  12. For fun, I assigned all 16 cores / 32 threads and 16 GB of RAM to the High Sierra VM... multicore performance is about 2x that of my 2011 3.4 GHz Core i7 iMac.
  13. Hmm... got an even better score... I must not have waited long enough after rebooting for the VM to settle down.
  14. So, on my macOS High Sierra 10.13.3 VM, with 4 cores / 8 threads and 8 GB of RAM. Nice bump. Note that single-core performance, without any optimizations, is better in High Sierra than it was in Sierra. Thanks @gridrunner for the video!
  15. Nice. A little bit of a bump. My minimal OSX VM is running just one CPU, both cores, and only 2 GB of RAM (my CPUs are Intel E5-2670s). Note, this is running macOS Sierra 10.12.6.