kennelm

Members
  • Content Count: 69
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About kennelm

  • Rank: Advanced Member

Converted

  • Gender: Undisclosed

  1. OK, I'll try running it again to see what happens. In this case, I wasn't aware there was anything wrong until after I rebooted, and of course the logs were flushed when it came back up. Had I known there was a problem, I would have captured the diagnostics. LK
  2. So, I noticed the warning from Unraid that I was not running the Dynamix SSD TRIM plugin, so I installed it and set it to run on my SSD. I then restarted the Unraid server via the console to trigger the Unraid warning message again, only to see that the cache drive was declared missing! It took a cold power-down and restart to get it back in the list and back into the server config. Should I turn this thing off?
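     For context, the plugin basically just schedules a TRIM pass; this is a minimal sketch of what an equivalent manual run looks like from the console, assuming the cache pool is mounted at /mnt/cache:
         # Manually TRIM the SSD-backed cache (mount point assumed to be /mnt/cache);
         # -v reports how many bytes were discarded.
         fstrim -v /mnt/cache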
  3. If I exclude a disk that already has files for a given share, does that interfere with my ability to read the files that are already on the excluded disk? Or does an exclusion only affect write operations? Update: I just found this statement in another post: Includes/Excludes -- whether local or global -- only apply to writes. No disks are excluded when showing the contents of a share for reading.
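     A quick way to see this from the console is to compare the fuse-merged user share view against the per-disk views; /mnt/user and /mnt/diskN are the standard Unraid mount points, and "Media" is just a placeholder share name:
         # The merged view lists files from every disk, excluded or not
         ls /mnt/user/Media
         # The per-disk views show where each file physically lives
         ls /mnt/disk1/Media /mnt/disk2/Media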
  4. This may be a stupid question, but is there a way to tell Unraid to stop writing to a particular disk at a certain percent full, or at a certain amount of free space remaining? I've seen the threads about rebalancing drives and about guidelines for filling a disk to 95% or 98%, and I know about the disk threshold settings for warning and critical alerts. But I don't see a clear way to tell the server to stop writing to the disk.
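     For illustration only, here is a rough console check of how full each array disk is; it only reports, it doesn't stop writes, and the 90% threshold is just an example value:
         # List each disk mount point and flag any over 90% used (example threshold)
         df --output=target,pcent /mnt/disk* | awk 'NR>1 && int($2) > 90 {print $1, "is", $2, "full"}'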
  5. Thanks, Johnnie. I thought so, but wanted to be absolutely certain. The data drives have a ton of media that would be painful to recreate. Sent from my SM-G935V using Tapatalk
  6. OK, I've determined which of the 4TB drives hold data. I slid all six of the 4TB drives into my existing array (one by one) and determined which 4 are xfs and which 2 are precleared/unformatted. So now I know exactly which drive is parity and which 4 drives are data drives. I just don't know the order (disk1, disk2, etc.). Any ideas on next steps? Can I just assign the 4 data drives (in no particular order), the parity drive, and the cache drive and start the array? I do NOT want the data drives to be formatted or otherwise changed. If parity is no longer valid, I assume it will be recalculated.
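     In case it helps anyone else, one way to check which drives carry an xfs filesystem without assigning them to an array is to read the partition signatures directly; the /dev/sdX1 names below are placeholders for whichever devices you're inspecting:
         # A formatted data drive reports TYPE="xfs"; a precleared, unformatted
         # drive shows no filesystem signature on its partition.
         blkid /dev/sdb1 /dev/sdc1 /dev/sdd1
         # Or inspect a single partition:
         file -s /dev/sdb1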
  7. All, I am helping a friend recover from a USB issue that resulted in a non-bootable Unraid server. I repaired the USB drive, but it looks like the Unraid configuration file got deleted somehow. Fortunately, I see that the Plus key is still intact. Anyway, this much I know. There was:
     1 8TB parity drive
     4 4TB data drives
     1 500GB M.2 cache drive
     The server has 2 spare 4TB data drives. I am not sure which two of the six 4TB drives are the spares. I am not sure which data drive was disk1, disk2, etc. But I am certain the 8TB drive was parity. I have another Unraid server that I could use to evaluate the data drives to see if they are formatted or otherwise contain data. What is the best way to recreate the lost configuration? Thanks!
  8. I've discovered that other users are seeing this error because, like me, they are mounting the Unraid share points from another host using NFS. It seems Unraid 6.6.x and NFS don't mix.
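     For reference, the client side of my setup is an ordinary NFS mount of the user share; "tower" and "Media" below are placeholder names:
         # /etc/fstab entry on the client host
         tower:/mnt/user/Media  /mnt/media  nfs  defaults  0 0
         # or mounted ad hoc:
         mount -t nfs tower:/mnt/user/Media /mnt/media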
  9. I recently took the 6.6.1 upgrade and now my user shares are periodically disappearing. This has never happened before in all the years I've run my server. I've attached my diagnostic zip. If I shell into the server and navigate to the /mnt/user directory, I get this message:
         root@Tower:/mnt# cd user
         -bash: cd: /mnt/user: Transport endpoint is not connected
     Is this a known 6.6.1 issue? I have temporarily reverted back to 6.4.0 using the downgrade tool. Any help appreciated. tower-diagnostics-20181009-2024.zip
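     As I understand it, "Transport endpoint is not connected" means the FUSE process backing /mnt/user has died; a quick sketch of how to check from the console (shfs is the user-share filesystem process, if I have the name right):
         # If nothing is returned, the user-share FUSE process is no longer running
         ps aux | grep -v grep | grep shfs
         # The stale mount will still show up in the mount table
         mount | grep /mnt/user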
  10. Anybody have experience with the NVIDIA driver?
  11. Thanks to the help received from user 1812, I got my new GT-710 to successfully pass video through to a guest VM running EL7. After I declared premature victory on this, I realized that what I really want is to make the video passthrough work while running the proprietary NVIDIA driver on the guest VM (initially it was the default Nouveau driver, I think). The reason I want the NVIDIA driver is for VDPAU decoding of HD encoded content on the guest VM, but I digress.
     So, I set about getting the NVIDIA driver installed on the guest VM, which succeeded. I've done this plenty of times on physical machines, but never on a VM. Anyway, when I boot the VM using VNC, the desktop launches correctly and, according to the logs (Xorg.0.log), the NVIDIA driver was loaded and working. Schwing.
     Problem is, if I boot the VM with passthrough of the GT-710 instead of VNC (with the same NVIDIA driver configuration in effect), the desktop never launches. I forced a recreate of the xorg.conf file using nvidia-xconfig (oddly, VNC launches successfully with no xorg.conf in place), but that still didn't work. All I get is a garbled screen on a Samsung 32" LCD TV. Log files and config files attached. Any thoughts or advice appreciated.
     I will point out that I am not doing anything with ROM passthrough, and I'm not sure whether that is required or whether it's just a performance optimization for gaming, etc. I have 2 video cards on the host machine: the onboard for Unraid itself and the GT-710 for the guest VM. Larry
     Additional pertinent info:
         uname -a
         Linux localhost.localdomain 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
         [root@localhost ~]# rpm -qa | grep nvidia
         nvidia-graphics-devices-1.0-6.el7.noarch
         nvidia-graphics-helpers-0.0.30-33.el7.x86_64
         nvidia-graphics-long-lived-390.48-150.el7.centos.x86_64
         nvidia-graphics390.48-libs-390.48-4.el7.centos.x86_64
         nvidia-graphics390.48-390.48-4.el7.centos.x86_64
         nvidia-detect-390.48-2.el7.elrepo.x86_64
         nvidia-graphics390.48-kmdl-3.10.0-693.21.1.el7-390.48-4.el7.centos.x86_64
         nvidia-settings -v
         nvidia-settings: version 390.48 (buildmeister@swio-display-x86-rhel47-07) Thu Mar 22 01:06:23 PDT 2018
         The NVIDIA X Server Settings tool. This program is used to configure the NVIDIA Linux graphics driver. For more detail, please see the nvidia-settings(1) man page.
     Xorg.0.log xorg.conf.txt
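     One sanity check on the host when a passed-through display stays blank or garbled is to confirm which driver the host has bound to the card; the PCI address below (04:00.0) is just an example, so substitute the card's actual address:
         # On the Unraid host: show the kernel driver bound to the GT-710
         lspci -nnk -s 04:00.0
         # For passthrough you'd expect:  Kernel driver in use: vfio-pci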
  12. No joy in Mudville. Based on what I'm reading, this older G 210 card probably won't pass through correctly, so I just ordered the GT 710 per your post in this thread:
  13. Doh. I'm an idiot. I totally overlooked a BIOS selection to designate a primary video card. Now, the Unraid screen appears on the onboard card, and the VM display is on the NVIDIA card. Again, the VM works the first time around, but on the second try, I now get a ROM error:
         2018-04-27T00:16:09.953404Z qemu-system-x86_64: vfio-pci: Cannot read device rom at 0000:04:00.0
         Device option ROM contents are probably invalid (check dmesg). Skip option ROM probe with rombar=0, or load from file with romfile=
     I'm going to guess that I need to dump the ROM per the video above.
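     For reference, the usual way to dump the card's option ROM from the host is through sysfs; this is a sketch assuming the card is at 0000:04:00.0 as in the error above, and it generally only works while the card isn't acting as the host's primary GPU:
         # On the Unraid host: enable and read the option ROM for the GPU at 04:00.0
         cd /sys/bus/pci/devices/0000:04:00.0
         echo 1 > rom                 # make the ROM readable
         cat rom > /boot/vbios.rom    # save a copy (destination path is just an example)
         echo 0 > rom                 # disable it again
     The dumped file could then be referenced with romfile= in the VM definition.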
  14. Good catch. Somehow this got disabled in the BIOS. I've re-enabled the on-board GPU, but I'm not sure how to designate it as "primary." Point being, the second launch of the VM still fails. Interestingly, before I launched the VM the first time, I noticed 3 choices in the VM graphics card drop-down box: VNC, Nvidia, and Intel. After the second launch failed, I checked again and the Intel on-board GPU was not in the list anymore...
  15. Thanks. I'll dig into it. Meanwhile, my ASRock Z370 Pro4 motherboard has robust onboard video capability. Can this serve as one of the GPUs and the NVIDIA card as the other? Or do I need 2 PCIe GPUs? I assume the primary GPU is allocated to the Unraid system and the guest machine gets the other GPU? Or do I have that backwards?