kennelm

Everything posted by kennelm

  1. This may be a stupid question, but is there a way to tell unraid to stop writing to a particular disk at a certain percent full, or when free space drops below a certain amount? I've seen the threads about rebalancing drives and about guidelines for filling a disk to 95% or 98%, and I know about the disk threshold settings for warning and critical alerts. But I don't see a clear way to tell the server to stop writing to the disk.
  2. Thanks, Johnnie. I thought so, but wanted to be absolutely certain. The data drives have a ton of media that would be painful to recreate.
  3. OK, I've determined which of the 4TB drives hold data. I slid all six of the 4TB drives into my existing array (one by one) and determined which 4 are xfs and which 2 are precleared/unformatted. So now I know exactly which drive is parity and which 4 drives are data drives; I just don't know the order (disk1, disk2, etc.). Any ideas on next steps? Can I just assign the 4 data drives (in no particular order), the parity drive, and the cache drive and start the array? I do NOT want the data drives to be formatted or otherwise changed. If parity is no longer valid, I assume it will be recalculated.
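     In case it's useful to anyone doing the same thing: checking a drive's filesystem signature from the console is enough to tell the data disks from the precleared ones. Something like the following, where sdX1 is just a placeholder for whichever device the drive shows up as; a formatted data drive reports TYPE="xfs", while a precleared drive reports nothing:
     blkid /dev/sdX1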
  4. All, I am helping a friend recover from a USB issue that resulted in a non-bootable unraid server. I repaired the USB drive, but it looks like the unraid configuration file got deleted somehow. Fortunately, I see that the Plus key is still intact. Anyway, this much I know. There was:
     1 8TB parity drive
     4 4TB data drives
     1 500GB M.2 cache drive
     The server has 2 spare 4TB data drives. I am not sure which two of the six 4TB drives are the spares, and I am not sure which data drive was disk1, disk2, etc., but I am certain the 8TB drive was parity. I have another unraid server that I could use to evaluate the data drives to see if they are formatted or otherwise contain data. What is the best way to recreate the lost configuration? Thanks!
  5. I've discovered that other users are seeing this error because, like me, they are mounting the unraid share points from another host using NFS. It seems unraid 6.6.x and NFS don't mix.
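     For reference, the client side is nothing exotic, just a plain NFS mount of a user share along these lines (the hostname and share name here are only examples):
     mount -t nfs tower:/mnt/user/media /mnt/media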
  6. I recently took the 6.6.1 upgrade and now my user shares are periodically disappearing. This has never happened before in all the years I've run my server. I've attached my diagnostic zip. If I shell into the server and try to navigate to the /mnt/user directory, I get this message:
     root@Tower:/mnt# cd user
     -bash: cd: /mnt/user: Transport endpoint is not connected
     Is this a known 6.6.1 issue? I have temporarily reverted to 6.4.0 using the downgrade tool. Any help appreciated. tower-diagnostics-20181009-2024.zip
  7. Anybody have experience with the NVIDIA driver?
  8. Thanks to the help received from user 1812, I got my new GT-710 to successfully pass video through to a guest VM running EL7. After I declared premature victory on this, I realized that what I really want is to make the video passthrough work while running the proprietary NVIDIA driver on the guest VM (initially it was the default Nouveau driver, I think). The reason I want the NVIDIA driver is for VDPAU decoding of HD encoded content on the guest VM, but I digress.
     So, I set about getting the NVIDIA driver installed on the guest VM, which succeeded. I've done this plenty of times on physical machines, but never on a VM. Anyway, when I boot the VM using VNC, the desktop launches correctly and, according to the logs (Xorg.0.log), the NVIDIA driver was loaded and working. Schwing. Problem is, if I boot the VM with passthrough of the GT-710 instead of VNC (with the same NVIDIA driver configuration in effect), the desktop never launches. I forced a recreate of the xorg.conf file using nvidia-xconfig (oddly, VNC launches successfully with no xorg.conf in place), but that still didn't work. All I get is a garbled screen on a Samsung 32" LCD TV. Log files and config files attached. Any thoughts or advice appreciated.
     I will point out that I am not doing anything with ROM passthrough, and I'm not really sure whether that is required or just a performance optimization for gaming, etc. I have 2 video cards on the host machine: the onboard for unraid itself and the GT-710 for the guest VM. Larry
     Additional pertinent info:
     uname -a
     Linux localhost.localdomain 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
     [root@localhost ~]# rpm -qa | grep nvidia
     nvidia-graphics-devices-1.0-6.el7.noarch
     nvidia-graphics-helpers-0.0.30-33.el7.x86_64
     nvidia-graphics-long-lived-390.48-150.el7.centos.x86_64
     nvidia-graphics390.48-libs-390.48-4.el7.centos.x86_64
     nvidia-graphics390.48-390.48-4.el7.centos.x86_64
     nvidia-detect-390.48-2.el7.elrepo.x86_64
     nvidia-graphics390.48-kmdl-3.10.0-693.21.1.el7-390.48-4.el7.centos.x86_64
     nvidia-settings -v
     nvidia-settings: version 390.48 (buildmeister@swio-display-x86-rhel47-07) Thu Mar 22 01:06:23 PDT 2018
     The NVIDIA X Server Settings tool. This program is used to configure the NVIDIA Linux graphics driver. For more detail, please see the nvidia-settings(1) man page.
     Xorg.0.log xorg.conf.txt
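     For completeness, the Device section I'm trying to end up with in the guest's xorg.conf is the usual minimal nvidia one, roughly like the following, where the BusID should match whatever lspci reports for the passed-through card inside the guest (the value below is only an example, not necessarily mine):
     Section "Device"
         Identifier "Device0"
         Driver     "nvidia"
         BusID      "PCI:5:0:0"
     EndSection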
  9. No joy in Mudville. Based on what I'm reading, this older G 210 card probably won't pass through correctly, so I just ordered the GT 710 per your post in this thread:
  10. Doh. I'm an idiot. I totally overlooked a BIOS selection to designate a primary video card. Now the unraid screen appears on the onboard card, and the VM display is on the NVIDIA card. Again, the VM works the first time around, but on the second try I now get a ROM error:
     2018-04-27T00:16:09.953404Z qemu-system-x86_64: vfio-pci: Cannot read device rom at 0000:04:00.0
     Device option ROM contents are probably invalid (check dmesg).
     Skip option ROM probe with rombar=0, or load from file with romfile=
     I'm going to guess that I need to dump the ROM per the video above.
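     If I do end up dumping it, my understanding is that the dumped ROM file just gets referenced from the GPU's hostdev entry in the VM XML, something along these lines (the file path is only an example of where the dump might live):
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
       </source>
       <rom file='/mnt/user/isos/vbios/gt710.rom'/>
     </hostdev>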
  11. Good catch. Somehow this got disabled in the BIOS. I've re-enabled the on-board GPU, but I'm not sure how to designate it as "primary." Point being, the second launch of the VM still fails. Interestingly, before I launched the VM the first time, I noticed 3 choices in the VM graphics card drop-down box: VNC, Nvidia, and Intel. After the second launch failed, I checked again and the Intel on-board GPU was not in the list anymore...
  12. Thanks. I'll dig into it. Meanwhile, my Asrock Z370 Pro4 motherboard has a robust onboard video capability. Can this serve as one of the GPUs and the nvidia as the other? Or do I need 2 PCIe GPUs? I assume the primary GPU is allocated to the unraid system and the guest machine gets the other GPU? Or do I have that backwards?
  13. Thanks for offering to help. See attached. Additional info:
     root@Tower:~# lspci
     00:00.0 Host bridge: Intel Corporation Device 3ec2 (rev 07)
     00:14.0 USB controller: Intel Corporation 200 Series PCH USB 3.0 xHCI Controller
     00:14.2 Signal processing controller: Intel Corporation 200 Series PCH Thermal Subsystem
     00:16.0 Communication controller: Intel Corporation 200 Series PCH CSME HECI #1
     00:17.0 SATA controller: Intel Corporation 200 Series PCH SATA controller [AHCI mode]
     00:1b.0 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #17 (rev f0)
     00:1b.2 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #19 (rev f0)
     00:1c.0 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #1 (rev f0)
     00:1c.4 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #5 (rev f0)
     00:1d.0 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #9 (rev f0)
     00:1f.0 ISA bridge: Intel Corporation Device a2c9
     00:1f.2 Memory controller: Intel Corporation 200 Series PCH PMC
     00:1f.3 Audio device: Intel Corporation 200 Series PCH HD Audio
     00:1f.4 SMBus: Intel Corporation 200 Series PCH SMBus Controller
     00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-V
     02:00.0 SATA controller: Marvell Technology Group Ltd. Device 9215 (rev 11)
     04:00.0 VGA compatible controller: NVIDIA Corporation GT218 [GeForce 210] (rev a2)
     04:00.1 Audio device: NVIDIA Corporation High Definition Audio Controller (rev a1)
     lspci -v (video card only):
     04:00.0 VGA compatible controller: NVIDIA Corporation GT218 [GeForce 210] (rev a2) (prog-if 00 [VGA controller])
     Subsystem: eVga.com. Corp. GT218 [GeForce 210]
     Flags: bus master, fast devsel, latency 0, IRQ 125
     Memory at de000000 (32-bit, non-prefetchable)
     Memory at c0000000 (64-bit, prefetchable)
     Memory at d0000000 (64-bit, prefetchable)
     I/O ports at d000
     Expansion ROM at 000c0000 [disabled]
     Capabilities: [60] Power Management version 3
     Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
     Capabilities: [78] Express Endpoint, MSI 00
     Capabilities: [b4] Vendor Specific Information: Len=14 <?>
     Capabilities: [100] Virtual Channel
     Capabilities: [128] Power Budgeting <?>
     Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
     Kernel driver in use: vfio-pci
     04:00.1 Audio device: NVIDIA Corporation High Definition Audio Controller (rev a1)
     Subsystem: eVga.com. Corp. High Definition Audio Controller
     Flags: bus master, fast devsel, latency 0, IRQ 17
     Memory at df080000 (32-bit, non-prefetchable)
     Capabilities: [60] Power Management version 3
     Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
     Capabilities: [78] Express Endpoint, MSI 00
     Kernel driver in use: vfio-pci
     root@Tower:~# lsusb
     Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
     Bus 001 Device 004: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
     Bus 001 Device 005: ID 046d:c31c Logitech, Inc. Keyboard K120
     Bus 001 Device 003: ID 0461:4d15 Primax Electronics, Ltd Dell Optical Mouse
     Bus 001 Device 002: ID 0781:5406 SanDisk Corp. Cruzer Micro U3
     Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
     tower-diagnostics-20180425-1933.zip
  14. What is stubbing? I did notice in the unraid log that there were messages related to resetting low-speed USB devices for the mouse and keyboard. That only happened on the second launch of the VM but not on the 1st.
  15. All, I run a CentOS physical server that I am trying to migrate to my unRAID server as a VM to simplify things. I got the VM up and running just fine using VNC, but not when I use graphics card passthrough. Because this is a media server that needs graphics card passthrough, I'll need to get that part working.
     I've encountered a weird problem where, if I do a full restart of the entire UnRAID box, I can successfully launch the VM and the CentOS gnome desktop appears on my TV attached over HDMI to the nvidia graphics card. Media playback/decoding is good and everything seems to work. But, once the VM is already up, if I use the console to stop and restart the VM, I can't get the desktop to launch over the HDMI connection. Instead, I get a progress bar with the message "CentOS Linux 7[ OK ] Starting Virtualization daemon". The desktop never appears. Yet, if I force stop the VM and switch to VNC, the VM launches the desktop just fine. And, if I reboot the unRAID box completely, the VM launches the desktop over HDMI just fine. So, it seems it all works the first time, but not on any subsequent VM restart.
     Here are my particulars:
     unRAID Plus version 6.4.0
     Model: N/A
     M/B: ASRock - Z370 Pro4
     CPU: Intel® Core™ i7-8700K CPU @ 3.70GHz
     HVM: Enabled
     IOMMU: Enabled
     Cache: 384 kB, 1536 kB, 12288 kB
     Memory: 16 GB (max. installable capacity 64 GB)
     Network: eth0: 1000 Mb/s, full duplex, mtu 1500
     Kernel: Linux 4.14.13-unRAID x86_64
     OpenSSL: 1.0.2n
     Nvidia GeForce 210 graphics card
     Here is the VM XML:
     <domain type='kvm'>
       <name>CentOS</name>
       <uuid>a23726d2-fd1e-bd96-b30c-65e4105ea468</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="CentOS" icon="centos.png" os="centos"/>
       </metadata>
       <memory unit='KiB'>4194304</memory>
       <currentMemory unit='KiB'>4194304</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>1</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='10'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-q35-2.10'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/a23726d2-fd1e-bd96-b30c-65e4105ea468_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='1' threads='1'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/CentOS/vdisk1.img'/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/larry/CentOS/CentOS-7-x86_64-DVD-1511 (1).iso'/>
           <target dev='hda' bus='sata'/>
           <readonly/>
           <boot order='2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </controller>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x8'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x9'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0xa'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0xb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0xc'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
         </controller>
         <controller type='pci' index='6' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='6' port='0xd'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
         </controller>
         <controller type='pci' index='7' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='7' port='0xe'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
         </controller>
         <controller type='pci' index='8' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='8' port='0xf'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
         </controller>
         <filesystem type='mount' accessmode='passthrough'>
           <source dir='/mnt'/>
           <target dir='shares'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </filesystem>
         <interface type='bridge'>
           <mac address='52:54:00:e1:a1:4c'/>
           <source bridge='br0'/>
           <model type='virtio'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target port='0'/>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x0461'/>
             <product id='0x4d15'/>
           </source>
           <address type='usb' bus='0' port='1'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x046d'/>
             <product id='0xc31c'/>
           </source>
           <address type='usb' bus='0' port='2'/>
         </hostdev>
         <memballoon model='virtio'>
           <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
         </memballoon>
       </devices>
     </domain>
     Anything here look wrong? Or anything that might explain the weird issue when I restart the VM? Any insight would be appreciated. If I can't get past this problem, then there's no point in virtualizing this server.
  16. OK. I ran du on the two directories and found that lost+found ballooned by 500GB when copied by rsync. The rest of the directories all match. I'm not sure what to make of this, other than to think all is OK, and I'll continue with my conversion from RFS to XFS...
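     For anyone curious, the comparison was nothing fancier than a per-directory du on both disks and eyeballing the differences, roughly:
     du -sh /mnt/disk1/* | sort -h
     du -sh /mnt/disk5/* | sort -h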
  17. OK, I ran a file system check on the RFS disk and no problems were found. I also searched for soft links (find . -type l -ls) and found none, which I suppose could have explained how more data came out than appeared to be there. Per the wiki, I ran this rsync command to transfer the data from the original 1TB RFS disk1 to a much larger (and empty) XFS disk5:
     rsync -avPX /mnt/disk1/ /mnt/disk5/
     Here is the df -k output following the rsync:
     Filesystem      1K-blocks       Used  Available Use% Mounted on
     /dev/md1        976732736  742195824  234536912  76% /mnt/disk1
     /dev/md5       2928835740 1266835888 1661999852  44% /mnt/disk5
     Anyone have any idea how disk1 could produce enough data to result in the allocations shown for disk5? Somehow, 760GB resulted in 1.3TB. Could this be because the file systems are different? Larry
  18. PWM, I ran a "df -k" and the source drive has around 700GB allocated. I did not run du. Because of this, I was puzzled as to why the target drive was getting more data than the source! I didn't think to check for hard/soft links that could be pulling in other files located elsewhere. I guess this could explain it if rsync follows the links. I thought the default behavior for rsync was not to follow a softlink but rather to copy the softlink itself. Thanks for the tip. Larry
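     For reference, my understanding of rsync's defaults: with -a it copies symlinks as symlinks and only dereferences them if you add -L/--copy-links, while hard-linked files are copied as independent files unless you also pass -H. So a quick way to see whether links could be inflating the copy is something like:
     find /mnt/disk1 -type l | wc -l            # count symlinks
     find /mnt/disk1 -type f -links +1 | wc -l  # count files with more than one hard link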
  19. Thanks for the reply. I was just making sure because of the way the sections are titled. There's a subtle difference between "Drives currently running on v6" and "Drives formatted on v4 and now running in v6". Larry
  20. I'm currently on the latest v6 unRAID version, and I'm working through the conversion of my RFS drives to XFS. These drives were originally built on v4.4, IIRC. Anyway, I got 3 of 4 drives converted successfully, but the 4th drive is giving me weirdness. It's a 1TB drive with about 700GB allocated per the unRAID WebGui. When I rsync-ed this content to a 1TB swap drive, it blew up on space. I'm thinking there might be a problem with the file system, so I am planning to do a check and possibly a repair. So I started reading this: https://wiki.lime-technology.com/index.php?title=Check_Disk_Filesystems My question is this: If the RFS drives were originally built on v4.4, and now I've upgraded through v5 and on to the latest v6.x, which guidance do I follow? Should I just use the v6 WebGui as described in the link? Or do I need to drop down to the section entitled "Drives Formatted with ReiserFS using UnRAID v4"? The wiki is a tad ambiguous on this. Larry
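     If I end up doing this from the command line rather than the WebGui, my understanding is that the check itself boils down to starting the array in Maintenance mode and running reiserfsck against the md device for that disk slot, e.g. for disk1:
     reiserfsck --check /dev/md1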
  21. I'm having similar issues when I run Docker on unraid 6.3.5. shfs is at 100% CPU, and the only solution is a hard reboot, which triggers an 8-hour parity check. I've had 3 of these in the last 4 days.
  22. I'm running unRAID version 6.3.5 and all my drives are still formatted as ReiserFS. Based on advice I've seen on the forum, I'd like to switch to xfs. I have a spare drive pre-cleared and available to add to the array and start the process of juggling files around. I've been reading the wiki on file system conversion: https://wiki.lime-technology.com/File_System_Conversion Near the bottom, it talks about a mirroring procedure, where step 10 is about swapping drive assignments. But I think I saw elsewhere that this "trick" is not working on the latest 6.x releases. Can anyone comment on whether there are any known issues with using the mirroring technique on version 6.3.5? Thanks!
  23. Thanks for the reply. Can you explain how the new report accounts for the deep inspection that the old report provided? This one just seems to be at such a high level that it makes me wonder...