josetann

Recent posts

  1. I've read about issues with btrfs, but haven't seen anything explaining WHY. I've only used btrfs in a Proxmox setup with two 1TB NVMe drives in a mirror; with unRAID I've only used xfs (except when unRAID was ReiserFS-only, but we don't talk about that). So you're saying that if I use at least somewhat quality hardware (currently enterprise drives, though I have used SMR consumer drives in the past) and have ECC RAM (so I shouldn't have RAM issues), I shouldn't see the issues others are reporting with btrfs? Do you think it's more reliable than zfs on semi-quality hardware?
  2. It may still be the recommendation. I've had literally zero problems using xfs, and I have no problem recommending it. I just really like the idea of being able to take snapshots (I had a nightmare where one of my family members, who knew better, clicked something they shouldn't, and now everyone's mad at me because I should have known about ransomware). I've been a bit lazy when it comes to proper backups, and snapshots would help mitigate some of the risk resulting from said laziness. Plus it's cool, and I like cool.
  3. Currently I have two mechanical 14TB drives (neither SMR), one data and one parity. I'm doing a new build, something a tiny bit more power efficient than my Z420, and I'm going to replace one of the 14TB drives with a 15.36TB U.2 SSD. Don't worry, I've already been told that I should under no circumstances use an SSD as a drive in an unRAID array, but I'm going to anyway 😀

     Currently using xfs, but I'd really, REALLY like to have native snapshot support. I figure now is as good a time as any to change filesystems, so, what would you do? Switch to the new hotness ZFS, because it's new and awesome and features!!!! Switch to BTRFS, which isn't as flashy but is a bit more mature, at least when it comes to unRAID support. Or stay with XFS, because snapshots are for wimps!

     I wouldn't be looking to set up a ZFS pool as cache, so I know a lot of ZFS's awesomeness would be lost on my setup (just a single drive/vdev in the array, plus a mechanical drive for parity). I also don't need ZFS's ability to eat RAM to speed up access, since the data drive would be an enterprise SSD. Yes, writes would be abysmally slow, but that hasn't been a concern yet (read speeds are another issue; I'm having the infamous macOS SMB issues, but I've stopped messing with that until the new system is up and running).

     One minute I'm leaning ZFS, because it really was made for this, even if I'm severely limiting its potential; the next minute I'm leaning toward BTRFS, because I envision much lower system resource usage and better overall stability (not because BTRFS is better than ZFS, but because there's been more time to work out the kinks between unRAID and BTRFS). Or perhaps I should just rip out the old data drive, throw in the new U.2 drive, do a rebuild, and ask again next decade.
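For concreteness, here's roughly what snapshotting looks like on the two candidates. This is only a sketch: the paths and names (`/mnt/disk1`, `tank/data`, the snapshot names) are made up, and it assumes the data lives on a btrfs subvolume or a zfs dataset.

```shell
# Hypothetical layout: /mnt/disk1 is a btrfs subvolume, tank/data is a zfs dataset.

# btrfs: create a read-only snapshot of a subvolume
btrfs subvolume snapshot -r /mnt/disk1 /mnt/disk1/.snapshots/2023-01-01

# zfs: snapshot a dataset, list snapshots, roll back if needed
zfs snapshot tank/data@2023-01-01
zfs list -t snapshot
zfs rollback tank/data@2023-01-01
```

Either way, read-only snapshots are exactly the ransomware mitigation mentioned above: an encrypted share doesn't touch the snapshot's blocks.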
  4. I have a Windows 10 VM that's working decently after some troubleshooting (I needed to enable MSI mode on the nvidia graphics/sound, otherwise powering off or rebooting the VM could bring the entire unRAID server down). I have an issue that I've worked around, but it still bugs me (you know how it is). I bought a Sonnet Allegro Pro USB 3.0 PCIe card with four dedicated controllers (one controller per USB port). One is passed through to the Windows 10 VM, with a cheap four-port hub plugged in, so anything I plug into the hub is connected directly to the VM. I needed it this way so I could use a USB amp (when it's turned off it's not visible to the OS, i.e. unplugged). Anyway: the Windows 10 VM hangs at the BIOS screen (where it shows it passed the memory check) if the IR receiver is plugged in. It doesn't matter whether the IR receiver is plugged into the USB hub or directly into the dedicated USB port; it just hangs there. I can unplug it, boot, then plug the receiver back in, and it works fine until I reboot. There's no issue with the other USB devices (currently a Logitech Unifying USB adapter and an SMSL amp). Simple workaround: I have the IR receiver plugged into one of the shared USB ports and assigned to the Windows 10 VM. Works fine, but it still bugs me, you know?

     IR receiver: TopSeed Technology Corp. eHome Infrared Transceiver (1784:0006)
     Dedicated USB controller: Fresco Logic FL1100 USB 3.0 Host Controller | USB controller (09:00.0)

     Syslog when booting with the IR receiver plugged into the hub (VM hangs):

     Oct 11 19:12:36 Tower kernel: vgaarb: device changed decodes: PCI:0000:04:00.0,olddecodes=io+mem,decodes=io+mem:owns=none
     Oct 11 19:12:36 Tower kernel: br0: port 3(vnet1) entered blocking state
     Oct 11 19:12:36 Tower kernel: br0: port 3(vnet1) entered disabled state
     Oct 11 19:12:36 Tower kernel: device vnet1 entered promiscuous mode
     Oct 11 19:12:36 Tower kernel: br0: port 3(vnet1) entered blocking state
     Oct 11 19:12:36 Tower kernel: br0: port 3(vnet1) entered forwarding state
     Oct 11 19:12:38 Tower kernel: vfio_ecap_init: 0000:04:00.0 hiding ecap 0x19@0x900
     Oct 11 19:12:38 Tower kernel: vfio-pci 0000:09:00.0: enabling device (0400 -> 0402)

     Syslog when booting without the IR receiver plugged in (VM boots; I can plug it in once the Windows boot screen displays):

     Oct 11 19:13:34 Tower kernel: vgaarb: device changed decodes: PCI:0000:04:00.0,olddecodes=io+mem,decodes=io+mem:owns=none
     Oct 11 19:13:34 Tower kernel: br0: port 3(vnet1) entered blocking state
     Oct 11 19:13:34 Tower kernel: br0: port 3(vnet1) entered disabled state
     Oct 11 19:13:34 Tower kernel: device vnet1 entered promiscuous mode
     Oct 11 19:13:34 Tower kernel: br0: port 3(vnet1) entered blocking state
     Oct 11 19:13:34 Tower kernel: br0: port 3(vnet1) entered forwarding state
     Oct 11 19:13:36 Tower kernel: vfio_ecap_init: 0000:04:00.0 hiding ecap 0x19@0x900
     Oct 11 19:13:36 Tower kernel: vfio-pci 0000:09:00.0: enabling device (0400 -> 0402)
     Oct 11 19:13:47 Tower kernel: kvm: zapping shadow pages for mmio generation wraparound
     Oct 11 19:13:47 Tower kernel: kvm: zapping shadow pages for mmio generation wraparound

     I don't see anything useful. If there's any additional information you'd like, just let me know.

     Edit: In case it helps, here's a syslog of the Windows 10 VM successfully booting with the current setup (dedicated USB port passed through, plus the IR receiver passed through separately):

     Oct 11 19:38:16 Tower kernel: vgaarb: device changed decodes: PCI:0000:04:00.0,olddecodes=io+mem,decodes=io+mem:owns=none
     Oct 11 19:38:16 Tower kernel: br0: port 3(vnet1) entered blocking state
     Oct 11 19:38:16 Tower kernel: br0: port 3(vnet1) entered disabled state
     Oct 11 19:38:16 Tower kernel: device vnet1 entered promiscuous mode
     Oct 11 19:38:16 Tower kernel: br0: port 3(vnet1) entered blocking state
     Oct 11 19:38:16 Tower kernel: br0: port 3(vnet1) entered forwarding state
     Oct 11 19:38:18 Tower kernel: vfio_ecap_init: 0000:04:00.0 hiding ecap 0x19@0x900
     Oct 11 19:38:18 Tower kernel: vfio-pci 0000:09:00.0: enabling device (0400 -> 0402)
     Oct 11 19:38:20 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
     Oct 11 19:38:20 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
     Oct 11 19:38:26 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
     Oct 11 19:38:27 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
     Oct 11 19:38:27 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
     Oct 11 19:38:30 Tower kernel: kvm: zapping shadow pages for mmio generation wraparound
     Oct 11 19:38:30 Tower kernel: kvm: zapping shadow pages for mmio generation wraparound
     Oct 11 19:38:31 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
     Oct 11 19:38:31 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
     Oct 11 19:38:32 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
     Oct 11 19:38:32 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
     Oct 11 19:38:35 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
     Oct 11 19:38:36 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
     Oct 11 19:38:36 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
     Oct 11 19:38:37 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
     Oct 11 19:38:37 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
     Oct 11 19:38:37 Tower kernel: usb 1-1.6: reset full-speed USB device number 4 using ehci-pci
  5. Was the USB card in MSI mode? No. I didn't have any other PCIe devices passed through, just the graphics card. Enabling MSI mode seems to have fixed it: I just performed a poweroff/poweron of the Windows VM after 2.5 days of uptime, no issues. In fact, the last time I powered it down was to add a USB card (specifically, the Sonnet Allegro Pro USB 3.0 PCIe card) and pass through one of its USB ports (each has its own controller). Working flawlessly so far.
  6. OK, so the troubleshooting continued. It took a while, since I had to let the machine sit for so long between reboots/shutdowns. It looks like enabling MSI for the graphics card (for both the graphics device AND the sound device; it still crashed with MSI enabled for just the graphics device) may have fixed the issue. I still have the occasional Kodi crash (I have an app that monitors it and restarts it if necessary; crashes are rare enough not to be a real bother) and some sound issues (I start to get weird "popping" that isn't fixed by a VM powerdown or reboot, only by unplugging/replugging the HDMI cable), but it's working for the most part. I hope that passing through a dedicated USB port and using an amplifier with USB input will fix that. I'll update if I notice it crashing again. So far it has worked after leaving it overnight (but with Hyper-V disabled; that was one of the things I was testing), one reboot after 36 hours with just MSI enabled (Hyper-V left enabled), and one poweroff/restart after 36 hours with MSI enabled. Do note that I did get a crash with Hyper-V off and MSI enabled for the graphics device but MSI off for the sound device (same graphics card, a GT 1030).
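For anyone finding this later: MSI mode is toggled per device in the Windows registry. A sketch of the fragment involved; the device instance path below is made up, so look up your card's own path first (Device Manager, device Properties, Details tab, "Device instance path"), and do this for both the graphics function and its HDMI audio function.

```reg
Windows Registry Editor Version 5.00

; Hypothetical device instance path; substitute the one from your own
; Device Manager. MSISupported=1 switches the device from line-based
; interrupts to Message Signaled Interrupts.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_1D01\4&0&0008\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001
```

A reboot of the VM is needed for the change to take effect.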
  7. Running unRAID 6.3.5. I have a Windows 10 VM that works pretty well (a few small issues, but nothing major). For some reason, if it's been up for a while, shutting down or rebooting the Windows 10 VM will cause it to hang, the network goes crazy (the router looks like a Christmas tree, I lose my internet connection, and I have to unplug the network cable to the unRAID server to get the router working again), and the server becomes more and more inaccessible (the webgui is the first to go, and eventually the entire thing locks up). I've tried issuing a reboot and a shutdown from the command line; it won't actually do so. I end up having to do a hard poweroff, which is not good. It will not exhibit the behaviour shortly after booting, i.e. I can sit here and reboot it over and over and it won't have an issue. It has to be online for an indeterminate amount of time, which makes troubleshooting a bit difficult. I believe the issue stems from the GT 1030 I have passed through to the Windows 10 guest, but I can't be certain. The other VM uses a virtual graphics card, and there's absolutely no problem when I shut it down or reboot.

     Here's what you've all been waiting for, the end of the syslog. You can ignore the first four lines regarding mover; they're just to show that nothing important happened before the 14:36:22 mark. Also, the first two lines about USB resets are normal (when it properly reboots, those messages are repeated multiple times).

     Sep 26 03:40:01 Tower root: mover started
     Sep 26 03:40:01 Tower root: mover finished
     Sep 27 03:40:01 Tower root: mover started
     Sep 27 03:40:01 Tower root: mover finished
     Sep 27 14:36:22 Tower kernel: usb 3-2.2: reset full-speed USB device number 3 using xhci_hcd
     Sep 27 14:36:22 Tower kernel: usb 1-1.5: reset full-speed USB device number 4 using ehci-pci
     Sep 27 14:37:23 Tower kernel: INFO: rcu_preempt detected stalls on CPUs/tasks:
     Sep 27 14:37:23 Tower kernel: 5-...: (1 GPs behind) idle=f91/140000000000000/0 softirq=3578633/3578634 fqs=14959
     Sep 27 14:37:23 Tower kernel: (detected by 12, t=60002 jiffies, g=5962201, c=5962200, q=10759)
     Sep 27 14:37:23 Tower kernel: Task dump for CPU 5:
     Sep 27 14:37:23 Tower kernel: qemu-system-x86 R running task 0 9930 1 0x00000008
     Sep 27 14:37:23 Tower kernel: ffff881fa2d20cc0 ffff881fdf157b00 ffff881fd2753fc0 ffff880f97a10000
     Sep 27 14:37:23 Tower kernel: 0000000000000000 ffffc9000d08fb88 ffffffff8167c00e 0000000000000002
     Sep 27 14:37:23 Tower kernel: ffff881fa2d20cc0 7fffffffffffffff ffff881fa2d20cc0 ffffc9000d08fd20
     Sep 27 14:37:23 Tower kernel: Call Trace:
     Sep 27 14:37:23 Tower kernel: [<ffffffff8167c00e>] ? __schedule+0x2b1/0x46a
     Sep 27 14:37:23 Tower kernel: [<ffffffff8167c24b>] schedule+0x84/0x95
     Sep 27 14:37:23 Tower kernel: [<ffffffff8147872c>] ? qi_submit_sync+0x2b2/0x2d0
     Sep 27 14:37:23 Tower kernel: [<ffffffff8147f255>] ? modify_irte+0xd9/0x10f
     Sep 27 14:37:23 Tower kernel: [<ffffffff8147f2af>] ? intel_irq_remapping_deactivate+0x24/0x26
     Sep 27 14:37:23 Tower kernel: [<ffffffff81087f79>] ? __irq_domain_deactivate_irq+0x28/0x39
     Sep 27 14:37:23 Tower kernel: [<ffffffff81087f87>] ? __irq_domain_deactivate_irq+0x36/0x39
     Sep 27 14:37:23 Tower kernel: [<ffffffff810891d2>] ? irq_domain_deactivate_irq+0x18/0x25
     Sep 27 14:37:23 Tower kernel: [<ffffffff81086dc8>] ? irq_shutdown+0x4f/0x5c
     Sep 27 14:37:23 Tower kernel: [<ffffffff81084b7a>] ? __free_irq+0x10d/0x20a
     Sep 27 14:37:23 Tower kernel: [<ffffffff81084d23>] ? free_irq+0x69/0x78
     Sep 27 14:37:23 Tower kernel: [<ffffffff814ed9f6>] ? vfio_intx_set_signal+0x32/0x190
     Sep 27 14:37:23 Tower kernel: [<ffffffff814ee135>] ? vfio_intx_disable+0x33/0x56
     Sep 27 14:37:23 Tower kernel: [<ffffffff814ee17d>] ? vfio_pci_set_intx_trigger+0x25/0x141
     Sep 27 14:37:23 Tower kernel: [<ffffffff814ee640>] ? vfio_pci_set_irqs_ioctl+0x87/0xa4
     Sep 27 14:37:23 Tower kernel: [<ffffffff814ecc42>] ? vfio_pci_ioctl+0x5d1/0x9d5
     Sep 27 14:37:23 Tower kernel: [<ffffffff81069ae7>] ? wake_up_q+0x51/0x51
     Sep 27 14:37:23 Tower kernel: [<ffffffff814e8c8c>] ? vfio_device_fops_unl_ioctl+0x1e/0x28
     Sep 27 14:37:23 Tower kernel: [<ffffffff81130112>] ? vfs_ioctl+0x13/0x2f
     Sep 27 14:37:23 Tower kernel: [<ffffffff81130642>] ? do_vfs_ioctl+0x49c/0x50a
     Sep 27 14:37:23 Tower kernel: [<ffffffff8113921f>] ? __fget+0x72/0x7e
     Sep 27 14:37:23 Tower kernel: [<ffffffff811306ee>] ? SyS_ioctl+0x3e/0x5c
     Sep 27 14:37:23 Tower kernel: [<ffffffff8167f537>] ? entry_SYSCALL_64_fastpath+0x1a/0xa9
  8. Clicked the "compute" link for three different shares; here's what I get a couple of hours later:

     ps -ef | egrep -i share_size
     root  1554  1123  0 17:26 ?      00:00:00 [share_size] <defunct>
     root  1570  1123  0 17:26 ?      00:00:00 [share_size] <defunct>
     root  1586  1123  0 17:26 ?      00:00:00 [share_size] <defunct>
     root  2191  2176  0 18:54 pts/0  00:00:00 egrep -i share_size

     All three share_size processes are <defunct> (zombies: they've exited but were never reaped by their parent). I'm running 5.0b4 off a flash drive.
  9. I started with an old version of unRAID that was built on Slackware 12.0. When I upgraded to 4.5.4, I had to upgrade the Slackware version to 12.1, then 12.2, and finally 13.0. Once everything was running perfectly, I decided to wipe the drive and put on the 64-bit version of 13.1. That also worked. Could have just been my particular setup, though.
  10. Using Proxmox VE. I posted about it on their forums, and they decided it was a bug in kvm. I found an old bug report that said it had been fixed, and the version of kvm I'm using should have the patch; I guess it depends on whether Proxmox VE got kvm from Debian (the distro they're built on top of) or straight from the source.

      Yup, virtio is compiled in. The OS can see /dev/vdX, but the unRAID GUI cannot, so I have to use SCSI emulation (I could use IDE instead, but I'm already using four drives and want to add more in the future). I put in a request for /dev/vdX support (at least for emhttp to recognize that the drive exists; I can deal with compiling support into the kernel), so we'll see.

      Part of the reason I thought it was so slow is that it was recomputing parity instead of just checking it. Checking parity ran much faster (still slower than natively, though). Single-disk reads are still fast: 60MB/s vs 70MB/s native. unRAID can't spin the drives down, but I can set the timeout on the host OS ("hdparm -S242 /dev/sda" sets the standby timeout to one hour).

      If you build a system with VT-d support, you can simply attach entire drive controllers (and other hardware devices) directly to the guest; you could probably even boot a stock unRAID setup directly off the USB drive. It has to be supported by both the CPU and the motherboard. If you bought a high-end Intel system in the past couple of years, or a mid-range system in the past year, you might have it. If you bought an AMD system, you're probably out of luck (they do exist, though).

      Anyway, I'm fairly happy with the current setup. Now I can mess with unRAID without bringing everything else down. I have two other VMs running (PBX in a Flash and ZoneMinder). Very cool stuff.
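A note on that hdparm value, since the encoding is not obvious: per hdparm(8), -S values 1-240 mean multiples of 5 seconds, while 241-251 mean (value - 240) units of 30 minutes, which is why -S242 comes out to one hour. A small helper function (the function name is mine) makes the mapping concrete:

```shell
# Decode an hdparm -S standby value into seconds, per hdparm(8):
#   1..240   -> value * 5 seconds        (5 s up to 20 min)
#   241..251 -> (value - 240) * 30 min   (30 min up to 5.5 h)
# (hdparm_standby_seconds is a name made up for this sketch.)
hdparm_standby_seconds() {
  local s=$1
  if [ "$s" -ge 1 ] && [ "$s" -le 240 ]; then
    echo $(( s * 5 ))
  elif [ "$s" -ge 241 ] && [ "$s" -le 251 ]; then
    echo $(( (s - 240) * 30 * 60 ))
  else
    echo 0   # 0 disables standby; other values are special-cased by hdparm
  fi
}

hdparm_standby_seconds 242   # prints 3600, i.e. one hour
```

So "hdparm -S241" would give 30 minutes, and "hdparm -S251" the maximum 5.5 hours in this range.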
  11. That's the same udev version that Slackware 13.0 is running. I think 4.5.4 was built off Slackware-current at the time of release (after 13.0, but before 13.1).
  12. What Slackware version are you using? I think 4.5.4 is somewhere between 13.0 and the newly released 13.1. I have it running on 13.0 now. I did have it running on the 64-bit version of 13.1, but had a few issues (which I also had on 32-bit 13.0 until I redid everything and made changes one by one). If you're running the 32-bit version of 13.0, I have a kernel download you could use that already has everything set up for you. It is a bit bloated (I eventually gave up and enabled nearly every SCSI option to get it to work under kvm), but it should get the job done.
  13. KVM is supposed to have an option to set the drive's serial number, but I can't get it to work. I bit the bullet and did a restore (or rather, an initconfig). The parity check is taking much longer than usual, about ten times as long. Not sure why; it could be because I'm using SCSI instead of IDE emulation. If I could figure out how to convince unRAID that parity is valid, this would almost be a workable solution. Waiting over 33 hours for it to rebuild parity is crazy.
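(The option in question is qemu/KVM's serial= property on -drive. A minimal command-line sketch, not a full VM invocation; the image path and serial string are made up:)

```shell
# Attach a disk with an explicit serial number that the guest can see.
# /var/lib/vz/images/101/disk0.raw and WD2EXAMPLE123 are hypothetical;
# substitute your own image path and the original drive's serial.
qemu-system-x86_64 \
  -m 1024 \
  -drive file=/var/lib/vz/images/101/disk0.raw,if=scsi,serial=WD2EXAMPLE123
```

If the guest sees the same serial as the physical drive did, unRAID should treat it as the same disk and not demand a parity rebuild.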
  14. Re: unRAID 64bit ...

      I decided to attempt this as well, and it caused a lot of headaches. Whatever config it started up with, it sticks with until a reboot. E.g. my parity drive wasn't recognized; I added it back, but it kept insisting that my parity drive was not installed, even though disk.cfg was written with the correct information. I had to reboot before it would accept the change (even killing emhttp and restarting it wouldn't work, only a reboot). I'm letting it do a parity check as we speak. I'll try to update if I come across any other problems or figure out a solution.