
Leaderboard


Popular Content

Showing content with the highest reputation since 01/19/20 in Report Comments

  1. 3 points
  2. 3 points
    Has to do with how POSIX-compliant we want to be. Here are the issues:
    If 2 dirents (directory entries) refer to the same file, then a 'stat' of either dirent should return: a) 'st_nlink' set to 2 in this case, and b) the same inode number in 'st_ino'. Prior to the 6.8 release, a) was correct but b) was not (it returned an internal FUSE inode number associated with each dirent). This is incorrect behavior and can confuse programs such as 'rsync', but it avoids the NFS stale file handle issue.
    To fix this, you can tell FUSE to pass along the actual st_ino of the underlying file instead of its own FUSE inode number. This works except for 2 problems:
    1. If the file is physically moved to a different file system, the st_ino field changes. This causes NFS stale file handles.
    2. There is still a FUSE delay because FUSE caches stat data (for 1 second by default). For example, if the kernel asks for stat data for a file (or directory), FUSE will ask the user-space filesystem to provide it. If the kernel asks for stat data for the same object again before the timeout has expired, FUSE just returns the value it read last time; once the timeout has expired, FUSE will again ask the user-space filesystem. Hence, in our example above, one could remove one of the dirents for a file and then immediately 'stat' the other dirent, and that stat data would not reflect the fact that 'st_nlink' is now 1 - it would still say 2. Obviously, whether this is an issue depends entirely on timing (the worst kind of bug).
    In the FUSE example code there is this comment regarding hard link support:

        static void *xmp_init(struct fuse_conn_info *conn, struct fuse_config *cfg)
        {
            (void) conn;
            cfg->use_ino = 1;
            cfg->nullpath_ok = 1;

            /* Pick up changes from lower filesystem right away. This is
               also necessary for better hardlink support. When the kernel
               calls the unlink() handler, it does not know the inode of
               the to-be-removed entry and can therefore not invalidate
               the cache of the associated inode - resulting in an
               incorrect st_nlink value being reported for any remaining
               hardlinks to this inode. */
            cfg->entry_timeout = 0;
            cfg->attr_timeout = 0;
            cfg->negative_timeout = 0;

            return NULL;
        }

    But the problem is that the kernel is very "chatty" when it comes to directory listings: it re-'stat's the entire parent directory tree each time it wants to 'stat' a file returned by READDIR. If 'attr_timeout' is set to 0, each of those 'stat's results in a round trip from kernel space to user space (plus processing by the user-space filesystem). I have set it up so that if you enable hard link support, those timeouts are set as above, and hence you see a huge slowdown because of all the overhead. I could remove the code that sets the timeouts to 0, but as I mentioned, I'm not sure what "bugs" that might cause for other users - our policy is: better to be slow than to be wrong. So this is kinda where it stands. We have ideas for fixing it, but they will involve modifying FUSE, which is not a small project.
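    The POSIX hardlink behavior described above can be checked from the shell. This is a minimal sketch (not from the original post; assumes GNU coreutils 'stat' on Linux): two dirents hard-linked to the same file must report the same st_ino and an st_nlink of 2.

```shell
# Create two dirents (hard links) for one file and compare their stat data.
tmpdir=$(mktemp -d)
touch "$tmpdir/a"
ln "$tmpdir/a" "$tmpdir/b"       # second dirent referring to the same inode

ino_a=$(stat -c %i "$tmpdir/a")  # st_ino of dirent 'a'
ino_b=$(stat -c %i "$tmpdir/b")  # st_ino of dirent 'b'
nlink=$(stat -c %h "$tmpdir/a")  # st_nlink (hard link count)

[ "$ino_a" = "$ino_b" ] && echo "same st_ino"   # prints "same st_ino"
echo "st_nlink=$nlink"                          # prints "st_nlink=2"

rm -r "$tmpdir"
```

    On a share where b) above is broken, the two st_ino values would differ; with a nonzero attr_timeout, removing one dirent and immediately re-checking st_nlink on the other can show the stale cached value.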
  3. 2 points
    Just an FYI guys, I think we have this sorted thanks to @limetech! We were able to recreate the issue and in an internal build today, we think we may have it squashed. Stay tuned...
  4. 1 point
  5. 1 point
    First and foremost, I would recommend turning off the PCI ACS override, rebooting, then posting your IOMMU groups here in a quote so we can see what you have passed through. Things to try:
    1) Enable "VFIO allow unsafe interrupts".
    2) Try to boot your VM with the PCI device passed through, then restart/shutdown.
    3) Try to boot your VM without any devices passed through (except the GPU), then restart/shutdown.
    4) Remove the PCI devices from the passthrough config.
    5) Try to boot your VM with the GPU passed through and the USB devices selected on your PCI USB controller, then restart/shutdown.
    Every time you restart or shut down your VM, keep a tab open with the system logs so that you can see what is going on. Then, next to each try (2, 3, 5), post your results so that we can compare and better understand your situation.
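    For the "post your IOMMU groups" step, a common way to list them is a short shell loop over sysfs. This is a sketch assuming the standard Linux sysfs layout (the path only exists when the IOMMU is enabled in the BIOS and kernel):

```shell
# Print each IOMMU group and the PCI addresses of the devices it contains.
for g in /sys/kernel/iommu_groups/*; do
    [ -d "$g" ] || continue           # skip if no IOMMU groups exist
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        [ -e "$d" ] && echo "  ${d##*/}"
    done
done
scan_done=1                           # marker: the scan loop completed
```

    Piping each device address through 'lspci -nns' gives the human-readable names that are most useful to post.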
  6. 1 point
    Thanks, I corrected my post. It's been a few weeks for me in the Unraid world. I've had to do so many parity repairs, which are caused by hard resets. It seems like similar behaviour. What would be the best test to confirm it? Should I remove passthrough of this USB PCI controller in the Unraid OS options and see whether the Windows VM agrees to reboot? That usually never works for me. Be aware I also ran into a case where no VMs were working anymore; I had to reinstall Unraid.
  7. 1 point
    Hi @dboris, The PCI USB controller you are using is different from mine and from the others mentioned in the posts linked by @peter_sm. So I'm guessing there is a bigger issue here regarding passthrough of PCI USB controllers (maybe in the latest Unraid build?). Let's hope someone from @limetech will see this, collect all the data posted, and start debugging the issue. Crossing fingers 🤞
  8. 1 point
    Hello, Thanks to mod johnnie.black, who linked me to this topic, I can confirm I also have this bug. I'm also passing through a USB port integrated into the MB, with an external USB hub plugged into it: ASMedia Technology Inc. ASM2142 USB 3.1 Host Controller | USB controller (08:00.0). This is the only USB 3.1 Gen 2 red port on the back of my Asus X399 Strix motherboard. I use a KVM switch to pass mouse/keyboard/etc. from Unraid (other USB 2.0 ports) to the Windows VM (one USB 3.1). Here are the diagnostics and log captured while W10 was running, just after a reboot: server-diagnostics-20200211-1503.zip Sincerely,
  9. 1 point
    Hi peter, I have read both your posts and the other guy's, and that's why I posted this here as a bug. Let's hope we're going to draw some attention and someone will actually help us resolve our issues.
  10. 1 point
    I do have this issue where Unraid freezes when shutting down a VM. And here is some more info, but there has been no response at all on this issue. It looks like more people have this, so maybe we can get more attention on it?
  11. 1 point
    Thanks, corrected in next version
  12. 1 point
    I tried with DirectIO yes, and with DirectIO yes plus case insensitive yes; no difference (see attached results). Given that a disk share over SMB showed good performance, I am sceptical that it is an SMB issue; my money is on a performance problem in the shfs write path. DiskSpeedResult_Ubuntu_Cache.xlsx
  13. 1 point
    I can confirm this is a bug and it affects more than just adding a network controller. Will ask the dev team to look into this...
  14. 1 point
    Verified, yes that works. You only need to add a single line in SMB Extras:
        case sensitive = yes
    Note: "yes" is equivalent to "true", and the value itself is case-insensitive.
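    For context, this is how such a parameter looks in a Samba config file; the share name and path below are hypothetical, and on Unraid the single line above in SMB Extras is all that is needed:

```ini
[myshare]
    path = /mnt/user/myshare
    # Samba booleans accept yes/no, true/false, 1/0 interchangeably
    case sensitive = yes
```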
  15. 1 point
    Thanks. Using the fr_be keymap seems to be working just fine. After some searching, I also found that the nl_be keymap was indeed incomplete, to say the least.
  16. 1 point
    On another note, I wish there was a CE option where I could just send anonymous statistics and logging info to LimeTech. Everyone else wants it, and it seems like it would be very useful for LT, and they are the only ones I would do it for!
  17. 1 point
    @dalben Check how the shares for "appdata" and "system" are configured. I bet they don't exist on your cache device. Adjust your paths like the following:
  18. 1 point
    Early last year we spent a lot of time trying to figure out wtf was preventing higher resolution. At one time it did work correctly, and we ended up finding an older 'x' package (like xorg-server or xterm - can't remember which) where it did work. But then we needed to update those packages, and now it's stuck back at low res. In debugging this, one quickly veers down the X rabbit hole. We basically determined there were bigger issues to tackle and gave up devoting more time to this.
  19. 1 point
    @limetech First of all, thank you for taking the time to dig into this. From my much more limited testing, the issue seems to be a painful one to track down. I upgraded yesterday, and while this tweak solves listdir times, stat times for missing files in large directories are still bugged (observation 2 in the post below). For convenience, I reproduced it in Linux with this simple bash script:

        # unraid
        cd /mnt/user/myshare
        mkdir testdir
        cd testdir
        touch dummy{000000..200000}

        # client
        sudo mkdir /myshare
        sudo mount -t cifs -o username=guest //192.168.1.100/myshare /myshare
        while true; do
            start=$SECONDS
            stat /myshare/testdir/does_not_exist > /dev/null 2>&1
            end=$SECONDS
            echo "$((end-start)) "
        done

    On 6.8.x, each call takes 7-8s (vs 0-1s on previous versions), regardless of hard link support. The time complexity is nonlinear in the number of files (calls go to 15s if I increase the number of files by 50%, to 300k).
  20. 1 point
    Any idea on when we will see 6.9-RC1? There have been enough updates/bug fixes and security issues that I don't feel comfortable rolling back to 6.8-RC7, but I really need the new kernel version.
  21. 1 point
    Solved for me. I do get some questionable driver-related messages:

        Jan 26 20:13:30 vesta kernel: igb: loading out-of-tree module taints kernel.
        Jan 26 20:13:30 vesta kernel: igb 0000:06:00.0 eth1: mixed HW and IP checksum settings.
        Jan 26 20:13:30 vesta kernel: igb 0000:07:00.0 eth2: mixed HW and IP checksum settings.
  22. 1 point
    Hard link support was added because certain docker apps would use them in the appdata share.
  23. 1 point
    I made a capture of my system. Issues start at the line "BUG: unable to handle kernel NULL pointer...":

        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6faa
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6fab
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6fab
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6fac
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6fac
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6fad
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6fad
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6f68
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6f79
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6f6a
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6f6b
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6f6c
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6f6d
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6ffc
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6ffc
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6ffd
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6ffd
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6faf
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Seeking for: PCI ID 8086:6faf
        Jan 23 18:19:58 vesta kernel: EDAC MC0: Giving out device to module sb_edac controller Broadwell SrcID#0_Ha#0: DEV 0000:ff:12.0 (INTERRUPT)
        Jan 23 18:19:58 vesta kernel: EDAC sbridge: Ver: 1.1.2
        Jan 23 18:19:58 vesta kernel: BTRFS: device fsid 15cea296-4aa8-4a45-b03f-1d4bd2587221 devid 4 transid 37220822 /dev/sdg1
        Jan 23 18:19:58 vesta kernel: BTRFS: device fsid 15cea296-4aa8-4a45-b03f-1d4bd2587221 devid 3 transid 37220822 /dev/sdf1
        Jan 23 18:19:58 vesta kernel: BTRFS: device fsid 15cea296-4aa8-4a45-b03f-1d4bd2587221 devid 1 transid 37220822 /dev/sdh1
        Jan 23 18:19:58 vesta kernel: BTRFS: device fsid 15cea296-4aa8-4a45-b03f-1d4bd2587221 devid 2 transid 37220822 /dev/sdk1
        Jan 23 18:19:58 vesta kernel: BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
        Jan 23 18:19:58 vesta kernel: PGD 1ff7e0d067 P4D 1ff7e0d067 PUD 1fbaaff067 PMD 0
        Jan 23 18:19:58 vesta kernel: Oops: 0000 [#1] SMP NOPTI
        Jan 23 18:19:58 vesta kernel: CPU: 2 PID: 2532 Comm: modprobe Not tainted 4.19.94-Unraid #1
        Jan 23 18:19:58 vesta kernel: Hardware name: Supermicro X10SRA-F/X10SRA-F, BIOS 2.1a 10/24/2018
        Jan 23 18:19:58 vesta kernel: RIP: 0010:kernfs_name_hash+0x9/0x6d
        Jan 23 18:19:58 vesta kernel: Code: 48 33 04 25 28 00 00 00 74 05 e8 c3 3d ea ff 48 83 c4 60 4c 89 f0 5b 5d 41 5c 41 5d 41 5e 41 5f c3 48 83 c9 ff 31 c0 49 89 f8 <f2> ae 48 f7 d1 8d 79 ff 31 c9 48 39 cf 74 1f 49 0f be 04 08 48 ff
        Jan 23 18:19:58 vesta kernel: RSP: 0018:ffffc90006b27cb8 EFLAGS: 00010246
        Jan 23 18:19:58 vesta kernel: RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffffffffffff
        Jan 23 18:19:58 vesta kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
        Jan 23 18:19:58 vesta kernel: RBP: ffff889fcab65198 R08: 0000000000000000 R09: ffffffff811ac100
        Jan 23 18:19:58 vesta kernel: R10: ffffea007f2ad900 R11: ffff889fff075001 R12: ffff889fcab65348
        Jan 23 18:19:58 vesta kernel: R13: 0000000000000000 R14: ffff889ff3a6c190 R15: 0000000000000000
        Jan 23 18:19:58 vesta kernel: FS: 0000150ae6c37b80(0000) GS:ffff889fff680000(0000) knlGS:0000000000000000
        Jan 23 18:19:58 vesta kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        Jan 23 18:19:58 vesta kernel: CR2: 0000000000000000 CR3: 0000001ff3b5a002 CR4: 00000000003606e0
        Jan 23 18:19:58 vesta kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
        Jan 23 18:19:58 vesta kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
        Jan 23 18:19:58 vesta kernel: Call Trace:
        Jan 23 18:19:58 vesta kernel: kernfs_find_ns+0x5e/0xa8
        Jan 23 18:19:58 vesta kernel: kernfs_remove_by_name_ns+0x49/0x75
        Jan 23 18:19:58 vesta kernel: remove_files+0x38/0x5a
        Jan 23 18:19:58 vesta kernel: sysfs_remove_group+0x55/0x6f
        Jan 23 18:19:58 vesta kernel: sysfs_remove_groups+0x28/0x2f
        Jan 23 18:19:58 vesta kernel: device_remove_attrs+0x33/0x63
        Jan 23 18:19:58 vesta kernel: device_del+0x18d/0x2ed
        Jan 23 18:19:58 vesta kernel: cdev_device_del+0x10/0x26
        Jan 23 18:19:58 vesta kernel: posix_clock_unregister+0x1c/0x41
        Jan 23 18:19:58 vesta kernel: ptp_clock_unregister+0x69/0x6d
        Jan 23 18:19:58 vesta kernel: igb_ptp_stop+0x1a/0x44 [igb]
        Jan 23 18:19:58 vesta kernel: igb_remove+0x39/0xfd [igb]
        Jan 23 18:19:58 vesta kernel: pci_device_remove+0x36/0x8e
        Jan 23 18:19:58 vesta kernel: device_release_driver_internal+0x144/0x225
        Jan 23 18:19:58 vesta kernel: driver_detach+0x6d/0x77
        Jan 23 18:19:58 vesta kernel: bus_remove_driver+0x60/0x7c
        Jan 23 18:19:58 vesta kernel: pci_unregister_driver+0x1c/0x7f
        Jan 23 18:19:58 vesta kernel: __se_sys_delete_module+0x10f/0x1ac
        Jan 23 18:19:58 vesta kernel: ? exit_to_usermode_loop+0x55/0xa2
        Jan 23 18:19:58 vesta kernel: do_syscall_64+0x57/0xf2
        Jan 23 18:19:58 vesta kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
        Jan 23 18:19:58 vesta kernel: RIP: 0033:0x150ae6d72047
        Jan 23 18:19:58 vesta kernel: Code: 73 01 c3 48 8b 0d 49 7e 0c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 b8 b0 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 19 7e 0c 00 f7 d8 64 89 01 48
        Jan 23 18:19:58 vesta kernel: RSP: 002b:00007ffc617bb338 EFLAGS: 00000206 ORIG_RAX: 00000000000000b0
        Jan 23 18:19:58 vesta kernel: RAX: ffffffffffffffda RBX: 0000000000427cf0 RCX: 0000150ae6d72047
        Jan 23 18:19:58 vesta kernel: RDX: 0000000000000000 RSI: 0000000000000800 RDI: 0000000000427d58
        Jan 23 18:19:58 vesta kernel: RBP: 0000000000427d58 R08: 1999999999999999 R09: 0000000000000000
        Jan 23 18:19:58 vesta kernel: R10: 0000150ae6de9ac0 R11: 0000000000000206 R12: 0000000000000000
        Jan 23 18:19:58 vesta kernel: R13: 0000000000000000 R14: 0000000000427d58 R15: 0000000000425480
        Jan 23 18:19:58 vesta kernel: Modules linked in: sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel pcbc aesni_intel aes_x86_64 crypto_simd cryptd glue_helper intel_cstate intel_uncore intel_rapl_perf i2c_i801 ipmi_ssif igb(-) i2c_algo_bit i2c_core smartpqi ahci libahci scsi_transport_sas wmi pcc_cpufreq ipmi_si button [last unloaded: atlantic]
        Jan 23 18:19:58 vesta kernel: CR2: 0000000000000000
        Jan 23 18:19:58 vesta kernel: ---[ end trace bfd77ca2011f6527 ]---
        Jan 23 18:19:58 vesta kernel: RIP: 0010:kernfs_name_hash+0x9/0x6d
        Jan 23 18:19:58 vesta kernel: Code: 48 33 04 25 28 00 00 00 74 05 e8 c3 3d ea ff 48 83 c4 60 4c 89 f0 5b 5d 41 5c 41 5d 41 5e 41 5f c3 48 83 c9 ff 31 c0 49 89 f8 <f2> ae 48 f7 d1 8d 79 ff 31 c9 48 39 cf 74 1f 49 0f be 04 08 48 ff
        Jan 23 18:19:58 vesta kernel: RSP: 0018:ffffc90006b27cb8 EFLAGS: 00010246
        Jan 23 18:19:58 vesta kernel: RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffffffffffff
        Jan 23 18:19:58 vesta kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
        Jan 23 18:19:58 vesta kernel: RBP: ffff889fcab65198 R08: 0000000000000000 R09: ffffffff811ac100
        Jan 23 18:19:58 vesta kernel: R10: ffffea007f2ad900 R11: ffff889fff075001 R12: ffff889fcab65348
        Jan 23 18:19:58 vesta kernel: R13: 0000000000000000 R14: ffff889ff3a6c190 R15: 0000000000000000
        Jan 23 18:19:58 vesta kernel: FS: 0000150ae6c37b80(0000) GS:ffff889fff680000(0000) knlGS:0000000000000000
        Jan 23 18:19:58 vesta kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        Jan 23 18:19:58 vesta kernel: CR2: 0000000000000000 CR3: 0000001ff3b5a002 CR4: 00000000003606e0
        Jan 23 18:19:58 vesta kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
        Jan 23 18:19:58 vesta kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
        Jan 23 18:19:59 vesta rsyslogd: [origin software="rsyslogd" swVersion="8.1908.0" x-pid="2507" x-info="https://www.rsyslog.com"] start

    After the call trace the system hangs, and I need to do a power off / power on to get a "proper" boot up.
  24. 1 point
    Thanks, these confirm my suspicions: the quad NIC isn't being detected after a reboot, so there's nothing LT can do about this. Look for a BIOS update (and/or reset the BIOS to defaults), or try a different PCIe slot if available.