
Popular Content

Showing content with the highest reputation since 01/26/20 in Report Comments

  1. 3 points
  2. 1 point
    Yes. This may be a problem for certain Windows apps, because you might create a file named "MyFile" but Windows is permitted to open it as "MYFILE", which will result in "file not found". In 6.8.3 we added a per-share SMB config setting called "Case-sensitive names" which can take the values Auto, Yes, or Forced lower. Here's the Help text:

    > The default setting of **auto** allows clients that support case-sensitive filenames (Linux CIFSVFS) to tell the Samba server on a per-packet basis that they wish to access the file system in a case-sensitive manner (to support UNIX case-sensitive semantics). No Windows system supports case-sensitive filenames, so setting this option to **auto** is the same as setting it to No for them; however, the case of filenames passed by a Windows client will be preserved. This setting can result in reduced performance with very large directories because Samba must do a filename search and match on passed names.
    >
    > A setting of **Yes** means that files are created with the case that the client passes, and are only accessible using this same case. This will speed up very large directory access, but some Windows applications may not function properly with this setting. For example, if "MyFile" is created but a Windows app attempts to open "MYFILE" (which is permitted in Windows), it will not be found.
    >
    > A value of **Forced lower** is special: the case of all incoming client filenames, not just new filenames, will be set to lower-case. In other words, no matter what mixed-case name is created on the Windows side, it will be stored and accessed in all lower-case. This ensures all Windows apps will properly find any file regardless of case, but case will not be preserved in folder listings. Note this setting should only be configured for new shares.

    "Auto" selects the current behavior (and is actually the current default). If you need both faster directory operations because you have a huge number of files in a directory, AND you have to preserve Windows case-insensitive filename semantics, then you can use "Forced lower". But if you set this for an existing share, you will need to run a shell command to change all the existing file and directory names to all-lower case (see the sketch below). Then if you create "MyFile", it will be stored and will show up in directory listings as "myfile". If Windows tries to open "MYFILE", Samba will change the requested filename on-the-fly to "myfile" and thus be able to find it.
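    A minimal sketch of such a rename pass, assuming a share at /mnt/user/myshare (adjust the path; this is illustrative, not an official script). Run it while the share is idle, and note that names which collide after lower-casing (e.g. "MyFile" and "MYFILE") cannot both survive - "mv -n" skips those instead of overwriting:

        # Rename every file and directory under the share to all-lower-case.
        # -depth processes children before their parent directories, so
        # paths remain valid as directories themselves get renamed.
        find /mnt/user/myshare -depth -name '*[A-Z]*' | while read -r p; do
            dir=$(dirname "$p")
            base=$(basename "$p")
            lower=$(printf '%s' "$base" | tr '[:upper:]' '[:lower:]')
            [ "$base" != "$lower" ] && mv -n "$p" "$dir/$lower"
        done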
  3. 1 point
    Here's something to test. Please disable the use of your AMD GPU in the VM and use VNC, but keep the USB controller passed through. See if that works. If so, this is a GPU issue, not a USB controller issue.

    The reason this test is a good one to perform is that AMD-based GPUs are notorious for causing the exact issues being described here (the VM works fine until shutdown/restart). This is because most AMD GPUs don't support function level resets, which are vital for a good experience in a VM. The same could be the case with the USB controller, but that's harder to say, as we don't have as much experience with those devices.

    In general, when it comes to any VFIO / PCI device assignment, some hardware just plain doesn't work well with it. The VFIO project aims to do the best possible job it can to support generic PCI device assignment to VMs, but in some cases, because of the way the hardware was designed, it just doesn't work correctly, and there is little we can do here at LT to resolve these types of issues.

    That said, if things were working for you on a previous release that aren't working now, please be sure to include that detail in your posts, and be sure to mention which version of Unraid was the last known working version for your setup.
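    For what it's worth, you can check whether a device advertises function level reset from the console. A quick check, assuming your GPU sits at address 0a:00.0 (substitute the address shown in Tools > System Devices):

        # "FLReset+" in DevCap means the device supports function level
        # reset; "FLReset-" means it does not.
        lspci -vv -s 0a:00.0 | grep -i FLReset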
  4. 1 point
  5. 1 point
    First and foremost, I would recommend turning off the PCI ACS override, rebooting, and then posting your IOMMU groups here in a quote so we can see what you have passed through. Things to try:

    1) Enable "VFIO allow unsafe interrupts".
    2) Try to boot your VM with the PCI device passed through, then restart/shutdown.
    3) Try to boot your VM without any devices passed through (except the GPU), then restart/shutdown.
    4) Remove the PCI devices from the passthrough config.
    5) Try to boot your VM with the GPU passed through and the USB devices selected on your PCI USB controller, then restart/shutdown.

    Every time you restart or shut down your VM, keep a tab open with the system logs so that you can see what is going on. Then, next to each attempt (2, 3, 5), post your results so that we can compare and better understand your situation.
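    As an aside, the "VFIO allow unsafe interrupts" toggle in step 1 corresponds to a kernel module parameter, so you can confirm it actually took effect after a reboot. A small sketch, assuming the vfio_iommu_type1 module is loaded:

        # Prints Y if unsafe interrupts are allowed, N otherwise.
        cat /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts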
  6. 1 point
    Thanks, I corrected my post. It's only been a few weeks for me in the Unraid world, and I've already had to do many parity repairs, which were caused by hard resets. It seems like similar behaviour. What would be the best test to confirm it? Should I remove the passthrough of this USB PCI controller in the Unraid OS options and see if the Windows VM accepts a reboot? That usually never works for me. Be aware I also ran into a case where no VMs were working anymore; I had to reinstall Unraid.
  7. 1 point
    Hi @dboris, the PCI USB controller you are using is different from mine and the others that were mentioned in the posts linked by @peter_sm. So I'm guessing there is a bigger issue here regarding the passthrough of PCI USB controllers (maybe in the latest Unraid build?). Let's hope someone from @limetech will see this, collect all the data posted, and start debugging the issue. Crossing fingers 🤞
  8. 1 point
    Hello, thanks to moderator johnnie.black, who linked me to this topic; I can confirm I also have this bug. I'm also passing through a USB controller integrated on the motherboard, with an external USB hub plugged into it: ASMedia Technology Inc. ASM2142 USB 3.1 Host Controller | USB controller (08:00.0). This is the only USB 3.1 Gen 2 (red) port on the back of my Asus X399 Strix motherboard. I use a KVM switch to pass the mouse/keyboard/etc. from Unraid (on the USB 2.0 ports) to the Windows VM (on the USB 3.1 port). Here are the diagnostics and log captured while the W10 VM was running, just after a reboot: server-diagnostics-20200211-1503.zip Sincerely,
  9. 1 point
    Hi peter, I have read both your posts and the other guy's, and that's why I posted this here as a bug. Let's hope we're going to draw some attention and someone will actually help us resolve our issues.
  10. 1 point
    I do have this issue where Unraid freezes when shutting down a VM. I posted some more info, but there has been no response at all on it. It looks like more people have this issue, though, so maybe we can get more attention on it?
  11. 1 point
    Thanks, corrected in next version
  12. 1 point
    I tried with DirectIO yes, and with DirectIO yes plus case-insensitive yes; no difference (see attached results). Given that a disk share over SMB showed good performance, I am sceptical that it is an SMB issue; my money is on a performance problem in the shfs write path. DiskSpeedResult_Ubuntu_Cache.xlsx
  13. 1 point
    I can confirm this is a bug and it affects more than just adding a network controller. Will ask the dev team to look into this...
  14. 1 point
    Verified, yes that works. You only need to add a single line in SMB Extras:

        case sensitive = yes

    Note: "yes" == "true", and the value itself is case insensitive.
  15. 1 point
    Thanks. Using the keymap fr_be seems to be working just fine. After some searching, I also found that the nl_be keymap was indeed incomplete, to say the least.
  16. 1 point
    On another note, I wish there was a CE option where I could just send anonymous statistics and logging info to LimeTech. Everyone else wants it, and it seems like it would be very useful for LT, and they are the only ones I would do it for!
  17. 1 point
    @dalben Check how the shares for "appdata" and "system" are configured. I bet they don't exist on your cache device. Adjust your paths like the following:
  18. 1 point
    Early last year we spent a lot of time trying to figure out wtf was preventing higher resolution. At one time it did work correctly, and we ended up finding an older 'x' package (like xorg-server or xterm - can't remember which) where it did work. But then we needed to update those packages, and now it's stuck back at low res. In debugging this, one quickly veers down the X rabbit hole. We basically determined there were bigger issues to tackle and gave up devoting more time to this.
  19. 1 point
    @limetech First of all, thank you for taking the time to dig into this. From my much more limited testing, the issue seems to be a painful one to track down. I upgraded yesterday, and while this tweak solves listdir times, stat times for missing files in large directories are still bugged (observation 2 in the below post).

    For convenience, I reproduced it in Linux and wrote this simple script in bash:

        # unraid
        cd /mnt/user/myshare
        mkdir testdir
        cd testdir
        touch dummy{000000..200000}

        # client
        sudo mkdir /myshare
        sudo mount -t cifs -o username=guest //192.168.1.100/myshare /myshare
        while true; do
            start=$SECONDS
            stat /myshare/testdir/does_not_exist > /dev/null 2>&1
            end=$SECONDS
            echo "$((end-start)) "
        done

    On 6.8.x, each call takes 7-8s (vs 0-1s on previous versions), regardless of hard link support. The time complexity is nonlinear in the number of files (calls go to 15s if I increase the number of files by 50%, to 300k).
  20. 1 point
    Yeah I'm excited for the improved monitoring for temps and stuff.
  21. 1 point
    Any idea on when we will see 6.9-RC1? There have been enough updates/bug fixes and security issues that I don't feel comfortable rolling back to 6.8-RC7, but I really need the new kernel version.
  22. 1 point
    Solved for me. I do get some questionable driver-related messages:

        Jan 26 20:13:30 vesta kernel: igb: loading out-of-tree module taints kernel.
        Jan 26 20:13:30 vesta kernel: igb 0000:06:00.0 eth1: mixed HW and IP checksum settings.
        Jan 26 20:13:30 vesta kernel: igb 0000:07:00.0 eth2: mixed HW and IP checksum settings.
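    If you want to poke at the checksum offload state those messages refer to, ethtool can show it (just an inspection aid, not a fix; use whichever interface the message names):

        # List the checksum-related offload features for eth1.
        ethtool -k eth1 | grep -i checksum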
  23. 1 point
    Hard link support was added because certain docker apps would use them in the appdata share.
  24. 1 point
    Has to do with how POSIX-compliant we want to be. Here are the issues. If two dirents (directory entries) refer to the same file, then a 'stat' of either dirent should return: a) 'st_nlink' set to 2 in this case, and b) the same inode number in 'st_ino'. Prior to the 6.8 release, a) was correct, but b) was not (it returned an internal FUSE inode number associated with the dirent). This is incorrect behavior and can confuse programs such as 'rsync', but it fixes the NFS stale file handle issue.

    To fix this, you can tell FUSE to pass along the actual st_ino of the underlying file instead of its own FUSE inode number. This works except for 2 problems:

    1. If the file is physically moved to a different file system, the st_ino field changes. This causes NFS stale file handles.

    2. There is still a FUSE delay because it caches stat data (by default for 1 second). For example, if the kernel asks for stat data for a file (or directory), FUSE will ask the user-space filesystem to provide it. If the kernel then asks for stat data again for the same object and the timeout hasn't expired, FUSE will just return the value it read last time; if the timeout has expired, FUSE will again ask the user-space filesystem to provide it. Hence, in our example above, one could remove one of the dirents for a file and then immediately 'stat' the other dirent, and that stat data would not reflect the fact that 'st_nlink' is now 1 - it would still say 2. Obviously, whether this is an issue depends entirely on timing (the worst kind of bug).

    In the FUSE example code there is this comment in regard to hard link support:

        static void *xmp_init(struct fuse_conn_info *conn,
                              struct fuse_config *cfg)
        {
            (void) conn;
            cfg->use_ino = 1;
            cfg->nullpath_ok = 1;

            /* Pick up changes from lower filesystem right away. This is
               also necessary for better hardlink support. When the kernel
               calls the unlink() handler, it does not know the inode of
               the to-be-removed entry and can therefore not invalidate
               the cache of the associated inode - resulting in an
               incorrect st_nlink value being reported for any remaining
               hardlinks to this inode. */
            cfg->entry_timeout = 0;
            cfg->attr_timeout = 0;
            cfg->negative_timeout = 0;

            return NULL;
        }

    But the problem is that the kernel is very "chatty" when it comes to directory listings. Basically, it re-'stat's the entire parent directory tree each time it wants to 'stat' a file returned by READDIR. If we have 'attr_timeout' set to 0, then each one of those 'stat's results in a round trip from kernel space to user space (and processing done by the user-space filesystem). I have set it up so that if you enable hard link support, those timeouts are set as above, and hence you see a huge slowdown because of all the overhead. I could remove the code that sets the timeouts to 0, but as I mentioned, I'm not sure what "bugs" this might cause for other users - our policy is: better to be slow than to be wrong. So this is kinda where it stands. We have ideas for fixing it, but that will involve modifying FUSE, which is not a small project.
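    To make the dirent/inode point concrete, here is a tiny illustration (hypothetical paths; works against any POSIX filesystem):

        cd /mnt/user/someshare            # hypothetical user share
        echo hello > file_a
        ln file_a file_b                  # second dirent for the same file
        stat -c 'links=%h inode=%i' file_a file_b
        # POSIX semantics: both lines must report links=2 and the SAME
        # inode number. Prior to 6.8, shfs returned a different internal
        # FUSE inode for each dirent, which is what could confuse
        # programs like rsync.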
  25. 1 point
    There is not a single cheap drive out there I can back all my critical data up with. I currently use three Unraid servers for various tasks. The sheer amount of data I want to back up is over 100TB; the absolutely-must-not-lose portion is approximately half of that (business data for a home-based business), and the other half is stuff I could easily download again if I needed to.

    Also, not all the data would fry, since my main server is also connected to an external disk shelf, where I plan to grow my array pool to about 24 drives and then start stacking my cache pool in a decent RAID mode that is supported by BTRFS and Unraid. I have a total of 36 bays available for that one server (12 in the server and 24 in the disk shelf). The servers use server-grade cases and PSUs, not some cheap off-the-shelf PSU, and power is managed by the server's backplane, so I would probably lose the backplane before the drives.

    I am totally aware I need an off-site solution, and I am very, very slowly getting all my data to G Suite, but with the 7MB/sec speed cap (to not go over the 750GB/day upload limit), it is going to take a while to upload the 76TB in one server, the 18 in another, and the 7 or so in the last server.
  26. 1 point
    The corruption occurred as a result of failing a read-ahead I/O operation with "BLK_STS_IOERR" status. In the Linux block layer, each READ or WRITE can have various modifier bits set. In the case of a read-ahead you get READ|REQ_RAHEAD, which tells the I/O driver this is a read-ahead. In this case, if there are insufficient resources at the time the request is received, the driver is permitted to terminate the operation with BLK_STS_IOERR status. There is an example of this in the Linux md/raid5 driver. In the case of Unraid, it can definitely happen under heavy load that a read-ahead comes along when there are no 'stripe buffers' immediately available; in that case, instead of making the calling process wait, it terminated the I/O. It has worked this way for years.

    When this problem first happened, there were conflicting reports of the config in which it happened. My first thought was an issue in the user share file system. Eventually I ruled that out, and my next thought was cache vs. array. Some reports seemed to indicate it happened with all databases on cache - but I think those reports were mistaken, for various reasons. Ultimately I decided the issue had to be with the md/unraid driver. Our big problem was that we could not reproduce the issue, but others seemed to be able to reproduce it with ease.

    Honestly, thinking failing read-aheads could be the issue was a "hunch" - it was either that or some logic in the scheduler that merged I/O's incorrectly (there were kernel bugs related to this, with some pretty extensive patches, and I thought maybe the developer had missed a corner case - this is why I added a config setting for which scheduler to use). This resulted in a release with those 'md_restrict' flags to determine if one of those was the culprit, and what-do-you-know, not failing read-aheads makes the issue go away.

    What I suspect is that this is a bug in SQLite - I think SQLite is using direct I/O (bypassing the page cache) and issuing its own read-aheads, and their logic to handle a failing read-ahead is broken. But I did not follow that rabbit hole - too many other problems to work on.