jonp

Administrators
  • Content Count

    5871
  • Joined

  • Last visited

  • Days Won

    16

jonp last won the day on March 14

jonp had the most liked content!

Community Reputation

279 Very Good

About jonp

  • Rank
    Advanced Member

  • Gender
    Male
  • URL
    http://lime-technology.com
  • Location
    Chicago, IL

  1. I asked a meeseeks and he said you should maybe try upgrading to current stable, see if that fixes it, then upgrade to 6.7 if you're feeling up to it. It's definitely easier than improving Jerry's golf game. Sent from my Pixel 3 XL using Tapatalk
  2. We really need you to update to the latest release to see whether the issue is specific to your setup or a bug that has since been resolved. Sent from my Pixel 3 XL using Tapatalk
  3. jonp

    Slow NvME Cache Pool

    Something else worth noting: any tests involving hardware-assisted RAID aren't really something we can help with. We can only support what we code and manage, which is the software-defined array and cache pool. Did you run the tests writing directly to the cache pool rather than through a user share?
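    If not, a quick comparison is easy to run from the console. Here's a minimal sketch, assuming a cache-backed share named TestShare (the share name and sizes are placeholders, not anything from your setup):

        # Write 4 GiB through the user share path (includes shfs overhead)
        dd if=/dev/zero of=/mnt/user/TestShare/tf1 bs=1M count=4096 conv=fdatasync
        # Write the same amount directly to the cache pool, bypassing shfs
        dd if=/dev/zero of=/mnt/cache/TestShare/tf2 bs=1M count=4096 conv=fdatasync
        # Clean up the test files afterwards
        rm /mnt/user/TestShare/tf1 /mnt/cache/TestShare/tf2

    If the second number is dramatically higher, the gap is user share overhead rather than the pool itself.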
  4. jonp

    Slow NvME Cache Pool

    That's a heck of a lot of testing you've done, but I suspect you're not going to get much feedback from other users in here. The vast majority of Unraid users run on a 1Gbps network with traditional HDDs in the array and SSDs in a software-managed btrfs cache pool. No RAID controllers; no 10GbE, 40GbE, or 100GbE; mostly no NVMe (though more users are starting to leverage it). All of that said, here's what I can tell you about what you're seeing in testing:

    1) User shares add overhead. When you write to a share, you're going through the Linux path /mnt/user/ShareX. This means every disk participating in a share has its data for that share presented when you navigate to that path. If you navigate to /mnt/diskX/ShareX instead, you're circumventing the user share file system in favor of direct disk access (the same happens if you go to /mnt/cache/ShareX). This alleviates a lot of overhead and improves data speeds. You can access disks in the array or the cache without going through /mnt/user by turning on Disk Shares, though you lose the flexibility of writing through user shares as a result.

    2) We've tuned the OS for 10GbE performance, but you should still test it. Running iperf is the best way to verify that you're getting full 10GbE speeds through the network (a quick example follows this post). There are many guides online for how to do this, and it's a vital step in ensuring maximum performance. We haven't tested or tuned anything beyond 10GbE, so I'm not sure what else you or we may need to do to improve things there.

    3) You're definitely on the bleeding edge here. As mentioned at the start of the post, a hardware configuration like this is not typical for 99.999% of the Unraid user community. We don't market or sell Unraid for use with your type of hardware, though we don't do anything to prevent you from using it either. Definitely open to feedback on how we can improve, though.
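    Since iperf comes up in point 2, here's a minimal sketch of such a test, assuming iperf3 is installed on both machines and using 192.168.1.10 as a stand-in for the server's address (both are placeholders):

        # On the Unraid server: start a listener
        iperf3 -s
        # On the client: run a 30-second test with 4 parallel streams
        iperf3 -c 192.168.1.10 -t 30 -P 4

    On a healthy 10GbE link the summary should land somewhere around 9.4 Gbits/sec; numbers far below that point at the network rather than the storage.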
  5. jonp

    Gtx1660 Passthrough, R720, LOW FPS

    I think the issue here is that you are using a dual socket system. If you're going to do that, you need to make sure that the logical CPUs assigned to each guest VM are aligned with the PCIe devices you are assigning to them. You can use lstopo to determine this from the command line, though I don't think we have a detailed guide just yet. Something we will work up in the future I'm sure. That said, if you do some basic Internet searching on lstopo, vfio, and dual socket CPUs, you should find the guides you need to better tune your virtual machine(s).
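    In the meantime, here's a rough sketch of the kind of checks I mean, run from the console. The PCI address 0000:03:00.0 is just a placeholder for wherever your GPU actually sits, and numactl is only there if you've installed it:

        # Show the full topology: which cores and PCIe devices hang off each socket
        lstopo
        # Or just the NUMA layout of the CPUs (if numactl is available)
        numactl --hardware
        # Cross-check which NUMA node a specific PCIe device belongs to
        cat /sys/bus/pci/devices/0000:03:00.0/numa_node

    The goal is to pin the VM's vCPUs to cores on the same node the GPU reports, so device traffic never has to cross the socket interconnect.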
  6. Couldn't agree more with @bonienl here. Using consistent and reliable hardware is definitely key to a well-functioning system. Cheaping out on parts is obviously a viable way to save money, but it comes with more headaches as you upgrade.
  7. jonp

    580 rx won't passthrough but 1070 does.

    Also, generally speaking, we don't recommend using AMD GPUs with VMs in Unraid. The problems you're having aren't unique to you; they're common to the majority of AMD devices. All the best, Jon
  8. jonp

    580 rx won't passthrough but 1070 does.

    Still using i440fx for your AMD VM? Try Q35 please.
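    If you're not sure what the VM is currently set to, here's a quick check from the console (the VM name is a placeholder):

        # Show the machine type in the VM's libvirt definition
        virsh dumpxml "Windows 10" | grep -i machine
        # Switch it by editing the XML, or change the Machine field on the VM's settings page
        virsh edit "Windows 10"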
  9. jonp

    When installing a new drive, Thanks Unraid.

    Hey, sorry for not replying to this sooner. We really appreciate the feedback and support. It's good to know we've built something that's worth it to you!
  10. jonp

    Old/ Test forum still live?

    Thanks for pointing this out. We'll get it sorted.
  11. I deleted that part of my post because it was adversarial, and we try our hardest to keep these forums exactly the opposite of that. So, did we confirm this was a configuration problem rather than a software bug? Update: seriously asking; I'm curious how your MTU setting worked previously but not now.
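    For reference, a couple of quick checks that would settle it (eth0 and the target address are placeholders):

        # Confirm the MTU actually in effect on the interface
        ip link show eth0
        # Verify a jumbo-frame path end to end: 8972 = 9000 minus 28 bytes of
        # IP/ICMP headers; -M do forbids fragmentation, so oversized frames fail loudly
        ping -c 3 -M do -s 8972 192.168.1.10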
  12. Based on these last two replies, I think the issue that @BennTech is having is clearly specific to his setup.
  13. jonp

    Slow NvME Cache Pool

    Did you figure this out? Sent from my Pixel 3 XL using Tapatalk
  14. @BennTech I get that you're frustrated because you're troubleshooting an issue, but let's get a few things straight here. Is it possible that your ReadyNAS is outdated and can't support the updates to the SMB protocol in the software/kernel that ship in the latest builds of Unraid?

    QUESTION TO ANYONE: Does anyone here use Unassigned Devices + remote shares with two servers and have it working fine? How about anyone with two Unraid servers on the latest build?

    Bottom line: if you or others can reproduce this issue consistently when using remote shares to multiple platforms (not just your ReadyNAS), it may elevate the priority of the issue with us, as we know plenty of folks depend on this plugin. So far, I only see you complaining, which leads me to believe this is likely an issue specific to the ReadyNAS you're using. Maybe we update our OS/kernel/packages far more frequently than ReadyNAS does? At this point, we need to see evidence that this affects more than one user with one specific setup (ReadyNAS + Duplicati backups over SMB/NFS). Can you reproduce the issue just by manually copying a large volume of data from a Windows PC to a share? What about over NFS from another Linux client? (A concrete example follows this post.) If you can show this issue has a wider impact on the community, I will look closer and try to replicate it; but if you can only show the problem with your specific setup, you're just going to have to wait for future updates from both us and ReadyNAS (we update kernels and packages, including Samba, regularly) to see if that resolves it.

    Also, what about using the ReadyNAS to mount shares from Unraid instead of the other way around?
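    To be concrete about the kind of manual test I'm asking for, something along these lines from a Linux client would do (the server names, exports, and credentials are all placeholders):

        # Create a mount point and mount the remote share over SMB...
        mkdir -p /mnt/test
        mount -t cifs //readynas/backup /mnt/test -o username=admin
        # ...or over NFS
        mount -t nfs readynas:/data/backup /mnt/test
        # Then copy a large file and watch for stalls, drops, or errors
        rsync --progress /mnt/user/isos/big-test-file.bin /mnt/test/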
  15. You can see in your logs the time stamps for when each event occurs. When you started the array:

        Mar 26 07:57:58 UNRAID emhttpd: Mounting disks...
        Mar 26 07:57:58 UNRAID emhttpd: shcmd (37): /sbin/btrfs device scan
        Mar 26 07:57:58 UNRAID root: Scanning for Btrfs filesystems
        Mar 26 07:57:58 UNRAID emhttpd: shcmd (38): mkdir -p /mnt/disk1
        Mar 26 07:57:58 UNRAID emhttpd: shcmd (39): mount -t btrfs -o noatime,nodiratime /dev/md1 /mnt/disk1
        Mar 26 07:57:58 UNRAID kernel: BTRFS info (device md1): disk space caching is enabled
        Mar 26 07:57:58 UNRAID kernel: BTRFS info (device md1): has skinny extents
        Mar 26 07:57:58 UNRAID avahi-daemon[2539]: Service "UNRAID" (/services/ssh.service) successfully established.
        Mar 26 07:57:58 UNRAID avahi-daemon[2539]: Service "UNRAID" (/services/smb.service) successfully established.
        Mar 26 07:57:58 UNRAID avahi-daemon[2539]: Service "UNRAID" (/services/sftp-ssh.service) successfully established.
        Mar 26 07:57:58 UNRAID avahi-daemon[2539]: Service "UNRAID-AFP" (/services/afp.service) successfully established.
        Mar 26 07:57:58 UNRAID ntpd[2012]: receive: Unexpected origin timestamp 0xe0449247.322158e8 does not match aorg 0000000000.00000000 from email@removed.com xmt 0xe0449246.df2ec879
        Mar 26 07:57:58 UNRAID ntpd[2012]: receive: Unexpected origin timestamp 0xe0449247.32239fe0 does not match aorg 0000000000.00000000 from email@removed.com xmt 0xe0449246.e01d315a
        Mar 26 07:57:59 UNRAID emhttpd: shcmd (40): btrfs filesystem resize max /mnt/disk1
        Mar 26 07:57:59 UNRAID root: Resize '/mnt/disk1' of 'max'
        Mar 26 07:57:59 UNRAID kernel: BTRFS info (device md1): new size for /dev/md1 is 4000786976768
        Mar 26 07:57:59 UNRAID emhttpd: shcmd (41): mkdir -p /mnt/disk2
        Mar 26 07:57:59 UNRAID emhttpd: /mnt/disk2 mount error: Unsupported partition layout
        Mar 26 07:57:59 UNRAID emhttpd: shcmd (42): umount /mnt/disk2
        Mar 26 07:57:59 UNRAID root: umount: /mnt/disk2: not mounted.
        Mar 26 07:57:59 UNRAID emhttpd: shcmd (42): exit status: 32
        Mar 26 07:57:59 UNRAID emhttpd: shcmd (43): rmdir /mnt/disk2
        Mar 26 07:57:59 UNRAID emhttpd: shcmd (44): sync
        Mar 26 07:58:00 UNRAID emhttpd: shcmd (45): mkdir /mnt/user
        Mar 26 07:58:00 UNRAID emhttpd: shcmd (46): /usr/local/sbin/shfs /mnt/user -disks 6 -o noatime,big_writes,allow_other -o direct_io -o remember=0 |& logger
        Mar 26 07:58:00 UNRAID shfs: stderr redirected to syslog
        Mar 26 07:58:00 UNRAID emhttpd: shcmd (48): /usr/local/sbin/update_cron
        Mar 26 07:58:00 UNRAID root: Delaying execution of fix common problems scan for 10 minutes

    The entire mounting of the disks took less than 2 seconds. After that, it starts services:

        Mar 26 07:58:00 UNRAID emhttpd: Starting services...
        Mar 26 07:58:00 UNRAID emhttpd: shcmd (51): /etc/rc.d/rc.samba restart
        Mar 26 07:58:02 UNRAID root: Starting Samba: /usr/sbin/nmbd -D
        Mar 26 07:58:02 UNRAID root: /usr/sbin/smbd -D
        Mar 26 07:58:02 UNRAID root: /usr/sbin/winbindd -D
        Mar 26 07:58:02 UNRAID emhttpd: shcmd (67): /usr/local/sbin/mount_image '/mnt/user/system/docker/docker.img' /var/lib/docker 20
        Mar 26 07:58:02 UNRAID kernel: BTRFS: device fsid 4735afec-9878-471a-aea9-3472810d93c0 devid 1 transid 87 /dev/loop2
        Mar 26 07:58:02 UNRAID kernel: BTRFS info (device loop2): disk space caching is enabled
        Mar 26 07:58:02 UNRAID kernel: BTRFS info (device loop2): has skinny extents
        Mar 26 07:58:02 UNRAID root: Resize '/var/lib/docker' of 'max'
        Mar 26 07:58:02 UNRAID kernel: BTRFS info (device loop2): new size for /dev/loop2 is 21474836480
        Mar 26 07:58:02 UNRAID emhttpd: shcmd (69): /etc/rc.d/rc.docker start
        Mar 26 07:58:02 UNRAID root: starting dockerd ...
        Mar 26 07:58:02 UNRAID dbus-daemon[2971]: [session uid=0 pid=2968] Activating service name='org.a11y.Bus' requested by ':1.0' (uid=0 pid=2964 comm="firefox --profile /usr/share/mozilla/firefox/9n35r")
        Mar 26 07:58:02 UNRAID dbus-daemon[2971]: [session uid=0 pid=2968] Successfully activated service 'org.a11y.Bus'
        Mar 26 07:58:45 UNRAID kernel: logitech-hidpp-device 0003:046D:101B.0007: HID++ 1.0 device connected.
        Mar 26 07:58:49 UNRAID avahi-daemon[2539]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
        Mar 26 07:58:49 UNRAID avahi-daemon[2539]: New relevant interface docker0.IPv4 for mDNS.
        Mar 26 07:58:49 UNRAID avahi-daemon[2539]: Registering new address record for 172.17.0.1 on docker0.IPv4.
        Mar 26 07:58:49 UNRAID kernel: IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
        Mar 26 08:00:01 UNRAID crond[2038]: exit status 3 from user root /usr/local/sbin/mover &> /dev/null
        Mar 26 08:01:34 UNRAID emhttpd: shcmd (83): /usr/local/sbin/mount_image '/mnt/user/system/libvirt/libvirt.img' /etc/libvirt 1
        Mar 26 08:01:34 UNRAID kernel: BTRFS: device fsid b8382bc5-1fba-4181-9b43-f4d0e8bbc7f4 devid 1 transid 23 /dev/loop3
        Mar 26 08:01:34 UNRAID kernel: BTRFS info (device loop3): disk space caching is enabled
        Mar 26 08:01:34 UNRAID kernel: BTRFS info (device loop3): has skinny extents
        Mar 26 08:01:34 UNRAID root: Resize '/etc/libvirt' of 'max'
        Mar 26 08:01:34 UNRAID kernel: BTRFS info (device loop3): new size for /dev/loop3 is 1073741824
        Mar 26 08:01:34 UNRAID emhttpd: shcmd (85): /etc/rc.d/rc.libvirt start
        Mar 26 08:01:34 UNRAID root: Starting virtlockd...
        Mar 26 08:01:34 UNRAID root: Starting virtlogd...
        Mar 26 08:01:34 UNRAID root: Starting libvirtd...

    Looks like all services came up in about three and a half minutes. Now, if this was the first time you started the array and you are using all spinning drives (no SSDs) and no cache drive, this is typical behavior, and you're getting minimal performance at this point. The best solution is to configure a cache drive, or better yet a cache pool; that way the services improve because they're no longer bound by the write-speed limits of a parity-protected array. In addition, the first time you start the array you are also formatting disks, which slows the system down until the formats complete. I don't see any real issues here, so maybe consider adding a cache drive and seeing if that improves things a bit.
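    If you want to pull this timeline out of a syslog yourself, here's a rough sketch (the grep pattern simply matches the milestone lines quoted above):

        # Show the array-start and service-start milestones with their timestamps
        grep -E 'Mounting disks|Starting services|rc.docker start|rc.libvirt start' /var/log/syslog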