Dav3

Members
  • Posts: 44
  • Joined
  • Last visited

  1. This may or may not be relevant: I switched from unraid to ubuntu for various reasons, among which was that I was experiencing this SMB slowdown issue. So this is the exact same hardware but somewhat different software. I'm experiencing the exact same problem in ubuntu that I had in unraid: copying between a Windows VM and a Linux SMB (Samba) share, all on the same host, I see lousy read & write throughput, typically around 15MB/s on hardware that's easily capable of >50MB/s. Typically I see a burst of up to 200MB/s for a few seconds, then a crash to 15MB/s. I assume the burst is write-caching. It may be an entirely different issue, but it does seem to have followed me. I've just lived with the problem; in my case I'm hoping to move to vfio at some point and bypass the whole SMB bottleneck. I don't want to confuse or sidetrack the issue, but to me this suggests it's a more general upstream performance issue and not unraid-specific. Has anyone skimmed the ubuntu/debian bug lists for similar performance degradation complaints?
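One way to separate that burst-then-crash pattern from the write-caching illusion is a quick dd test straight to the share. A minimal sketch, assuming you substitute a path on your own SMB mount (the TARGET default here is just an example):

```shell
# Hedged sketch: measure sustained write throughput. Point TARGET at a
# file on the SMB mount, then at the backing disk, and compare.
TARGET="${1:-/tmp/throughput_probe.bin}"   # example path, substitute your own

# conv=fsync makes dd flush to stable storage before reporting, so the
# rate it prints is the sustained rate, not the first few cached seconds.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync 2>&1 | tail -n 1

rm -f "$TARGET"
```

If the dd number to the backing disk is healthy but the number through the mount is ~15MB/s, that points at the SMB/loopback path rather than the storage.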
  2. Unfortunately for this and other reasons I also determined that Unraid just wasn't suited for my needs. I moved to Ubuntu with ZFS and use Looking Glass to simplify my GPU-passthru VM setup. Although it's far from perfect, it does everything I need.
  3. I don't really have much to add other than to mention that I'm also having serious performance regressions post-upgrade.
  4. Yeah, I spent about three hours trying to troubleshoot the issue late last night. Results were inconsistent and contradictory so I decided not to bring it up here, just live with it in the hope it resolves, or roll back to 6.7 in desperation. 🤷‍♂️ Paying for those three hours of lost sleep today. Hope the gurus figure it out. I'm kinda done losing sleep over this stuff.
  5. Well, I stand by the statement that while switching from cache=no to cache=prefer works as one would expect, switching from cache=no to cache=yes does not. It's a corner case, but an initial sync should at least be offered. I think it's that initial transition that's confusing, not the behavior going forward. Beyond that, a little criticism is probably deserved. It's just been more than a little frustrating spending epic time troubleshooting issues in my attempt to move my development workstation platform off of VMware Workstation and onto unraid. I had expected to get it done over the Christmas break. It's not a learning curve, it's been a learning climb up the face of El Capitan. Far too many times have I been sucked into the linux weeds, fun topics like dumping VGA BIOSes, iptables & go scripts. As it is, I'm in multi-boot hell trying to get work done in the day, 'transitioning' at night & on 'breaks', and not enjoying the experience, although when the virtualization stuff works it really shows great potential. My wife & kids (& sleep!) have been the ultimate losers. In hindsight, I think my mistake was trying to shoehorn what is (or was) essentially designed to be a media bulk-storage NAS product into being a workstation virtualization platform. And my multi-boot 'solution', expecting a short transition period, was woefully mistaken. Nuf said, what I think obviously isn't going to change anything, it's an unproductive religious argument, so on to more productive things...
  6. I see what you're saying, and I did read the forums, but the built-in help often escapes my attention. Also the use of 'cache' terminology is a bit of a misnomer; to me cache=yes implies an initial synchronization similar to cache=prefer. As usual I over-thought the issue. As apparently others have. I'd suggest surfacing the help a little better, perhaps in a sidebar table element.
  7. I was having the same problem but @Silverbolt's solution fixed it. Thanks @Silverbolt!! +10 UPDATE: Argh, I was wrong. May be a caching thing - I saw normal performance and assumed it was fixed and canceled the copy. And posted the above. Then I restarted the copy and it was normal until the cancel-point, when transfer rate dropped to the 5MB/s I'm seeing. Which sucks. Did anyone ever figure out a fix? Do I need to roll back to 6.6.x?
  8. A note to whoever takes care of and feeds the mover script: I can see in the /usr/local/sbin/mover script that it only looks for 'prefer', not 'yes', to initiate a sync from array to cache.
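Going by that reading of the script, the decision reduces to a string match on the share's cache setting. A tiny sketch of that logic as described above (the function name is mine, not the mover's, and the real script's conditions may differ by version):

```shell
# Hypothetical condensation of the observed behavior: only cache="prefer"
# triggers an array -> cache sync; cache="yes" does not.
mover_syncs_to_cache() {
  case "$1" in
    prefer) echo "sync" ;;   # mover copies existing files array -> cache
    *)      echo "skip" ;;   # "yes" only routes NEW writes to the cache
  esac
}

mover_syncs_to_cache prefer   # sync
mover_syncs_to_cache yes      # skip
```

This would explain the earlier observation that flipping a share from no to yes produces no mover activity while no to prefer does.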
  9. Oddly, I see 'mount' returning '/mnt/disk1/system/libvirt/libvirt.img on /etc/libvirt type btrfs (rw)' even though no VMs are running... I think this is holding the img file open. Is this correct? Is there some 'VMs are enabled' setting I'm missing? I'm just stopping the VMs themselves.
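To confirm what's pinning the image open, one option is to walk /proc directly rather than depend on lsof/fuser being installed. A sketch (the path at the bottom is the one from the post; substitute your own):

```shell
# List PIDs holding a given file open by scanning /proc fd symlinks.
# Works on any Linux without extra tools; /proc entries we can't read
# are silently skipped. Run as root to see other users' processes.
holders_of() {
  target=$1
  for fd in /proc/[0-9]*/fd/*; do
    if [ "$(readlink "$fd" 2>/dev/null)" = "$target" ]; then
      p=${fd#/proc/}
      echo "${p%%/*}"   # the PID component of /proc/<pid>/fd/<n>
    fi
  done | sort -u
}

holders_of /mnt/disk1/system/libvirt/libvirt.img
```

Note that a loop mount (which is what that 'mount' line suggests) holds the image via the kernel's loop device rather than a process fd, so an empty result here plus a populated `losetup` listing would also explain it.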
  10. Ok, I admit I don't understand. Now that I have mover logging enabled, I'm testing simple scenarios and not seeing the behavior I expect. Currently all shares are set to cache = no. I set share 'system' to cache = yes and trigger mover (via the Main / Move Now button). I see mover start & stop in syslog with no additional mover messages. Shouldn't I see mover copying files from /mnt/user/system to /mnt/cache/system? Update: I see that switching from cache = no to cache = prefer does trigger a mover copy, but switching from no to yes does not. (?)
  11. I agree but in this monopoly-dominated world we live in we need to work with what we get. Until recently I was using charter cable until suddenly without warning AT&T deployed fiber to my area. Yay! Being 40% cheaper & 3x faster, I made the jump. Also 1-Gb + fixed IP address blocks being available, this is everything I always wanted. I was pretty happy. Then after deployment, examining the router (which isn't actually half bad) I realized it was a bit of a Faustian Bargain. However it looks like I can punt the router into 'passthrough' mode, turning it into more of a physical bridge device where I can put my router in front of it and filter LAN traffic away from it. This little task is on my to-do list...
  12. Thanks! I just happened to be reading & considering this post right now. It's very helpful. I'm now seeing in the syslog that the libvirt.img file is being skipped by the mover. Yay, I can see! Is this by design, or is something wrong? Are there any rules for mover 'skip' decisions besides 'file in use'?
  13. Yeah, it sounds like this is the piece of the puzzle I'm missing. I can't find the mover settings... I've looked in the usual places and searched the web. Where are the mover settings? UPDATE: Found it, under 'Scheduler'... Yep, this should help. Thanks for the tip.
  14. Well, I'm not sure what happened, but after rebooting the server mover now completes instantly. Not sure if this is a good or bad thing, but even though all VMs & docker are stopped I still see /mnt/user/system/libvirt/libvirt.img hasn't been moved to /mnt/cache as expected. Damn, this stuff is opaque. How can I figure out what mover is & is not doing??
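One low-tech way to answer "what is mover actually doing" is to snapshot the cache's file list before and after a run and diff them. A sketch with example paths (the /mnt/cache/system location comes from the posts above; adjust to the share you're testing):

```shell
# Hedged sketch: list files that APPEARED under a directory between two
# snapshots, e.g. /mnt/cache/system before and after a mover run.
snapshot() {
  # Emit share-relative paths so before/after lists compare cleanly.
  find "$1" -type f 2>/dev/null | sed "s|^$1/||" | sort
}

snapshot /mnt/cache/system > /tmp/cache_before.txt
# ... click Move Now in the web UI and wait for mover to finish ...
snapshot /mnt/cache/system > /tmp/cache_after.txt

# Lines only in the "after" snapshot = files mover brought to the cache.
comm -13 /tmp/cache_before.txt /tmp/cache_after.txt
```

Running the same diff against /mnt/user/system (or the individual disks) shows what left the array, which makes a silent, instantly-completing mover run much less opaque.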