jonp

Everything posted by jonp

  1. Ready for these uber-complicated instructions? Just kidding! It's easy! First you'll need to stop the array, then navigate to the Settings > SMB Settings page. From here, modify the SMB Extras section and add the following:

         server multi channel support = yes
         aio read size = 1
         aio write size = 1

     Save the changes and then start the array. WARNING: THIS IS STILL CONSIDERED EXPERIMENTAL! We haven't done sufficient testing with this yet, so feel free to use it, but do so at your own risk. Something else worth mentioning: according to the Samba project, Samba 4.15-rc2 was released just a few days ago, and there was this interesting note in there about multi-channel: https://wiki.samba.org/index.php/Samba_4.15_Features_added/changed#.22server_multi_channel_support.22_no_longer_experimental
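For reference, those three lines end up in Samba's global configuration. Here's a commented sketch of what each option does (my reading of the Samba documentation; as noted above, multi-channel was still experimental in the Samba version Unraid shipped at the time):

```ini
[global]
   # Advertise SMB3 multi-channel so capable clients (multiple NICs, or an
   # RSS-capable NIC) can open parallel TCP connections for one session.
   server multi channel support = yes
   # Use asynchronous I/O for any read/write larger than 1 byte,
   # i.e. effectively for all transfers, which multi-channel benefits from.
   aio read size = 1
   aio write size = 1
```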
  2. I'll work on that this week. It shouldn't be a Windows-only thing.
  3. Please disable any ad blockers or other services running in your browser that may interfere with the webGui, and let us know if doing so resolves the issue. This definitely feels like a browser-specific issue. Can you recreate it when using a different browser or device?
  4. The reason it isn't on the list for this poll might not be so obvious. As it stands today, there are really three ways to do snapshots on Unraid (maybe more ;-). One is using btrfs snapshots at the filesystem layer. Another is using simple reflink copies, which also rely upon btrfs. Yet another is using the tools built into QEMU. Each method has pros and cons. The QEMU method is universal, as it works on every filesystem we support because it isn't filesystem dependent. Unfortunately, it also performs incredibly slowly. Btrfs snapshots are really great, but you have to first define subvolumes to use them, and they rely on the underlying storage being formatted with btrfs. Reflink copies are really easy because they are essentially a smart copy command (just add --reflink to any cp command). They still require the source and destination to be on btrfs, but they're super fast, storage efficient, and don't even require you to have subvolumes defined. And with the potential for ZFS, we have yet another option, as it too supports snapshots! There are other challenges with snapshots as well, so it's a tougher nut to crack than some other features. That doesn't mean it's not on the roadmap.
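To make the reflink option concrete, here's a minimal sketch (file names are placeholders; `--reflink=auto` silently falls back to a normal copy on filesystems without copy-on-write support, while `--reflink=always` fails loudly instead):

```shell
# Create a test file, then make a reflink (copy-on-write) copy of it.
# On btrfs this completes instantly and shares extents with the source;
# data blocks are only duplicated if one of the copies is later modified.
echo "hello snapshots" > source.txt
cp --reflink=auto source.txt clone.txt
cat clone.txt   # prints: hello snapshots
```

The clone behaves as a fully independent file from that point on, which is why reflink copies work as a lightweight snapshot without any subvolume setup.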
  5. Hey everyone! As you may have noticed, today we put out a release for Unraid 6.10-rc1, and with that release in the wild, we wanted to get feedback from you, our loyal community, on which feature you'd like to see MOST in Unraid 6.11. To better explain the options in the attached poll, here's a breakdown:

     ZFS File System
     Ever since the release of Unraid 6, we have supported the use of btrfs for the cache pool, enabling users to create fault-tolerant cache storage that can be expanded as easily as the Unraid array itself (one disk at a time). Adding ZFS support to Unraid would provide users with another option for pooled storage, and one for which RAID 5/6 support is considered incredibly stable (btrfs today is most reliable when configured in RAID 1 or RAID 10). ZFS also has many similar features, like snapshot support, that make it ideal for inclusion.

     Multiple Arrays
     As many of you already know, the Unraid array is limited to 30 total devices (28 data and 2 parity). This limit is set to prevent users from configuring too wide an array and ending up in a situation where the likelihood of multi-device failure during a rebuild operation is too high. This is only exacerbated by the ever-increasing size of HDDs, which further elongates the rebuild process. So how do users with a full 30-disk array expand further? The answer is multiple array support. This feature would be similar to the "multiple pools" introduced in Unraid 6.9, but would apply to the Unraid array. Users with multiple arrays could still have those arrays participate in the same shares, allowing the same management but with more storage devices.

     QEMU-ARM for VMs
     I know a few people in our community who have personally requested this of us in the past. Adding this to Unraid would allow users to create ARM-based VMs, which is ideal for testing out mobile OSes and other platforms. While you won't likely be passing GPUs through here, this is still a very interesting use case for mobile developers who could use this as a way to test their applications in a variety of scenarios (as well as to gain the benefits of running mobile applications from your server).

     So make sure you vote and let your voice be heard! I know I'm rooting for a very specific feature on this list. What about you?
  6. If you're wondering why you can see this forum but you cannot post in it, it's because you haven't linked your Unraid server to your Unraid.net account yet. To do so, install the latest release of Unraid OS and, if applicable, the My Servers plugin (it can be found on Community Apps). Then make sure you sign in with your forum account. This will give you access to post in this special subforum.
  9. You might need to wipe the device using the wipefs command if there is a weird partition structure. All the best, Jon
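A safe way to see what wipefs does before pointing it at a real disk is to practice on a scratch image file (scratch.img here is just a throwaway file, not a device):

```shell
# Build a 1 MiB scratch image and stamp a DOS/MBR boot signature
# (bytes 0x55 0xAA at offset 510) so wipefs has something to look at.
truncate -s 1M scratch.img
printf '\x55\xaa' | dd of=scratch.img bs=1 seek=510 conv=notrunc status=none

wipefs scratch.img          # lists any filesystem/partition-table signatures found
wipefs --all scratch.img    # erases ALL signatures it detected
wipefs scratch.img          # prints nothing once the image is clean
```

On a real drive the equivalent destructive step would be `wipefs --all /dev/sdX` (run as root, with the array stopped), which is why rehearsing on a scratch file first is worthwhile.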
  10. Hi there, In short, yes, you can do this. As for how Docker containers will behave, that depends on the author of the container and whether or not they preserve that data. We can't speak to how applications will work in containers we don't personally publish, so you'll need to rely on community support for that, but worst-case scenario, you can continue to use your VM-based solution on Unraid using KVM. All the best, Jon
  11. Hi there, It appears you are trying to do this on a system without an integrated graphics device. I see you have two GPUs in the system (a 1080 Ti and a 3060). If you're trying to use both of those with VMs, you'll need a third device to provide graphics for the host itself, or you can try following one of the advanced GPU passthrough tutorials from SpaceInvaderOne.
  12. Hi there, Sorry to hear you're having problems. When you made the new boot drive, did you do that from scratch or did you simply copy the configuration over from your old drive to your new drive? All the best, Jon
  13. Hi there, Please share your complete system diagnostics with us, along with your VM settings. Have you tried creating the VM with the GPU passed through from the start, before installing Windows? You shouldn't have to install Windows using the virtual graphics adapter. In addition, if your underlying hardware doesn't have a built-in GPU, that can also cause problems. I would suggest checking out the videos from SpaceInvaderOne on YouTube for advanced GPU passthrough techniques.
  14. Thank you @bonienl! As always you are a rockstar!
  15. Can anyone here help @Yvan with this request? My Docker networking expertise isn't as strong as some of you here ;-).
  16. Hey everyone, just a quick update on this issue. The main problem we've faced is the inability to recreate this issue in our labs. We are still actively working on it, but if anyone here knows the full solution, we are open to providing a bounty for it. Just PM me and so long as the fix isn't a hack or workaround, we will gladly compensate you for your time and work.
  17. The problem with this question is simply that there is no "right" answer. You can absolutely get away with a 4-core, 8-thread CPU with just the information you provided. But if I inquire deeper about what you're going to be doing in each VM / docker application and what your expectations of performance are, there could be additional guidance. And even then, knowing how to match those needs exactly with the software you're going to be using can be quite a pain.

      Docker containers that are running server applications will scale performance based on what you have available. The more cores, the more power. That doesn't mean any individual app won't work with less power, but those apps will run a bit "slower." That could mean the UI loads slower, actions within the UI happen slower, and bulk operations (extracting files, compressing, encoding, etc.) take longer. But what's "fast enough" and what's "too slow" is impossible for anyone here in the community to tell you. That is something you have to decide for yourself. In addition, what those individual containers will be doing, how many jobs they are handling, and how many users are interacting with them will further drive the need for performance.

      The only two areas where we can explicitly guide you are localized gaming VMs and Plex docker containers. What is specifically unique about those two use cases is the need for real-time performance. If you don't have sufficient CPU for those applications, your gaming experience can suffer low FPS and hitching/stuttering. For Plex, insufficient resources mean you can't transcode fast enough, or will be heavily impacted depending on the number of users you have. You don't seem to indicate a need for either of these.

      The main advice I can give you is that if you can afford to bump up your horsepower, go for the 6-core, 12-thread part, because faster is always better. But if you are looking for someone to give you that silver bullet of "oh, at 4 cores it is going to be horrible and at 6 cores it will be amazing" or "4 or 6 doesn't matter," you're not going to find anyone here who will be able to say that with confidence.
  18. It should be up there! https://podcasts.apple.com/us/podcast/uncast/id1566634831
  19. Totally being honest here, I have no idea what a Docker shim even is ;-). Any additional insights you can provide would help me better triage this issue.
  20. Hi there, It sounds like these are the events and the order in which they transpired:

      1) You purchased two new 8TB drives, but from their very first installation, you received errors during the clearing process indicating problems with either the drives, the controller they are attached to, or the cables in between. You tried moving them to different slots and got the same result.

      2) You don't state this in any of your posts, but you must have finally gotten it to work to the point where the drives were successfully cleared and added to the array.

      3) You had a power failure, and when the system rebooted, you say that your drives show up as device disabled, contents emulated.

      To be clear, if the system had a power failure, the array would not have started automatically on reboot. Are you saying that when you logged into the webGui you either didn't notice or didn't understand the status of the Main tab, and therefore started the array not realizing that those two devices had been ejected from it? In the event of unexpected power loss there is a chance of data corruption, so when the system reboots, the array will not start automatically; you have to log into the webGui and start it manually. In addition, when the array does start, it will automatically perform a parity check to ensure that parity remains in sync with all the devices in the array (indicating that no data corruption occurred). The array cannot start at all if more devices were ejected than parity can protect. In your case, you have dual parity, so the array could start and emulate the contents of the two missing drives.

      I would suggest stopping the array, powering off the system, and reconnecting those two drives with different cables and different ports. There is clearly something wrong with the drives, the cables, the ports, the controller, or something else in your hardware, as the same two devices have been giving you these issues since first installation.
  21. Thanks for submitting this and including your diagnostics. I will make an effort to reproduce this on my end.
  22. Hmm, gonna have to see if I can obtain a FLAC version. We use riverside.fm for recording which provides a wav or mp3 file, but no FLAC. That said, I have been messing around with Audacity a bit and could probably extract to FLAC from there, but if the original source is MP3 or WAV, not sure if that'll help. Send me a friend request on our discord server and let's connect on this!
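For what it's worth, a lossless WAV-to-FLAC path doesn't need Audacity at all; ffmpeg can do the conversion directly. A quick sketch (episode.wav is a placeholder; the first command just synthesizes a 1-second test tone so the example is self-contained):

```shell
# Generate a short test tone as a stand-in for the real WAV export.
ffmpeg -y -f lavfi -i "sine=frequency=440:duration=1" episode.wav
# WAV -> FLAC is lossless: FLAC just compresses the raw PCM samples.
ffmpeg -y -i episode.wav -c:a flac episode.flac
```

Note that converting an MP3 source to FLAC works the same way but won't restore quality; whatever the MP3 encoding discarded is gone for good, which is why starting from the WAV master matters.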
  23. Nice! Glad you found something to meet your needs. This community never ceases to surprise me ;-). As far as a backup solution goes, it's something we've talked about for years, but backup is an entire business in and of itself. There is a plethora of companies/solutions out there for this, ranging from the basic like rsync to the enterprise like CommVault. And of course there are a million little variants and specialties in between, such as Veeam for VM backups. On top of that, if we want to make use of features like btrfs snapshots or send/receive, that can further complicate things. So while we may eventually bring in a backup solution, that isn't a promise; in the meantime, you can probably find a variety of ways to back up your system with some basic Googling, but here's a nice write-up from our friend @spxlabs: https://www.spxlabs.com/blog/2020/10/2/unraid-to-remote-unraid-backup-server-with-wireguard-and-rsync