Marshalleq

Everything posted by Marshalleq

  1. Did you try uninstalling and reinstalling the plugin with the latest unraid kernel?
  2. Interesting question - I haven't gone and specifically set anything. However, there are things you can do to help. For example, using SSDs with power-loss data protection and being careful about what kind of caching you set up (but if you're asking, you probably haven't done this yet - and you may never). I guess I'd say it has good, sensible defaults that are at least as good as the other options out there, provided you've got some redundancy built in. So no, nothing I can think of! A UPS will just help even more.
  3. Generally the folders must be set to 777, whereas the files can have whatever permissions you require. Also make sure both files and folders are owned by nobody:users.
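For example, a rough sketch of that (the path here is made up - adjust to your own share):

```
# Folders set to 777, files left as they are, everything owned by nobody:users:
find /mnt/HDDPool1/share -type d -exec chmod 777 {} +
chown -R nobody:users /mnt/HDDPool1/share
```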
  4. Just share them with unraid - via the smb-extra file in /boot/config.
  5. My advice is to look at the samba config file options - it usually ships with a really good example config. There are definitely things you can do to set home drives at the share level. Failing that, you can set them individually via the above config if you don't have too many to do. You can also put a dollar sign after the share name to hide it if you like. Samba is a really, really mature product that has been around for decades; if it can be done, it will do it. Just have a browse around, e.g. https://docs.centrify.com/Content/zint-samba/SMBConfSample.htm
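As a rough sketch of the home-drives idea (this uses samba's standard [homes] mechanism; the include file path is the smb-extra file mentioned further down this page - adapt as needed):

```
# Append a per-user home share; [homes] is samba's built-in per-user share
# and browseable = no keeps it out of the browse list:
cat >> /boot/config/smb-extra.conf <<'EOF'
[homes]
   comment = Home directories
   browseable = no
   read only = no
   valid users = %S
EOF
```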
  6. OK, so you want an external windows host to connect to the unraid ZFS dataset via SMB? Nice, this is the best way in my opinion; I have recruited many tech people and the best ones are almost always those who teach themselves and have a natural interest. And a good dose of 'I don't know everything' to go with it. Yes, I tried TrueNAS, I really really wanted it to be good because they do have the basics covered a lot better than unraid does (I mean backups, file sharing, permissions, that kind of thing), but unraid just works better. And I really dislike their docker / Kubernetes implementation - hopefully they will sort it out and we will have a second option. You can share them easily enough as per the above method, with custom config in the smb-extra file. Permissions, however, unraid has basically restricted to nobody:users at the file level, and it relies on setting permissions at the share level. I expect they did this because it was just too complicated for the target users of this product to do otherwise. I'm not sure - others may be able to correct me about some hacks, but moving away from nobody:users (i.e. chown -Rfv nobody:users foldername) is likely asking for trouble. By the way, the file is /boot/config/smb-extra.conf - I was guessing before. Here's what I add to it to share a ZFS dataset. I hope it helps you.

```
[temp]
path = /mnt/HDDPool1/temp
comment = ZFS backups Drive
browseable = yes
vfs objects =
create mask = 664
force create mode = 664
directory mask = 2775
force directory mode = 2775
valid users = username root
write list = username root
force user = nobody
force group = users
guest ok = no
```
  7. Not first, but probably afterward. ZFS does have SMB sharing contained within it, but on unraid I just set them afterward in the /etc/samba/smb.extras file (or a similar name that I can't quite remember). Luckily samba has a really easy to use configuration file. But I just want to make sure I understand you here, what exactly is the requirement for NFS? If everything is contained within unraid i.e. ZFS is on unraid, you don't need to use NFS or samba to do anything to do with ZFS unless you're connecting a remote host or want to mount something inside of a VM that way. Sorry, I don't know what expertise you have and it almost sounds as though you think you need NFS to mount the folders within the same system.
  8. Typically you set this when you create the pool, and the datasets fit under the pool's mount point via inheritance. That's what I do anyway. However it is possible to set differing mount points afterward if you wish. Example of setting it as part of the original pool: zpool create Seagate48T -f -o ashift=12 -o autotrim=on -m /mnt/Seagate48T raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sdf Example of setting the mount point afterward: zfs set mountpoint=/yourmountpoint/yoursubmountpointetc Seagate48T/datasetname You can also use zfs get mountpoint Seagate48T/yourdataset to find out what it's currently set to. Hope that helps! Marshalleq.
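To show the inheritance bit concretely (a rough sketch reusing the pool name above; the dataset name is made up):

```
# Datasets inherit the pool's mountpoint unless overridden:
zfs create Seagate48T/backups
zfs get mountpoint Seagate48T/backups    # -> /mnt/Seagate48T/backups (inherited)
# Override later if you want:
zfs set mountpoint=/mnt/backups Seagate48T/backups
```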
  9. Hi all, I was just wondering if anyone here had any experience with adjusting the defaults for a linux guest (specifically Ubuntu 20). This is one of the 'basics' that I think Unraid hasn't gotten right yet, or perhaps I don't understand yet, though I don't understand why it would be so hard. Multiple linux guests of various types I've installed seem to have speed issues when it comes to networking. In this case I am doing an email server migration and am uploading email from a windows VM inside unraid connecting to a fresh new linux VM in the same unraid box. Windows reports its speed as 10G; Linux reports it as unspecified. I've done an iperf test and it reports back at about 200Mb/s (as per below), which is well below 10G as you can imagine.

[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   270 MBytes   225 Mbits/sec

I'm currently using virtio-net on the linux guest, and it'd be nice if it at least reported back a speed of some kind. Is there a trick to Linux guests? I've installed the qemu-guest-agent and enabled it.

systemctl status qemu-guest-agent
● qemu-guest-agent.service - QEMU Guest Agent
     Loaded: loaded (/lib/systemd/system/qemu-guest-agent.service; static; vendor preset: enabled)
     Active: active (running) since Tue 2022-11-08 19:04:50 UTC; 24h ago
   Main PID: 716 (qemu-ga)
      Tasks: 1 (limit: 19100)
     Memory: 1.6M
     CGroup: /system.slice/qemu-guest-agent.service
             └─716 /usr/sbin/qemu-ga

I have also installed the virtualbox-guest-tools in case that makes a difference. I am wondering if better performance would come from vmxnet3, though from what I understand, to get 10G it needs to be virtio-net. Reading this thread here, I am led to believe I need to use virtio not virtio-net, though I do have dockers sharing the same network, so I assume that will create another problem. Also, from that thread as I understand it, I shouldn't be getting a 10G connection on my windows machine. Or perhaps it does get a 10G connection, but just performs like a 1G connection when virtio-net is chosen. Which still leaves the question as to why I'm not getting an accurate speed report on my linux box. So just thinking it all through, to get 10G - I need to create a new bridge for my vm's and assign them all to that, then choose virtio as the driver? Thanks. skywalker-diagnostics-20221109-1440.zip
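(Edit for anyone finding this later - the place to check and change the NIC model is the VM's XML on the unraid host. A rough sketch; the VM name "ubuntu20" is hypothetical:)

```
# See which NIC model the VM is currently defined with:
virsh dumpxml ubuntu20 | grep -A3 '<interface'
# Look for a line like <model type='virtio-net'/> and change it to
# <model type='virtio'/> via the Unraid VM XML view, or:
virsh edit ubuntu20
```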
  10. OK, anyone know whether this is the exact installer file downloaded from Apple as-is, or if the IMG is taken out from inside it? I figure I can just download it on the side and replace this file. Thanks.
  11. Hi all, I had this working a while back - just doing it again to do some migration work at the server end because it will be faster. However, the Big Sur download stops at 2GB. I tried Monterey and that stops at 2.4GB. I'm pretty sure these are meant to be more around 7GB, right? Anyone seen this happen? Thanks.
  12. Did anyone ever get this working without a VM? I'm looking to throw out my Zimbra mail solution, and potentially move to this or perhaps iredmail, but I think mailcow is better. That said, for simplicity it may be better to have it in a VM - I do have zimbra in a vm already anyway. Thanks.
  13. OK, have sent you the cheat sheet, which is sort of my raw notes to refer to when I want to look something up. All of those things have been part of my journey; I wish someone had sent me that in the beginning. That will give you a starting point from which you can explore and discover more. And while some say ZFS isn't for everyone, as far as I know, on unraid it is the only alternative to the unraid array (I don't think mdadm works, though it's probably possible). It's also the only viable self-healing file system that I'm aware of, and it's worth its weight in gold. I had an overheating SAS card once (though I didn't know it); I was getting errors on 5 of the 6 disks in a Raidz1. I couldn't believe how ZFS managed to keep all my data safe through the months it took me to figure that one out. And now I have Raidz2. PS: there is a lot of information there; you don't need most of it, so please don't be overwhelmed. Probably just focus on zpool create and zfs create to begin with, and learn what the options I've listed for those two commands mean.
  14. Ah awesome! You really only need to know two commands: one to create the pool and one to create the dataset (rough sketch below). If you plan on expanding, it pays to know what the limitations are, which these days are not many. I have a sort of cheat sheet / notes if you want them.
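The two commands look roughly like this (pool name, disks and options are all examples - the cheat sheet explains what they mean):

```
# One command to create the pool, one to create a dataset inside it:
zpool create -f -o ashift=12 -m /mnt/tank tank raidz1 /dev/sdb /dev/sdc /dev/sdd
zfs create -o compression=lz4 tank/media
```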
  15. TrueNAS is probably more complicated than using the zfs command line - I wouldn't go there, you'd be disappointed. Also, getting a basic zfs system running on unraid is pretty simple really, and there are a lot of friendly people on here who would help you. However, if you're the kind of guy that isn't really into technology or willing to spend a bit of time learning, then I might agree - wait for official zfs support. Or just get a Mac and put plex on it and use iPhoto.
  16. It's possible you had one very slow disk caused by a bad disk or bad cable or similar, if there were those kinds of performance issues on the unraid array - if I recall correctly it will only go as fast as the slowest drive. It should be enough for playing video media - you really only notice it when you want to copy large amounts of data to or from it. Personally, unless you have a need to use a large number of differing-sized disks, I wouldn't bother with the unraid array when you are already experienced with a much better tech - ZFS (which is purportedly going to have much tighter integration in the next version). ZFS will add some really core benefits, like actually telling you when things corrupt and offering to repair them - the unraid array will only rebuild a disk really, it doesn't do much else than that. And if it's speed you want - its capabilities for SSD mirrors are also pretty awesome.
  17. Yes, the standard unraid array is one big known performance issue, though depending on what you're doing with it, it may or may not bother you. Putting ZFS on unraid will indeed get around this if you do it right, because basically any array known to man is faster than unraid's standard array - but speed isn't what it was made for, it does have a use case. So there is really no need to move away from unraid to solve a standard unraid array speed issue - you can simply put ZFS on unraid. Your response was a bit confusing, because it sounded like you had ZFS on unraid already, but you also said 'standard array', which is certainly not ZFS on unraid (yet). I suspect you'll be back on unraid, as the features are pretty much better than everything else. You'll be able to bring your ZFS array over easily if you ever decide to do that. Good luck.
  18. I'm not sure if /mnt/disks is a good location - i.e. isn't it used for other things that Unraid automates, and might it interfere? I just put mine under /mnt - /mnt/whatever works fine. Fix Common Problems will always complain as far as I know and you just have to ignore it.
  19. For those that associate the phrase hacking with something negative, I would just like to point out that putting ZFS on Unraid is not at all a hack. The two devs have worked extremely hard on it, including with Limetech, to make the plugin work and update seamlessly alongside Unraid updates etc. In fact it's them we have to thank for some of the nice new plugin features we are now all enjoying. The fact that ZFS currently has no official GUI is just how it is supplied at present (the master code has no GUI by design, however there are ZFS plugins available to help with this). And yes, many people come to Unraid for the Unraid array - including myself. But just because you run ZFS does not mean Unraid has no value. Actually the way that Unraid has implemented docker support and plugins is in a class all of its own. I tried TrueNAS and even participated in the beta of TrueNAS Scale to try to give it a bit more polish; its implementation of docker is just awful and frustrating to use, and second cousin to its installed Kubernetes. Kubernetes is considered to be the king of containerisation on TrueNAS, but that is some weird hack implementation that doesn't quite fit well for home installations nor for enterprise installations - arguably Kubernetes is really meant for enterprises. So basically I'm just trying to defend Unraid a bit here by saying its array isn't its only good feature. And FYI, looking at the announcement for the latest unraid version, it looks like baked-in ZFS by Limetech is coming in the next Unraid version. Happy Weekend!
  20. I'm another referred here by the Fix Common Problems plugin. I use tdarr extensively, having been in discussion with the developer since the beginning of its creation. I have never had and still do not have any of these issues. However, I do not use the unraid array except as a dummy USB device to start the docker services (I use ZFS), and I do not use NFS (I use SMB). I strongly suspect this issue is more about tdarr triggering an unraid bug of some kind than tdarr itself being the issue.
  21. The question should be reframed to ask if it's any kind of server or just unraid servers. I answered for unraid servers, others did not.
  22. @jaylo123 It's a known fact that Unraid's 'Unraid' array has dawdling speed. There is no workaround for this. The only solution I can think of (which I have done) is to not use the unraid array. So pretty much, on unraid, that means use a ZFS array. From experience the speed increase was notable. Add to that the remainder of the benefits and (to me at least) it's a no-brainer. However, despite ZFS being very well implemented into unraid, you would need to be comfortable with the command line to use it and be prepared to do some reading on how it works. So it isn't for everyone and I'm not trying to push you one way or the other. I'm just saying the 'unraid' array is known to be extremely slow.
  23. Also try removing and re-adding the zfs plugin. You could also try stable and do it again. But it does work on at least r3, because I'm running that. (Sorry, not sure what you're aware of here, so I'm just going to say it - make sure you wait until the zfs plugin updates its module before rebooting after an unraid version change.) And if you can boot into normal mode (not safe mode), perhaps you need to drop to the command line and reimport the pools - see the sketch below.
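(The reimport itself is just a couple of commands - a sketch, the pool name is made up:)

```
zpool import             # list pools that are visible for import
zpool import HDDPool1    # import one by name
```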
  24. It sounds to me like either the server didn't wait for the updated zfs module to get built, or it wasn't able to be built. The quick solution to that might be to downgrade the server to the previous unraid version and go from there. That shouldn't be too hard. Failing that, ping steini84 (or possibly others know) to ask whether there is a downloadable matching version of the module, along with where to put it to force the process. Sorry, I don't know the process; it may be indicated somewhere in this thread though. Marshalleq
  25. No problem. I haven't actually ever used the GUI options and my setup may be very slightly different. For example, when I create an array, I set the mount point of the array simply into /mnt - it seems like you've put yours into /mnt/user/zpool. I am not sure if that's in a guide or what, but it doesn't seem like a sensible way of doing it, as you may have permissions problems given that it's a user folder. Also, clearly you're also talking about SMB sharing. I found that the unraid SMB sharing doesn't really work with ZFS, but luckily zfs has its own SMB built in - so you can edit the smb-extra file in /etc somewhere - sorry, can't look it up exactly at the moment - I think it's something like /etc/samba/smb-extra.conf. Don't worry, the smb format has examples and is super easy. Also, for me, any zfs shares in unraid under /mnt I just set to nobody:users and that sorts them out; having them as root definitely doesn't work. Just always bear in mind with linux, folders must always be set to 777, and the files can be whatever you need. Hope that sort of points you in the right direction.