DaKarli

Everything posted by DaKarli

  1. Hi tkohhh, sorry for my late reply, but I had turned off notifications... Maybe you have already solved it yourself, but here's an explanation for everyone: Case A: You already HAVE a [global] block in this configuration; you just add that line to this block. Case B: If your configuration is empty or is missing the [global] block, you add the block AND the line. 🤗
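     A minimal sketch of Case B, assuming the line in question is the username map line from the tutorial further down on this page (the path is the one used there):
     [global]
         username map = /boot/config/custom/etc/samba/usermap.txt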
  2. I just wanted to add that what you describe in the above-mentioned first sentence is regular Windows behaviour. Simply put, accessing the same file server once by its name and then by its IP address counts as two different connections for Windows. And this is the reason why you had to enter your credentials one more time and why, in this case, it solved half of your problem. Regards, DaKarli. 👍
  3. Attention! Even though the above-mentioned solution will work, it is still just a workaround... 😉 There is a better, officially supported way of adding a Microsoft account to a Samba server. Go to this thread/message where I described it in more detail: Have fun and best regards DaKarli.
  4. Attention! Don't make the above-mentioned changes to the shadow / passwd files! Even though it will work, it is only a bad workaround... 😉 There is a better, officially supported way of adding a Microsoft account to a Samba server. Go to this thread/message where I described it in more detail: Have fun and best regards DaKarli.
  5. How to allow using a Microsoft Account (e.g. [email protected]) to access a share instead of using a Windows local user? -> Easy to achieve: Create a directory and file like this on your Unraid boot stick (if you do this from the Unraid shell, the owner/group of the file should automatically be root:root with a security mask of 0600):
     mkdir -p /boot/config/custom/etc/samba
     nano /boot/config/custom/etc/samba/usermap.txt
     Remember, the user you want to map to your Windows user has to exist in Unraid already, or it has to be created first in the Unraid GUI. Now that you are in the editor for the file, add a line like the following for every user you want to map - Unraid (Linux) user on the left, Microsoft Account user on the right:
     user = [email protected]
     Save the file and close the editor. Now go to your SMB settings in the GUI and add the following line under the [global] section of your smb-extra.conf file (note: the path and filename must match what we created right before):
     username map = /boot/config/custom/etc/samba/usermap.txt
     Now you can either restart your server or go to the shell and just restart your Samba server with:
     samba restart
     Now everything should work as expected and you should no longer have any problem logging in to your Unraid Samba share with your [email protected] Microsoft user account. Those who previously followed the hint to create a local user account under Windows can now revert to using a Microsoft Account. Though a Windows local account is less "chatty" with regard to sending data to Microsoft, such an account is also less useful for the many functions which need to be synced over a cloud-sync service. Have fun, with best regards DaKarli
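     If you want to double-check the setup from the Unraid shell before testing from Windows, a small sketch (both commands only verify what was set up above):
     testparm -s 2>/dev/null | grep -i "username map"    # confirm Samba picked up the mapping line
     ls -l /boot/config/custom/etc/samba/usermap.txt     # confirm the map file exists with the expected owner/permissions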
  6. Hi, first of all I just wanted to report that the problem already mentioned for rc2 (see link) still exists. A samba restart, as suggested, solves it until the next reboot or next VM (re)start. https://forums.unraid.net/bug-reports/prereleases/6100-rc2-wsdd2-does-not-appear-to-be-working-r1627/ But I did a little more debugging and fiddling (all with a Win 11 VM):
     1. After a clean reboot of Unraid, starting the VM where the wsdd2 discovery is needed, it doesn't work until you restart samba.
     2. Restarting samba while in the VM (-> discovery works), then restarting the VM - again, discovery does not work until samba is restarted.
     3. Restarting samba right before starting the VM doesn't solve the problem.
     -> All without any error messages in the log. Looking at https://github.com/Netgear/wsdd2/issues/36 it seems that the problem has been introduced with the newer versions of wsdd2, and someone suggests going back to the 2020 version to get it working again.
     -> I don't know if Unraid is using the wsdd2 from this Git repository, but as this is the only one written in C++, I think I could be right.
     -> Unfortunately, I am no developer and have no idea how to compile an older version to test whether this may work.
     => For me the used version is simply broken and either needs an update (in fact it is more of a downgrade...) or needs to be replaced by a working alternative... After continued reading I found out that the discovery service wsdd provides is completely independent of samba and even works without samba being started. I also found this alternative: https://github.com/christgau/wsdd and thought: "give it a try". TL;DR: It works! According to their issue tracker they faced the same problem, but contrary to wsdd2 they solved it, as far as I can tell. What did I do to test and verify:
     1. Disabled WSD in the SMB Settings.
     2. Installed Python 3 through the Nerd Pack.
     3. Restarted the server to ensure a clean system.
     4. Downloaded the Python script from Christgau's GitHub and just put it somewhere (good enough for testing purposes).
     5. Started the script without "daemonizing" it, directly in the shell, with wsdd -i br0 (this is my network interface).
     6. Started the array, started a Win 11 VM, started the File Explorer. And voilà, the file server appeared instantaneously under "Network".
     7. Because the Samba server was already running in the background on Unraid, I opened/created some files/directories in my Win 11 VM - all without problems.
     8. I shut down the VM, restarted it and did the same test once again. And guess what: all worked, and still worked after trying several times.
     => This script works well, could be an (at least interim) solution for the problems with wsdd2 and should be considered for the next possible release. Of course, all I have done is a quick hack and a final implementation has to be fine-tuned by the Unraid developers. The script includes rc.d instructions and behaves nearly the same as the broken one with regard to configuration, so I believe this script could be implemented quite fast if it fits into the Unraid frame. But that's for the devs to decide. In case there is no fast solution and some of you want to have this NOW, you could consider running this script from the go startup file... As I said, the wsdd discovery service is independent of Samba and you can turn the integrated wsdd off.
So two of the most important prerequisites are already fulfilled, and the rest doesn't seem too complicated to put into the go file, but I haven't tried this and will first wait for feedback from the Unraid devs as to how fast they are going to implement this, if at all (a rough sketch of such a go-file addition follows after this post). For those unsure about what to do: I have chosen to report this problem as an Annoyance, because even with the broken wsdd2 the Samba shares can always be reached by simply using \\myServer\ShareName in the Windows File Explorer, and these shares can be mapped to a drive without any problem the same way. Nevertheless, for those who are configuring a Samba share for the first time, it could be a big hurdle NOT to see the server and its shares in Windows at first, thinking something must be wrong with their Samba setup and trying to figure out what the problem is. I think this is what "annoying" means here. Additionally, I have seen a lot of hints telling people to (re)enable the SMB1 protocol on these Windows machines to solve the problem - and in fact, it solves the problem. But be WARNED, there is a very good reason why Microsoft turned off SMB1 after years of still enabling it: SMB1 is a #1 security risk! So let's not use this bad hint at all and go for a better solution, which mine could be. Have fun, with best regards DaKarli.
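     For those who want to experiment right away, a rough, untested sketch of what such a go-file addition could look like (the script location under /boot/config/custom and the interface name br0 are assumptions; Python 3 must already be installed, e.g. via the Nerd Pack, and the built-in WSD support should be disabled in the SMB settings first):
     # /boot/config/go (excerpt)
     cp /boot/config/custom/wsdd.py /usr/local/bin/wsdd.py
     chmod +x /usr/local/bin/wsdd.py
     # start discovery in the background on the bridge interface
     /usr/local/bin/wsdd.py -i br0 >/var/log/wsdd.log 2>&1 &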
  7. @ich777 Just a quick question: After changing to "directory" instead of docker.img I see a lot of datasets created under my ZFS datapool. Do you see/have the same outcome?
  8. Hey ich777, yes, I overlooked this line. Ok, now I understand why your Dockers are working... Because they are using the regular file system and are not mounted within a .img. That should work fine with ZFS as well. I'll finally switch to exactly this, as I see no advantage of using a .img file for Docker compared to a directory. Regarding the shutdown you are right, one should use the given shutdown script, which in this case is the powerdown cmd, even though in fact it does nothing special. For me, testing this system, I actually don't need to care, as my only intention is to bring down the system by any means... 😈 In the meantime I found an empty USB stick and tried what I said above. Did a clean zfs create without any special options and with testpool as its name (a minimal sketch of this follows below). In short, without further explanation or screenshots: it still didn't work this way. I believe it has something to do with the way Unraid is writing to the docker.img file while this file sits on ZFS. First I thought it might additionally have to do with the btrfs file system inside the docker.img file causing trouble when on ZFS, but trying XFS didn't work either. It hangs at the very same point. So for the moment this will be the end of my investigations, because a working solution has been found which, in my opinion, has no drawbacks. Thanks to all for being with me, with best regards DaKarli
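     Just for clarity, a minimal sketch of the "clean create without any special options" mentioned above (the USB stick device path is a placeholder; everything else is left at its defaults):
     zpool create testpool /dev/sdX
     zfs create testpool/docker    # optional child dataset used as the Docker path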
  9. But looking at your docker.cfg it looks like your docker.img resides on your ZFS as well, so in the end we have the same setup regarding the docker.img and appdata path. Or do I get something wrong? This shouldn't be a problem, as my pools have had this naming scheme for a long time: https://docs.oracle.com/cd/E23824_01/html/821-1448/gbcpt.html Nevertheless, I'll give it a try when I have a spare hard disk, or maybe on a USB stick. I also limited the memory use of the ARC cache, and my ZFS pools are created as mirrors like this:
     zpool create -O compression=lz4 -o ashift=12 -o autotrim=on -O xattr=sa -O acltype=posixacl -O aclinherit=passthrough -O atime=off -O relatime=off -O dnodesize=auto -O mountpoint=/mnt/z-syspool -O normalization=formD syspool mirror nvme1 nvme2
     The same for the dataset:
     zfs create -o snapdir=visible -o recordsize=16k -o mountpoint=/mnt/z-syspool/UnRaid syspool/UnRaid
     ...so nothing special or out of scope for ZFS here. Nevertheless, I'll also try a plain flat standard ZFS pool if I have a spare disk. Made no difference - a hard reset is still necessary. I only used "shutdown" because it is one of the Linux standard tools you can use, and it uses a shutdown procedure if you have set one up. Actually, looking at what "powerdown" really does, I found something very interesting:
     /usr/local/sbin/powerdown
     #!/bin/bash
     logger "/usr/local/sbin/powerdown has been deprecated"
     if [[ "$1" == "-r" ]]; then
       /sbin/reboot
     else
       /sbin/init 0
     fi
     So even this command does nothing else than reboot or init 0 a system - the same as what shutdown does. Because the system does nothing and does not reboot, I use a hard method to bring the system down (!! use with extreme caution on a production system !!):
     echo 1 > /proc/sys/kernel/sysrq
     echo b > /proc/sysrq-trigger
     As a result I don't have to push the reset button... To see some debug output, I tried to run the docker run command above directly in a shell with the -D option, but with or without it, I don't get any debug information and the system simply hangs at this screen: At this point my knowledge about Docker, and what to do if it doesn't work, ends... And I don't know if I have enough time to dig deeper to find out what stops Docker from installing/starting my Docker image (in this case I've chosen Krusader). As a last measure I will try to do all of this on a clean ZFS setup (as mentioned above), without my optimizations, and see if that helps. I'll come back here to tell 🙂 Regards DaKarli.
  10. dlandon, thanks once again for the further clarification. So in fact what I've done was already fine and counts as a proper mount, right?
     # mount
     ...
     syspool/UnRaid on /mnt/disks/z-syspool/UnRaid type zfs (rw,noatime,xattr,posixacl)
     tmpfs on /mnt/disks type tmpfs (rw,relatime,size=1024k,inode64)
     ...
     Regards DaKarli
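     In case anyone wants to cross-check the same thing through ZFS itself instead of the mount table, a small sketch using the dataset name from the output above:
     zfs get mounted,mountpoint syspool/UnRaid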
  11. Unfortunately, Docker on ZFS still gives me trouble (I just tried again to confirm, see the following pictures and description). VMs on ZFS work, even with the libvirt.img residing on ZFS, but as a security measure I still keep the libvirt.img in the standard folder to ensure it is not the .img which causes trouble. Setting Docker to a path on my ZFS pool, the installation, e.g. for Krusader, fails/hangs at the shown point and the GUI no longer responds. I can only access the machine on the host terminal, and even shutdown -r -n now does not work, so a hard reset is necessary. I tried to install Krusader with Container Path = /mnt/z-syspool/UnRaid/appdata/krusader/ My docker.cfg looks like this: The docker.log: The error.log (IP addresses and names have been "x"ed): By the way, the same error could be seen with IPv4, so IPv6 is not the reason. File permissions also look ok on the regular share and on the ZFS share: According to the error.log, I'd say it is not a problem with the .img files themselves but more of a problem somewhere in the way PHP is accessing these files? If there is something else you want me to try, just drop me a line. Regards DaKarli.
  12. @andber Munich-Konstanz wouldn't be such a big problem if it weren't for the current fuel prices... and unfortunately my car runs neither on coffee nor on beer, and Bounty doesn't do the trick either... 😋 But maybe the following hints will help you. You have set aclinherit=passthrough on your ZFS pool. That may not be enough if you want to use permission inheritance (ACLs) under Windows with AD while the "original" file system is to be "emulated" by Samba. My hot tip would therefore be to additionally add the following lines to the SMB Extras config. I would rather group all identically named options under the [global] section; then they apply to all shares at once, and for the share itself you only need the path= argument. Makes things much clearer...
     ...
     #unassigned_devices_end
     [global]
     vfs objects = acl_xattr
     map acl inherit = yes
     inherit permissions = yes
     inherit acls = yes
     store dos attributes = yes
     ... the rest of the stuff ...
     [spider]
     path=...
     I'm curious whether this alone will already work its magic... (Otherwise I'll really have to dig my AD out of the mothballs to test this in combination with Unraid...) Greetings and cheers DaKarli.
  13. Thanks @dlandon for the explanation. Another piece of the puzzle completing the big picture. Just as I thought, it's too early yet... 😉 The good thing about ZFS is the fact that even if everything changes in the way Unraid will implement it, a ZFS pool can simply be exported/imported with just two commands (see right below), so you are able to run your Z-pool on ANY system that supports ZFS. This file system has been rock-solid and really bullet-proof since I started to use it back in about 2007/8 on Solaris 10... 😎 ...now going to upgrade to Unraid 6.10 rc-4 to see what has changed under the hood... 😋
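     For reference, the two commands meant here (the pool name is just an example):
     zpool export syspool                      # cleanly detach the pool from the old system
     zpool import syspool                      # pick it up again on the new system
     zpool import -d /dev/disk/by-id syspool   # if the pool is not found automatically, point zpool at the disks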
  14. Hi Aleks, type zfs list in the terminal and you will see all the mountpoints you assigned when creating the ZFS pool and the ZFS datasets. A (simple) dataset is created with:
     zfs create -o mountpoint=/xxx/yyy poolname/dataset
     While xxx and yyy could be anything, you'd better use the name of your Z-pool for xxx and the name of the dataset for yyy to keep things clear. A "normal" mountpoint would be e.g. /mnt/MeinZPool/MeinDataset. If possible, avoid rather generic names like "FILES" for pool, dataset and mountpoints, since these can sometimes lead to problems when those names are reserved for something else in the system (see e.g. https://docs.oracle.com/cd/E19253-01/819-5461/gbcpt/index.html). And don't use the name "isos" for your ISO share, because it is already reserved by Unraid and leads to problems which I also had to stumble over myself. If your Z-pools/datasets are already mounted, you can still change the mountpoints afterwards - of course with the warning that all configurations using that mountpoint, such as your smb.config, have to be adjusted as well! With this you unmount all dataset mounts in one go:
     zfs unmount -a
     and with this command you can give the datasets a new mountpoint:
     zfs set mountpoint=/mnt/xxx/yyy poolname/dataset
     and finally mount all ZFS pools/datasets back into the system with:
     zfs mount -a
     BY THE WAY: ZFS provides the mountpoints to the system on its own, automatically, and the mountpoints do NOT have to be entered in /etc/fstab! See https://zfsonlinux.org/manpages/0.7.13/man8/zfs.8.html under "Mount Points": "...ZFS automatically manages mounting and unmounting file systems without the need to edit the /etc/fstab...". Apart from that, the error you describe above somehow sounds like a quota issue, so ask yourself whether you have set up quota rules anywhere, which could also be the source of the problem. That could have been configured either when creating the ZFS itself or somewhere in your Unraid configuration. If so, consider whether you really need quotas and, if necessary, turn them off. Beyond that I can't help with this error if my other tips don't help. Last not least, you'd better not use the ZFS shares the way it appears above, but rather always in hierarchical form:
     /zfspool
     /zfspool/dataset1
     /zfspool/dataset2
     /zfspool/iso-dateien
     etc...
     And very last not least, I would create a separate ZFS dataset for every subdirectory you want to create. In other words, don't "just" create a Z-pool (without any datasets, which would work in principle) and then create all further subdirectories (e.g. ISOs, System, VMs, Games, Music, etc.) via Windows, but do this "pre"-work in ZFS. Advantage: You can tune each of these datasets for the kind of data that will live in it (e.g. many small files vs. many large files), which gains additional performance and also gives you the option of taking more targeted ZFS snapshots per dataset (instead of snapshotting "everything") - a small sketch follows after this post. Small disadvantage: The folder you create as a dataset can of course no longer be deleted directly under Windows. But who would want to once everything is in order? Have fun implementing it, DaKarli
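     A small sketch of what the per-dataset tuning and the targeted snapshots mentioned above could look like (names and values are examples only):
     zfs create -o recordsize=1M -o atime=off poolname/Music    # dataset tuned for many large files
     zfs create -o recordsize=64k poolname/VMs                  # smaller records, e.g. for VM images
     zfs snapshot poolname/VMs@before-update                    # snapshot just this dataset, not "everything"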
  15. @dlandon @Squid @ich777 I've put you three on copy because I think the question "which is the best mountpoint for ZFS" is something which has to be discussed and needs a clear statement, because of the hopefully soon-to-be-realised plans to natively integrate ZFS as an underlying file system, though I also understand that we may be at too early a stage for this discussion to happen... I am aware that I can mount my ZFS wherever I want (technically speaking), and during my first steps with Unraid I realised that Unraid behaves differently from what I have seen with other Linux systems. To be honest, even after reading through a lot of posts in this forum I still don't completely understand how Unraid handles mounts, config files, its complete bootup, the fact that the system resides in RAM after boot, etc. I have quite good knowledge of all the technology involved, but until I have the big picture here, I am poking a little bit in the mist... My first attempt was to mount ZFS under /, but I realised I couldn't choose e.g. the VM path in the file picker if it resided there. My second approach, after disabling user shares in the global share settings, was to mount my ZFS under /mnt/disks/zfs (as suggested), which allowed me to use the file picker but led to problems with Docker on my ZFS share (I had read the warning but nevertheless tried it). Re-enabling user shares and putting Docker back onto a regular Unraid array share solved this, but then I got into trouble with the file picker as it only showed the paths provided by "user shares" (appdata, domains, system, isos). So the file picker was worthless for what I intended. Not being happy with such a long path for my ZFS mount, I went back to mounting ZFS under /mnt/zfs, which led to UD warning me about that... Ok, I can ignore that if I want to, but on the other hand, what is the impact? Now @dlandon said UD sets write limits on the /mnt/disks path! So this absolutely means it would not be advisable to put ZFS in this path! To sum up, and as I am (for the moment) ok with not being able to make use of the file picker, I am going to leave my ZFS mounted under /mnt/zfs and will click to ignore the warning in Fix Common Problems. What I don't get is what you mean by "...the ZFS mounts have to happen after UD has installed...". I installed UD, then I created my ZFS pools and datasets with the mountpoints at /mnt/zfs-pools/zfs-datasets - and for the moment I see no problems with that. Or is it something else you mean by that?
  16. Unfortunately, if you do so, the "Fix Common Problems" plugin (see spoiler picture) gives you this warning, which, of course, you could gently ignore... But on the other hand, why would it throw this warning if it made no sense? So I'd second the question of which would be the right path to mount ZFS pools. Maybe one of the Unraid devs could have a look and answer... Thanks!
  17. Ok, so if the script exec engine checks this beforehand, like I said, it may be no problem. So it would also be no problem to manually delete old entries from the .json file if someone wants to go this extra step. Regarding your 2nd point, I just want to add that the risk is somewhere else. If you add a different script but with the same name, you may not be aware that this new script will be executed via the old line in the .json file right after you save that file. But just as I already said, these may be rare circumstances, and I just wanted to point out possible risks with the handling being the way it is. Cheers! 😉
  18. Hi @Squid I just found out something weird and maybe of interest regarding the behaviour of the "User Scripts" plugin. If I just regularly delete a script in the GUI, the script itself is deleted in /boot/config/plugins/user.scripts, but within the schedule.json file the script definitions of the deleted scripts are still there (see picture in spoiler: the marked ones had been deleted before). In fact, as long as the script file itself no longer exists, it would not be a problem. But 1.) every time this schedule.json file is loaded by the plugin, it loads old, already unnecessary stuff. And 2.), going one step further, imagine what could happen if you create a script with the same name or the same file name, or if you create a script manually and put it into this directory with the same file name? Of course, all this may be a problem only in rare circumstances, but in my opinion the file and config handling of plugins in such important systems as a NAS should be as correct as possible, to prevent unintended errors as far as possible. At this point, nevertheless, a *BIG* thank you for this plugin. It helps a lot! Regards DaKarli. 🤗
  19. Hi SimonF, yes, as a slave it would be no problem... But I need to keep the UPS on this machine as it is the one switched on 24/7. And yes, the file list is just blank as in your screenshot. Any hints about this?
  20. @SimonF @dmacias Hey devs, thanks for your efforts to bring this important plugin to Unraid and keep it running! Unfortunately, for me it seems to be broken in at least one point on the latest Unraid version: 6.10.0-rc3 (why this version? I need it for Win 11 and TPM!). Problem: The NUT Config Editor has no access to the config files in /etc/nut. Maybe this is due to (missing) permissions within the jqueryFileTree script, or due to generally not being allowed to open system folders - which makes sense with security in mind. The file bar just displays nothing. Additionally, manually configuring the config files in /etc/nut fails as they (seem to) get overwritten by the plugin when restarting the plugin or the system. Last not least, I tried to change the line in NUTEditor.page from data-pickroot="/etc/nut" to "./nut" so it may take the files in /usr/local/emhttp/plugins/nut/nut, which seem to be the "master files" for the mentioned overwriting, but that won't work either. As a last measure I am going to edit these master files to what I need and hope it will work that way. I am using a CyberPower VP700ELCD, and this thingy needs a lot of tuning regarding the NUT settings, e.g. it needs a different driver.parameter.pollfreq and driver.parameter.pollinterval as well as some override tweaks for battery.charge.low and battery.runtime.low with ignorelb to run smoothly and safely. I hope you can reproduce my error and provide the community with some updates to the plugin so that this important thing works again as expected. For a NAS, one of the most important things is a working UPS and a working procedure to safely shut down the system if power fails. Thanks and regards DaKarli Update: After editing the above-mentioned master files the system does not work, in fact does nothing at all and only shows the status STOPPED...
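     For anyone with a similar CyberPower unit who wants to tune this by hand, a rough sketch of the kind of ups.conf entries meant above (the driver choice and all values are examples for illustration, not verified against this exact model):
     [cyberpower]
         driver = usbhid-ups
         port = auto
         # poll the UPS less aggressively than the defaults
         pollfreq = 30
         pollinterval = 15
         # ignore the unit's own (often unreliable) low-battery flag...
         ignorelb
         # ...and define custom low-battery thresholds instead
         override.battery.charge.low = 20
         override.battery.runtime.low = 300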