
DaKarli

Members
  • Posts

    20
  • Joined

  • Last visited

Posts posted by DaKarli

  1. On 12/21/2022 at 1:35 AM, tkohhh said:

    Are you saying we should add the following text to that text block?

    username map = /boot/config/custom/etc/samba/usermap.txt

     

    Or are you saying we need to add a [global] section to that text block, along with the username map, like so:

    [global]
    username map = /boot/config/custom/etc/samba/usermap.txt

     

    Hi tkohhh, sorry for my late reply, but I had turned off notifications... ;-)

    Maybe you already solved it for yourself but here's an explanation for all:

     

    Case A: If you already HAVE a [global] block in this configuration, just add the username map line to it.

    Case B: If your configuration is empty or has no [global] block yet, add the block AND the line (see the sketch below).
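
    For Case B, the relevant part of your smb-extra.conf would then look roughly like this (the [myshare] stanza is only a placeholder to show that the [global] block sits alongside any share sections you may already have there):

    [global]
    username map = /boot/config/custom/etc/samba/usermap.txt

    [myshare]
    path = /mnt/user/myshare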

    🤗

  2. On 5/9/2020 at 10:13 PM, mgutt said:

    And it becomes even more strange. I opened the share through its IP address instead, entered the credentials, and am now able to open the subfolder without problems 🤪

     

    Seems to be Windows related, or what do you think?

     

    EDIT: Ok, Windows Reboot solved the issue. Ok, thank you Windows 🙄

     

     

    I just wanted to add that what you describe in the first sentence above is "regular" Windows behaviour.

    Simply put: accessing the same file server once by its name and once by its IP address counts as two different connections for Windows.

    That is why you had to enter your credentials one more time, and why, in this case, it solved half of your problem.
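
    You can see the same effect from the Windows side: the two commands below target the same server but create two separate SMB sessions, each with its own credentials (server name, address and user are only examples):

    net use \\TOWER\share /user:myuser
    net use \\192.168.1.10\share /user:myuser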

     

    Regards, DaKarli. 👍

  3.  

    Attention! Don't make the above-mentioned changes to the shadow/passwd files!

    Even though it will work, it is only a poor workaround... 😉

     

    There is a better, officially supported way of adding a Microsoft account to a Samba server.

     

    Go to this thread/message where I described it in more detail:

     

     

    Have fun and with best regards

    DaKarli.

  4. How to allow using a Microsoft Account (e.g. [email protected])  to access a share instead of using a Windows Local User?
    -> Easy to achieve: ;-)

     

    Create a directory/file like this on your Unraid boot stick:
    If you do this from the Unraid shell, the owner/group of the file should automatically be root:root with a permission mask of 0600.

     

    mkdir -p /boot/config/custom/etc/samba
    nano /boot/config/custom/etc/samba/usermap.txt
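
    You can quickly verify the resulting ownership and mode (which, as noted above, should already be root:root and 0600 when the file is created from the Unraid shell); you should see something like -rw------- 1 root root:

    ls -l /boot/config/custom/etc/samba/usermap.txt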

     

    Remember, the user you want to map to your Windows user has to already exist in Unraid or it has to be created first in the Unraid GUI.

    Now that you are in the editor for the file, add a line like the following for every user you want to map.

    Unraid(Linux) user on the left, Microsoft Account user on the right:

     

    user = [email protected]

     

    Save the file and close the editor.

     

    Now go to your SMB settings in the GUI and add the following line to the [global] section of your smb-extra.conf file:

    Note: the path and filename must match what we created above... ;-)

     

    username map = /boot/config/custom/etc/samba/usermap.txt

     

    Now you can either restart your server or go to the shell and just restart your Samba server with:

     

    samba restart
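
    If you want to double-check that Samba actually picked up the new setting, you can dump the effective configuration with testparm (part of the Samba suite); the grep is just for convenience:

    testparm -s 2>/dev/null | grep "username map"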

     

    Now everything should work as expected and you should no longer have a problem using your [email protected] Microsoft account to log in to your Unraid Samba share.

     

    Those who formerly followed the hint to create a local user account under Windows can now revert to using a Microsoft account.

    Though a Windows local account is less "chatty" with regard to sending data to Microsoft, such an account is also less useful for the many functions that need to be synced through a cloud-sync service.

     

    Have fun, with best regards

    DaKarli

  5. 57 minutes ago, ich777 said:

    I think you overlooked this line:

     DOCKER_IMAGE_TYPE="folder"

    I don't have a docker.img

     

    In the above location are the files and folders that, in your case, sit inside the docker.img; in my case, without an image, they are stored directly in this path.

     

    Yes, sure, this is the naming scheme from the ZFS documentation, but maybe something in Unraid or Docker doesn't like it... Whatever the case may be, at least you can try it. ;)

    As said, I run Docker and everything else from my ZFS mirror without an issue so far.

     

    Yes, but if I'm not mistaken this is the recommended way to shut down Unraid, though I think there are many ways to shut down Unraid. ;)

     

     

    Hey ich777,

    yes, I overlooked this line. OK, now I understand why your Dockers are working: they use the regular file system and are not mounted inside a .img. That should work fine with ZFS as well.

    In the end I'll switch to exactly this, as I see no advantage in using an .img file for Docker compared to a directory (see the sketch below).
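
    A rough sketch of what that would mean for docker.cfg, based on the folder-based settings quoted above (the path is only an example, and normally you would switch this via the Docker settings page in the GUI rather than editing the file by hand):

    DOCKER_IMAGE_TYPE="folder"
    DOCKER_IMAGE_FILE="/mnt/z-syspool/UnRaid/system/docker/"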

     

    Regarding the shutdown you are right: one should use the provided shutdown script, which in this case is the powerdown command, even though in fact it does nothing special.

    While testing this system I don't really need to care, as my only intention is to bring the system down by any means... 😈

     

    In the meantime I found an empty USB stick and tried what I said above.

    I did a clean pool creation without any special options, with testpool as its name.
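
    For reference, such a plain pool without any of my usual tuning options boils down to something like this (the device name is only an example):

    zpool create testpool /dev/sdX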

     

    In short, without further explanation or screenshots: it still didn't work this way.

     

    I believe it has something to do with how Unraid writes to the docker.img file while that file sits on ZFS.

    At first I thought it might additionally have to do with the btrfs filesystem inside the docker.img causing trouble on ZFS, but trying XFS didn't work either; it hangs at the very same point.

    So for the moment this is the end of my investigation, because a working solution has been found that, in my opinion, has no drawbacks.

     

    Thanks to all for being with me, with best regards

    DaKarli

  6.  

    17 minutes ago, ich777 said:

    As said above, I've run a Docker Path and not the Docker Image on my ZFS Pool (/mnt/nvme)

    But looking at your docker.cfg it looks like your docker.img resides on your ZFS as well, so in the end we have the same setup regarding the docker.img and appdata path.

     

    42 minutes ago, ich777 said:

    DOCKER_IMAGE_FILE="/mnt/nvme/system/docker/"

    Or do I get something wrong?

     

    17 minutes ago, ich777 said:

    also maybe try to use a pool that has no special characters in its path, only letters and numbers,

    This shouldn't be a problem, as my pools have had this naming scheme for a long time: https://docs.oracle.com/cd/E23824_01/html/821-1448/gbcpt.html

    Nevertheless, I'll give it a try when I have a spare hard disk, or maybe on a USB stick.

     

    18 minutes ago, ich777 said:

    Only having one "test" Windows 11 VM image on my ZFS pool, maybe it's also related to how you set up the pool. I'm only running a mirror without anything special, but with these parameters in the go file:


    # ZFS Tweaks, Autotrim & 8GB ARC memory limit
    zpool set autotrim=on SSD
    echo 8589934592 >> /sys/module/zfs/parameters/zfs_arc_max

     

    I also limited the memory use of ARC cache and my ZFS pools are created as mirrors like this:

    zpool create -O compression=lz4 -o ashift=12 -o autotrim=on -O xattr=sa -O acltype=posixacl -O aclinherit=passthrough -O atime=off -O relatime=off -O dnodesize=auto -O mountpoint=/mnt/z-syspool -O normalization=formD syspool mirror nvme1 nvme2

    The same for the dataset:

    zfs create -o snapdir=visible -o recordsize=16k -o mountpoint=/mnt/z-syspool/UnRaid syspool/UnRaid

    ...so nothing special or out of scope for zfs here.

    Nevertheless, I'll also try a plain flat standard ZFS pool if I have a spare disk.

     

    55 minutes ago, ich777 said:
    powerdown -r

    Made no difference; a hard reset is still necessary. I only used "shutdown" because it is one of the standard Linux tools, and it follows a shutdown procedure if you have set one up.
     

    Looking at what "powerdown" actually does, I found something very interesting: ;-)

    /usr/local/sbin/powerdown:

    #!/bin/bash
    logger "/usr/local/sbin/powerdown has been deprecated"
    if [[ "$1" == "-r" ]]; then
      /sbin/reboot
    else
      /sbin/init 0
    fi

    So even this command does nothing other than reboot or init 0 the system, which is the same thing shutdown does.

     

    Since the system does nothing and does not reboot, I use a hard method to bring it down (!! use with extreme caution on a production system !!):

    echo 1 > /proc/sys/kernel/sysrq   # enable the magic SysRq interface
    echo b > /proc/sysrq-trigger      # reboot immediately, without syncing or unmounting filesystems

    As a result, I don't have to push the reset button... ;-)

     

    To get some debug output, I tried to run the docker run command above directly in a shell with the -D option, but with or without it I don't get any debug information and the system simply hangs at this screen:

    [Screenshot: Docker container installation hanging]

    At this point my knowledge of Docker, and of what to do when it doesn't work, ends...

    And I don't know if I have enough time to dig deeper to find out what stops Docker from installing/starting my container (in this case I've chosen Krusader).

     

    As a last measure I will try to do all of this on a clean ZFS setup (as mentioned above) without my optimizations and see if that helps.

    I'll come back here and report. 🙂

     

    Regards

    DaKarli.

  7. 18 hours ago, dlandon said:

    For the record, UD does not limit writes to devices properly mounted at /mnt/disks/.  The protection is for incorrect writes directly to /mnt/disks/ that end up in the tmpfs.  Those writes would not be written to a device, but instead to the RAM file system.

    dlandon, thanks once again for the further clarification.

     

    So in fact what I did was already fine and counts as a proper mount, right?

    #mount
    ...
    syspool/UnRaid on /mnt/disks/z-syspool/UnRaid type zfs (rw,noatime,xattr,posixacl)
    tmpfs on /mnt/disks type tmpfs (rw,relatime,size=1024k,inode64)
    ...

    Regards

    DaKarli

  8. 19 hours ago, ich777 said:

    Do I understand that right that you've had issues with your Docker(s) when using the path /mnt/disks/whatever?

    I now run basically everything from ZFS (mounted to /mnt/nvme): my Docker path for the Docker images, libvirt image, appdata, domains, system,... and have never had a single issue with it.

     

    @steini84 also told me that multiple people report issues doing it this way, but I haven't had a single issue so far.

     

    I think, since this is a plugin that extends Unraid's functionality, there is no right or wrong way to do it.

    It's always up to the user where to set the ZFS mount points, and from my perspective, if it works for you, leave it as it is...

     

    Unfortunately, Docker on ZFS still gives me trouble (just tried again to confirm, see the following pictures and description).

    VMs on ZFS work, even with the libvirt.img residing on ZFS, but as a precaution I still keep the libvirt.img in the standard folder to make sure it is not the .img that causes trouble.

     

    With Docker set to a path on my ZFS pool, the installation, e.g. of Krusader, fails/hangs at the point shown and the GUI stops responding.

    I can only access the machine on the host terminal, and even shutdown -r -n now does not work, so a hard reset is necessary.

     

    I tried to install Krusader with Container Path = /mnt/z-syspool/UnRaid/appdata/krusader/

    [Screenshot: Krusader container template]

    My docker.cfg looks like this:


    DOCKER_ENABLED="yes"
    DOCKER_IMAGE_FILE="/mnt/z-syspool/UnRaid/system/docker/docker.img"
    DOCKER_IMAGE_SIZE="20"
    DOCKER_APP_CONFIG_PATH="/mnt/z-syspool/UnRaid/appdata/"
    DOCKER_APP_UNRAID_PATH=""
    DOCKER_CUSTOM_NETWORKS=" "
    DOCKER_TIMEOUT="10"
    DOCKER_LOG_ROTATION="yes"
    DOCKER_LOG_SIZE="50m"
    DOCKER_LOG_FILES="1"
    DOCKER_AUTHORING_MODE="no"
    DOCKER_USER_NETWORKS="remove"

    The docker.log:


    time="2022-03-21T12:17:07+01:00" level=warning msg="deprecated version : `1`, please switch to version `2`"

    The error.log (IP addresses and names have been "x"ed):

    By the way, the same error can be seen with IPv4, so IPv6 is not the cause.


    2022/03/21 12:30:54 [error] 27760#27760: *8320 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 2001:x:x:x:x, server: , request: "GET /plugins/dynamix.local.master/include/LocalMaster.php HTTP/1.1", subrequest: "/auth-request.php", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "ryzenbrain.xxx.xx", referrer: "http://ryzenbrain.xxx.xx/Apps/AddContainer?xmlTemplate=user:/boot/config/plugins/dockerMan/templates-user/my-Krusader.xml"


    2022/03/21 12:30:54 [error] 27760#27760: *8320 auth request unexpected status: 504 while sending to client, client: 2001:x:x:x:x, server: , request: "GET /plugins/dynamix.local.master/include/LocalMaster.php HTTP/1.1", host: "ryzenbrain.xxx.xx", referrer: "http://ryzenbrain.xxx.xx/Apps/AddContainer?xmlTemplate=user:/boot/config/plugins/dockerMan/templates-user/my-Krusader.xml"


    2022/03/21 12:30:54 [error] 27760#27760: *8329 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 2001:x:x:x:x, server: , request: "GET /plugins/unassigned.devices.preclear/assets/sweetalert2.js?_=1647861614714 HTTP/1.1", subrequest: "/auth-request.php", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "ryzenbrain.xxx.xx", referrer: "http://ryzenbrain.xxx.xx/Apps/AddContainer?xmlTemplate=user:/boot/config/plugins/dockerMan/templates-user/my-Krusader.xml"


    2022/03/21 12:30:54 [error] 27760#27760: *8329 auth request unexpected status: 504 while sending to client, client: 2001:x:x:x:x, server: , request: "GET /plugins/unassigned.devices.preclear/assets/sweetalert2.js?_=1647861614714 HTTP/1.1", host: "ryzenbrain.xxx.xx", referrer: "http://ryzenbrain.xxx.xx/Apps/AddContainer?xmlTemplate=user:/boot/config/plugins/dockerMan/templates-user/my-Krusader.xml"


    2022/03/21 12:30:56 [error] 27760#27760: *8356 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 2001:x:x:x:x, server: , request: "GET /plugins/dynamix.my.servers/webComps/unraid.min.js?v=1647042912 HTTP/1.1", subrequest: "/auth-request.php", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "ryzenbrain.xxx.xx", referrer: "http://ryzenbrain.xxx.xx/Apps/AddContainer?xmlTemplate=user:/boot/config/plugins/dockerMan/templates-user/my-Krusader.xml"

     

    2022/03/21 12:30:56 [error] 27760#27760: *8356 auth request unexpected status: 504 while sending to client, client: 2001:x:x:x:x, server: , request: "GET /plugins/dynamix.my.servers/webComps/unraid.min.js?v=1647042912 HTTP/1.1", host: "ryzenbrain.xxx.xx", referrer: "http://ryzenbrain.xxx.xx/Apps/AddContainer?xmlTemplate=user:/boot/config/plugins/dockerMan/templates-user/my-Krusader.xml"


    2022/03/21 12:32:37 [error] 27760#27760: *7791 upstream timed out (110: Connection timed out) while reading upstream, client: 2001:x:x:x:x, server: , request: "POST /Apps/AddContainer?xmlTemplate=user:/boot/config/plugins/dockerMan/templates-user/my-Krusader.xml HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "ryzenbrain.xxx.xx", referrer: "http://ryzenbrain.xxx.xx/Apps/AddContainer?xmlTemplate=user:/boot/config/plugins/dockerMan/templates-user/my-Krusader.xml"

     

    File permissions also look ok on the regular share and on the ZFS share:

    [Screenshot: file permissions on the regular share and the ZFS share]

     

    According to the error.log, I'd say it is not a problem with the .img files themselves, but rather a problem somewhere in the way PHP is accessing these files?

     

    If there is something else you want me to try, just drop me a line.

     

    Regards

    DaKarli.


  9. On 2/16/2022 at 9:31 PM, andber said:

    Don't any of you pros happen to live near Konstanz ..... coffee, beer and a bounty would be guaranteed!! :)

     

    @andber Munich to Konstanz wouldn't be thaaat big a deal if it weren't for the current fuel prices... and unfortunately my car runs neither on coffee nor on beer, and a bounty doesn't work either... 😋

     

    But maybe the following hints will help you.

    You have set aclinherit=passthrough on your ZFS pool.

    That may not be enough if you want to use permission inheritance (ACLs) under Windows with AD, while the "original" file system is being "emulated" by Samba.

     

    So my hot tip would be to additionally add the following lines to the SMB Extras config:

    I would consolidate all options that are identical for every share under the [global] section, so they apply to all shares at once.

    The share itself then only needs the path= argument. Makes things much clearer...

    ...
    #unassigned_devices_end
    
    [global]
    vfs objects = acl_xattr
    map acl inherit = yes
    inherit permissions = yes
    inherit acls = yes
    store dos attributes = yes
    ...
    the rest of your settings
    ...
    
    [spider]
    path=...

     

    I'm curious whether that alone already works the magic... (Otherwise I'll really have to dig my AD out of mothballs to test this in combination with Unraid...)

     

    Greetings and cheers

    DaKarli.

     

  10. 52 minutes ago, dlandon said:

    It's too early to have this discussion about the mount points.  When ZFS is implemented in Unraid, UD will probably be able to mount ZFS disks that are not in the array.  I expect that UD would mount legacy ZFS disks so the data could be copied into the array or a new Unraid ZFS pool.  The mount point for UD mounted ZFS disks would be /mnt/disks/.  That was recommended for now to avoid the FCP /mnt/ warning.

     

    This does not mean the initial installation of UD.  When your server is booted, plugins are installed in alphabetical order.  UD has to complete its installation and set up the protection on the /mnt/disks/ folder before anything can be mounted, or UD detects that mount and insists on a reboot to clear the mount on /mnt/disks/ so it can install the protection.  In your situation the ZFS mounts are auto-mounting to /mnt/disks/ before UD can apply its protection mechanism, and that's why you see the reboot message.

     

    For now mount your ZFS disks at /mnt/zfs/ and ignore the FCP warning.

    Thanks @dlandon for the explanation. Another piece of the puzzle completing the big picture.

     

    Just as I thought, it's too early yet... 😉


    The good thing about ZFS is that even if everything changes in how Unraid implements it, a ZFS pool can simply be exported/imported with just two commands (see below), so you can run your Z-pool on ANY system that supports ZFS. This file system has been rock-solid and really bullet-proof since I started using it back in about 2007/08 on Solaris 10... 😎
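
    For the record, those two commands are just the standard export/import (the pool name is only an example):

    zpool export mypool
    zpool import mypool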

     

    ...now going to upgrade to Unraid 6.10.0-rc4 to see what has changed under the hood... 😋

  11. On 11/1/2021 at 5:53 PM, aleks2204 said:

    So I have now tried to add the ZFS mount to /etc/fstab.

    I googled a bit and found something, but this warning still keeps appearing.

    See screenshot.

    Maybe someone has an idea.

    Thanks and best regards, Aleks.

    [Screenshot: /etc/fstab]

     

    Hi Aleks,

     

    In the terminal, enter

    zfs list

    and you will see all the mountpoints you assigned when creating the ZFS pool and the ZFS datasets.

    A (simple) dataset is created with:

    zfs create -o mountpoint=/xxx/yyy poolname/dataset

    Here xxx and yyy could in principle be anything, but you're better off using the name of your Z-pool for xxx and the name of the dataset for yyy, so things stay tidy.

    A "normal" mountpoint would be, for example, /mnt/MyZPool/MyDataset.

    If possible, avoid fairly generic names such as "FILES" for pools, datasets and mountpoints, since these can sometimes cause problems when such names are reserved for something else in the system (see e.g. https://docs.oracle.com/cd/E19253-01/819-5461/gbcpt/index.html).

    And don't use the name "isos" for your ISO share, because it is already reserved by Unraid and leads to problems I have stumbled over myself.

     

    If your Z-pools/datasets are already mounted, you can still change the mountpoints afterwards, with the obvious caveat that every configuration using that mountpoint, such as your smb.config, has to be adjusted as well!

     

    This unmounts all dataset mounts in one go:

    zfs unmount -a

    With this command you can then give the datasets a new mountpoint:

    zfs set mountpoint=/mnt/xxx/yyy poolname/dataset

    And finally, the following command mounts all ZFS pools/datasets back into the system:

    zfs mount -a

     

    BY THE WAY: ZFS provides the mountpoints to the system automatically, on its own, and the mountpoints do NOT have to be entered in /etc/fstab!

    See https://zfsonlinux.org/manpages/0.7.13/man8/zfs.8.html under "Mount Points":

    "...ZFS automatically manages mounting and unmounting file systems without the need to edit the /etc/fstab...".

     

    Apart from that, the error you describe above somehow sounds like a quota issue, so ask yourself whether you have set up quota rules somewhere that could be the source of the problem. That could have been configured either when creating the ZFS itself or somewhere in your Unraid configuration. If so, consider whether you really need quotas and, if not, disable them. Beyond that, I can't help further with this error if my other tips don't work.

     

    Last but not least, you'd better not use the ZFS shares the way it appears above:

    Quote

    "\\Microserver\files" --> ZFS Pool

    "\\MICROSERVER\isos" --> Array

     

    but rather always in a hierarchical form:

    /zfspool
    /zfspool/dataset1
    /zfspool/dataset2
    /zfspool/iso-dateien
    etc...

     

    And very last but not least, I would create a separate dataset in ZFS for every subdirectory you want to create.

    In other words, don't just create a Z-pool (without any datasets, which would work in principle) and then create all further subdirectories (e.g. ISOs, System, VMs, Games, Music, etc.) from Windows; do this preparatory work in ZFS instead.

    Advantage: you can tune each of these datasets for the kind of data it will hold (e.g. many small files vs. many large files), which brings extra performance and also lets you take more targeted ZFS snapshots per dataset rather than one snapshot of "everything" (see the sketch below).

    Small disadvantage: the folder you create as a dataset can of course no longer be deleted directly from Windows. But who would want to, once everything is nicely organized?
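
    A minimal sketch of such per-dataset tuning (pool and dataset names as well as the recordsize values are only examples, not recommendations):

    zfs create -o recordsize=1M mypool/Movies
    zfs create -o recordsize=16k mypool/VMs
    zfs snapshot mypool/VMs@before-update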

     

    Have fun putting this into practice

    DaKarli

  12. On 3/18/2022 at 8:00 PM, dlandon said:

    I agree.  Using /mnt/ and having VMs or Docker containers misconfigured can cause some serious issues.  UD sets write limits of 1MB on /mnt/disks/, /mnt/remotes/, and /mnt/rootshare/ to keep misconfigured VMs and Docker containers from filling the tmpfs and crashing Unraid.  Install UD and ignore it.  You don't have to use any of its features.

     

    The only thing is the ZFS mounts have to happen after UD has installed.

    @dlandon @Squid @ich777

     

    I've put you three on copy because I think the question "which is the best mountpoint for ZFS" needs to be discussed and needs a clear statement, given the hopefully soon-to-be-realized plans to natively integrate ZFS as an underlying filesystem, though I also understand that we may be at too early a stage for this discussion...

     

    I am aware that I can mount my ZFS wherever I want (technically speaking), and during my first steps with Unraid I realized that Unraid behaves differently from other Linux systems I have seen. To be honest, even after reading through a lot of posts in this forum, I still don't completely understand how Unraid handles mounts, config files, its whole boot-up, the fact that the system resides in RAM after boot, etc.

    I have quite good knowledge of all the technology involved, but until I have the big picture here I am poking around in the fog a bit... ;-)

     

    My first attempt was to mount ZFS under /, but I realized I couldn't choose e.g. the VM path in the file picker if it resided there. My second approach, after disabling user shares in the global share settings, was to mount my ZFS under /mnt/disks/zfs (as suggested), which allowed me to use the file picker but led to problems with Docker on my ZFS share (I had read the warning but tried it nevertheless).

    Re-enabling user shares and putting Docker back onto a regular Unraid array share solved this, but then I got into trouble with the file picker, as it only showed the paths provided by user shares (appdata, domains, system, isos). So the file picker was useless for what I intended.

    Not being happy with such a long path for my ZFS mount, I went back to mounting ZFS under /mnt/zfs, which led to UD warning me about it... OK, I can ignore that if I want to, but on the other hand, what is the impact?

     

    Now @dlandon said that UD sets write limits on the /mnt/disks path! So that definitely means it would not be advisable to put ZFS in this path!

    To sum up, and as I am (for the moment) OK with not being able to use the file picker, I am going to leave my ZFS mounted under /mnt/zfs and will click to ignore the warning in Fix Common Problems.

     

    What I don't get is what you mean by "...the ZFS mounts have to happen after UD has installed...".

    I installed UD, then created my ZFS pools and datasets with the mountpoints at /mnt/zfs-pools/zfs-datasets, and for the moment I see no problems with that.

    Or do you mean something else by that?

  13. 4 hours ago, ich777 said:

    Why don't you change the mount point to /mnt/zfs?

    This would be the easiest method and you don't get this warning...

     

    Something like this should get you covered:

    zfs set mountpoint=/mnt/zfs yourpoolname

    (I would recommend stopping the array first, changing all directories to the right location, and then starting the array again)

     

    Unfortunately, if you do so, the "Fix Common Problems" plugin (see screenshot below) gives you this warning, which, of course, you could simply ignore...

    But then again, why would it throw this warning if it made no sense?

    So I'd second the question: which is the right path to mount ZFS pools? Maybe one of the Unraid devs could have a look and answer... Thanks!

     

    [Screenshot: Fix Common Problems warning about the mount location]

     

  14. 21 hours ago, Squid said:

    Yes that's correct.  It was a design consideration to make everything a ton easier and also handle if the user manually deleted the scripts.  The script execution engine checks for the existence of the script before executing to handle this.

     

    Then it should show up on the list with the same settings as the one previously deleted.

     

    OK, so if the script execution engine checks this beforehand, then, as I said, it may not be a problem.

    So it would also be no problem to manually delete old entries from the .json file, if someone wants to go that extra step.

     

    Regarding your second point, I just want to add that the risk lies somewhere else: if you add a different script but with the same name, you may not be aware that this new script will be executed by the old entry in the .json file right after you save it.

    But as I already said, these may be rare circumstances; I just wanted to point out possible risks with the handling as it is.

    Cheers ! 😉

  15. Hi @Squid

     

    I just found out something weird and maybe of interest regarding the behaviour of the "User Scripts" plugin.

     

    If I simply delete a script in the GUI, the script itself is deleted in /boot/config/plugins/user.scripts, but within the schedule.json file the definitions of the deleted scripts are still there.

    (see screenshot below: the marked ones had been deleted before)

    [Screenshot: schedule.json still listing the deleted scripts]

     

    In fact, as long as the script file itself no longer exists, it would not be a problem.

     

    But 1) every time the plugin loads this schedule.json file, it loads old, no-longer-needed entries.

    And 2), going one step further, imagine what could happen if you create a new script with the same name or the same file name, or if you manually put a script with the same file name into this directory (a rough check for such leftovers is sketched below).
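
    For anyone who wants to check for such leftovers, a rough sketch; it assumes schedule.json stores the script paths as quoted strings under /boot/config/plugins/user.scripts/scripts/, which is an assumption about the file layout, not a documented interface:

    # list schedule.json entries whose script file no longer exists (assumed layout)
    grep -o '"/boot/config/plugins/user.scripts/scripts/[^"]*"' /boot/config/plugins/user.scripts/schedule.json \
      | tr -d '"' | while read -r p; do
          [ -e "$p" ] || echo "orphaned entry: $p"
        done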

     

    Of course, all this may only be a problem in rare circumstances, but in my opinion the file and config handling of plugins on a system as important as a NAS should be as correct as possible, to prevent unintended errors as far as possible.

     

    At this point, nevertheless, a *BIG* thank you for this plugin. It helps a lot!

     

    Regards DaKarli. 🤗

  16. 37 minutes ago, SimonF said:

    NUT is running OK for me on RC3 as a slave, but I haven't changed any files.

     

    Does your file list show as blank?

     

    [Screenshot: NUT Config Editor file list]

     

    Hi SimonF,

    yes, as a slave it would be no problem... But I need to keep the UPS on this machine, as it is the one switched on 24/7.

     

    And yes, the file list is just blank as in your screenshot. Any hints about this?

  17. @SimonF @dmacias

    Hey devs,

    thanks for your efforts to bring this important plugin to Unraid and to keep it running!

     

    Unfortunately, for me it seems to be broken in at least one respect on the latest Unraid version, 6.10.0-rc3.

    (Why that version? I need it for Win 11 and TPM!)

     

    Problem: the NUT Config Editor has no access to the config files in /etc/nut. Maybe this is due to (missing) permissions in the jqueryFileTree script, or to generally not being allowed to open system folders, which would make sense from a security perspective.

    The file bar just displays nothing.

    Additionally, manually editing the config files in /etc/nut fails, as they (seem to) get overwritten by the plugin when the plugin or the system is restarted.

    Last but not least, I tried to change the line in NUTEditor.page from data-pickroot="/etc/nut" to "./nut" so that it would pick up the files in /usr/local/emhttp/plugins/nut/nut, which seem to be the "master files" for the overwriting mentioned above, but that doesn't work either.

    As a last resort I am going to edit these "master files" to what I need and hope it will work that way.

     

    I am using a CyberPower VP700ELCD, and this thing needs a lot of tuning of the NUT settings, e.g. a different driver.parameter.pollfreq and driver.parameter.pollinterval, as well as some override tweaks for battery.charge.low and battery.runtime.low together with ignorelb, to run smoothly and safely (see the sketch below).
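
    For illustration only, such tuning would typically end up in NUT's ups.conf roughly like this; the UPS name, driver choice and all values are assumptions for this model, not tested settings:

    [cyberpower]
      driver = usbhid-ups          # assumed driver for this CyberPower model
      port = auto
      pollfreq = 30                # example values only
      pollinterval = 15
      override.battery.charge.low = 20
      override.battery.runtime.low = 300
      ignorelb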

     

    I hope you can reproduce my error and provide the community with some updates to the plugin so that this important thing works again as expected.

    For a NAS, one of the most important things is a working UPS and a working procedure to safely shut down the system if power fails.

     

    Thanks and regards

    DaKarli

     

    Update: after editing the above-mentioned master files, it still does not work; in fact it does nothing at all and only shows the status STOPPED...
