
Posts posted by glennv

  1. Oh crap, just updated to 6.11 and also hit the NerdPack wall. I rely on several tools in there.

    Where can I get these? I remember a similar situation in the past where I had to manually put them into a directory to have them loaded, but that was a long time ago (rough sketch below the list). Quick tip please:

    - bc (most important as used in lots of bash scripts)

    - screen

    - tmux

    - unrar

    - perl

    - iperf3
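
    A rough sketch of the directory trick I mean (assuming you grab matching Slackware .txz packages from a mirror you trust, and that packages dropped in /boot/extra still get auto-installed at boot):

    mkdir -p /boot/extra                  # Unraid installs packages found here at boot
    cp bc-*.txz screen-*.txz tmux-*.txz unrar-*.txz perl-*.txz iperf3-*.txz /boot/extra/
    installpkg /boot/extra/*.txz          # or just reboot and let it pick them up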

     

  2. I have lots of foreign movies (old DVD conversions etc.) spread all over my library with VobSubs that have no SRTs available online, so I have to keep the VobSub track.

    Is there a way to tell Tdarr to skip converting a file if it detects a VobSub track?

    I cannot risk losing any of these while converting my entire library with Tdarr, and manually checking thousands of videos to see which would be affected is a no-go.

    Any suggestions?
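
    One way to at least list the affected files up front would be something like this sketch (assuming ffprobe is available and the library sits under /mnt/user/Movies; adjust the path and extensions), though that still leaves the automatic skipping part:

    find /mnt/user/Movies -type f \( -name '*.mkv' -o -name '*.mp4' -o -name '*.avi' \) -print0 | \
      while IFS= read -r -d '' f; do
        # dvd_subtitle is the codec name ffprobe reports for VobSub tracks
        ffprobe -v error -select_streams s -show_entries stream=codec_name -of csv=p=0 "$f" \
          | grep -q dvd_subtitle && echo "$f"
      done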

     

  3. You're welcome. Play with it, and I think with all the info so far you will figure it out (and learn more about ZFS in the process).

    I also started that way, step by step. But ZFS and its commands (only 2 commands to do everything: zpool <blabla> and zfs <blabla>) are super simple and very well documented on the internet.

    Have fun learning. I have to do some other stuff so I won't be replying any more today. Have a great evening.

  4. 17 minutes ago, EricM said:

     

    And also my mount point is a little bit weird. I mounted my zpool to /mnt/disks/zfs but now it's combined with the old mount point: it's /mnt/disks/zfs/mnt/user/zfs/finalfolder. But I want it to be /mnt/disks/zfs/finalfolder. And I can't copy or paste or do anything there.

     

    Then there is still something wrong with the mountpoint, or you manually mounted it using mount commands. You can see that if you check:

    zpool get altroot zfs

    It should show only the root folder (so without the name of the pool itself): altroot = "/mnt/disks".

    It will then always append the name of the zpool (in your case "zfs") to create the full mount path /mnt/disks/zfs.

    In there you will see the content of the pool. 

    If it does not show that, export again and reimport using the commands given before. The import mounts it for you at the right place, so there is no need to manually mount it.

    I must admit I have seen it remember the old mountpoint in the altroot value in the past, but a few export/imports solved it in the end, I think.

    It's best to specify the right mountpoint during pool creation to prevent this, but it should be possible to adjust it by using export/import.

    If not and it gets messed up at every boot, you can add the 2 export/import commands to your go file (/boot/config/go) so it will do an export and import to the right mountpoint at every boot.
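    Roughly like this (a sketch, assuming your pool is also named "zfs" and that the ZFS plugin is already loaded by the time the go file runs):

    # appended to /boot/config/go
    zpool export zfs
    zpool import -R /mnt/disks zfs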

     

    If you want you can create a dataset named finalfolder in the zpool, which will then be mounted under /mnt/disks/zfs/finalfolder.

    Datasets are sub-devices within zpools with their own characteristics; they inherit the characteristics of the main pool unless set otherwise.

    You can simply create one (after you have sorted out the mountpoint to be /mnt/disks/zfs) by using the command:

    zfs create zfs/finalfolder
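    To verify the result, and as an example of giving the dataset its own characteristic (the compression property here is just an illustration, not something you must set):

    zfs list -o name,mountpoint zfs zfs/finalfolder
    zfs set compression=lz4 zfs/finalfolder      # overrides the value inherited from the pool
    zfs get -o name,value,source compression zfs/finalfolder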

     

     

     

  5. 7 minutes ago, EricM said:

     

     

    In my case there is this:

    #unassigned_devices_start
    #Unassigned devices share includes
       include = /tmp/unassigned.devices/smb-settings.conf
    #unassigned_devices_end
    [Backup]
    path = /mnt/zfs
    public = yes
    export = yes
    browseable = yes
    writeable = yes
    create mask = 0777
    directory mask = 0777
    vfs objects =

    What do I need to write exactly? And the ghost share is called Backup; is this the [Backup] here?

    Yes indeed. Your ghost share is [Backup].

    Between the brackets you define the name under which you want the share to be visible in Windows.

    So you could reuse that and just change the "path" variable underneath to the actual mountpoint of your ZFS pool (e.g. /mnt/disks/zfs).

    The only thing extra I have (which you may or may not need) is:

    write list = yourusername

    valid users = yourusername

    With yourusername replaced by the Unraid user you want to have access to the share.
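
    Putting it together, the adjusted block could look roughly like this sketch (keep or drop the other lines from your original [Backup] as you see fit):

    [Backup]
       path = /mnt/disks/zfs
       browseable = yes
       writeable = yes
       create mask = 0777
       directory mask = 0777
       write list = yourusername
       valid users = yourusername
       vfs objects =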

    The last thing you need to do is open up the permissions of the ZFS mountpoint on Unraid:

    chmod 777 /mnt/disks/zfs
    chown nobody:users /mnt/disks/zfs

     

    Then restart Samba (or the whole array).
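
    On the command line that would be roughly this (a sketch; if in doubt, just stop and start the array from the GUI instead):

    /etc/rc.d/rc.samba restart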

  6. Check my previous screenshot for the content you can add to smb-extra to share your ZFS. Maybe the ghost share you see is also in there ;-)

    [screenshot: smb-extra config example for sharing ZFS pools]

     

    So in your case you could add something like this:

    [My-AMAZING-ZFS-SHARE]
       path = /mnt/disks/zfs
       comment =
       browseable = yes
       # Public
       writeable = yes
       read list =
       write list = yourusername
       valid users = yourusername
       vfs objects =

     

  7. 3 minutes ago, EricM said:

    Ah ok, I thought it's a stable mountpoint and you have to manually export it. That sounds really cool, so I will try this now.

     

    Think about it: everything in the Unraid OS is already dynamic, as it is built up in memory. During boot ZFS is installed every time from scratch. Then (if set as such) it will import all available pools. In the pool parameters the mountpoint is set (altroot) and used/created during import.

     

    With "zpool get all <poolname>" you will see all current parameters for the pool.

  8. Just now, EricM said:

    Ok, so a zpool exports automatically when the server is shut down and imports when it is started? So basically if you shut down the server you could just unplug those two drives, connect them for example to a PC and start the zpool there with the import?

    Yes ;-) 

    That is the nice thing about ZFS. It's very portable.

  9. Ok, so when you reboot without the drives connected, you check with zpool status; there should be no pool (if there is a pool, run the export command).

    Then plug in the drives and wait until they are recognised (it should not auto-import the pool). Then run the import command I gave you, and check again afterwards with zpool status.
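
    So the whole sequence looks roughly like this:

    zpool status                     # should list no pools at this point
    zpool export zfs                 # only needed if a pool still shows up
    # plug in the drives and wait until they are detected, then:
    zpool import -R /mnt/disks zfs
    zpool status                     # the pool should now be ONLINE under /mnt/disks/zfs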

  10. 8 minutes ago, EricM said:

    I tried to export the zpool, but I get this error: cannot unmount '/mnt/user/ZFS': unmount failed

     

    That can have several causes, for example the pool being in use/busy. If you don't know how to fix that, you can do what you did before: shut down, disconnect the 2 ZFS drives, and start the system. Then, if you start the array as you mentioned, you will have all your normal shares back and the ZFS pool is not imported (as all its drives are missing).
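
    (If you would rather hunt down what is keeping it busy, something like this sketch usually points at the culprit, assuming fuser/lsof are available on your box:)

    fuser -vm /mnt/user/ZFS      # lists processes holding the mountpoint open
    lsof +D /mnt/user/ZFS        # alternative, recursive listing of open files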

     

    Run the export command just to be sure the pool is gone.

    Then you can plug in the drives, wait until they are detected and just run the second command "zpool import -R /mnt/disks zfs".

     

    You can also do this export/import "before" the array is started, as ZFS is already active immediately after boot. That way you can check/make sure the ZFS pool is mounted correctly before you start the array.

     

  11. Just now, EricM said:

    So just those 2 commands? And afterwards I can delete the ZFS share I made and search for a solution to export the new mountpoint via SMB?

     

     

     

    Yes, after these 2 commands you can type "df" and you should see your ZFS pool mounted under the new mountpoint. The /mnt/user/zfs should then be empty (double-check!!), and you can then remove that directory/share.
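
    Something like this sketch to double-check before removing anything:

    df -h | grep zfs         # the pool should show up under the new mountpoint
    ls -A /mnt/user/zfs      # must come back empty before you delete it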

    Then I would restart the array and check if all your other shares come back, now that ZFS is not blocking/accessing them anymore.

     

    So step by step.

  12. Just now, EricM said:

    How do I change the mount point? Sorry, I don't know any commands; I just did it like SpaceInvaderOne, except for the mount point.

    If your ZFS pool is named "zfs" then you can do that via an export and import of the pool, as the "altroot" parameter can only be set during creation or import.

    For example, to set the mountpoint of the pool named "zfs" to "/mnt/disks/zfs" (so the "altroot" is /mnt/disks) you would use:

    zpool export zfs
    zpool import -R /mnt/disks zfs

     

    Any datasets created underneath will inherit this mountpoint.

  13. Saving the data elsewhere is always good, but you don't have to delete it. Just change the mountpoint to anywhere outside of the array, then restart the array (or reboot) and you will likely be fine again with all your data intact.

  14. 19 minutes ago, EricM said:

    I did it like this before, but then I had no idea how to use the ZFS pool, because I want my ZFS pool to show up in Windows Explorer like the other Unraid shares. So I came up with the idea to create a share for ZFS and then mount the pool in this share, so everything is visible in Windows.

    Yeah, I get why you thought that would be smart, but unfortunately in this case that is not the way, and it is pretty dangerous. Check the main ZFS thread where it is explained how to share your ZFS datasets over SMB. Basically you have to use smb-extras.

     

    P.S. Here is an example of how I shared 2 of my ZFS pools so they are available on my Mac and Windows clients:

    [screenshot: smb-extra entries for the two ZFS pool shares]

  15. Don't mount your ZFS under /mnt/user, as that is the array. Depending on who mounts first, you will only see the array or only the ZFS stuff, or get other weird effects from this conflicting setup.
    Mount it for example under /mnt/disks/zfs instead.

  16. Maybe far off, but the only thing I remember seeing was a video on Linus Tech Tips where he was trying to build an all-NVMe storage array (I believe 24 drives or so) and it required some BIOS and/or kernel adjustments because it was too fast. I'm completely unclear about the details as it was a while ago, but it should be easy to find.
    Otherwise no clue, but it sounds scary.

  17. Never had any issues with ZFS on Unraid since day one (while before that, btrfs was all pain and misery), and I have also been running 2.1 for a while now. All rock solid with multiple different pools (all SSD or NVMe) running all my VMs and Docker. But I run Docker in folders on ZFS; I never had it in an img on ZFS. I do have the libvirt image on ZFS, but that holds nothing compared to a Docker img.
    I guess you have just been unlucky, as I remember your thread with all the issues you had before.

    Even recently I moved all my Docker folders back and forth between pools while swapping and rearranging SSDs, and was amazed by ZFS's possibilities here (a combination of snapshots, send/receive, and data eviction from disks/vdevs when adding/removing disks/vdevs). All smooth sailing and not a single issue.
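
    For reference, the core of such a move is just a snapshot plus send/receive, roughly like this sketch (pool and dataset names are made up; stop the containers first):

    zfs snapshot -r oldpool/docker@move
    zfs send -R oldpool/docker@move | zfs receive -u newpool/docker
    # verify the copy, point Docker at the new location, then clean up the old dataset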

    I have become such a ZFS fanboy. Looooove it.
