glennv

Everything posted by glennv

  1. Thanks, I noticed (I had only checked a few and assumed the rest were also missing). Already added the rest from slackware.pkgs.org.
  2. Never mind. Found the right topic, and I am not the only one unhappy. Going on the hunt (again) for the missing packages so I can dump them in extra. (Note to self: why did I press upgrade...)
  3. Oh crap, just updated to 6.11 and also hit the NerdPack wall. I rely on several tools in there. Where can I get these? I remember a similar situation in the past where I had to manually put them into a directory to have them loaded, but that was a long time ago. Quick tip please:
     - bc (most important, as it is used in lots of bash scripts)
     - screen
     - tmux
     - unrar
     - perl
     - iperf3
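     For reference, this is roughly what I meant by manually loading them; a minimal sketch, where the package file name and download path are just examples, not the exact builds:
         # grab the matching Slackware .txz packages (e.g. from slackware.pkgs.org)
         # and drop them on the flash drive; anything in /boot/extra gets installed at boot
         mkdir -p /boot/extra
         cp /mnt/user/downloads/bc-*.txz /boot/extra/
         # to install one right away without rebooting:
         installpkg /boot/extra/bc-*.txz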
  4. After upgrading to 6.10.0-rc3 (from rc2) I get an error message on the qemu arguments I use to set the rotational rate to 1 on my virtual drives (to make macOS think they are SSDs). These three entries for my three devices in my VM templates now give errors; when they are removed, the VM boots as normal. Did the syntax change?
     <qemu:arg value='-set'/>
     <qemu:arg value='device.sata0-0-3.rotation_rate=1'/>
     <qemu:arg value='-set'/>
     <qemu:arg value='device.sata0-0-4.rotation_rate=1'/>
     <qemu:arg value='-set'/>
     <qemu:arg value='device.sata0-0-5.rotation_rate=1'/>
     Edit: rolled back to rc2 and all is good again. Mainly because I also had a huge system lockup just now, for the first time in more than a year (and within not even an hour on rc3), when rebooting this macOS VM with an AMD GPU. Maybe the AMD reset bug plugin also changed behavior with rc3, but I am not risking it. Safely back on stable rc2.
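     For context, a minimal sketch of how those args sit in the VM's XML; the qemu:commandline block and the xmlns:qemu namespace line are the standard libvirt way to pass extra qemu arguments, and the sata device IDs shown are just the ones from my own template:
         <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
           ...
           <qemu:commandline>
             <qemu:arg value='-set'/>
             <qemu:arg value='device.sata0-0-3.rotation_rate=1'/>
           </qemu:commandline>
         </domain>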
  5. I have lots of foreign movies (old DVD conversions etc.) spread all over my library with VobSubs that have no SRTs available online, so I have to keep the VobSub track. Is there a way to tell Tdarr to skip converting a file if it detects a VobSub track? I cannot risk losing any of these while converting my entire library with Tdarr, and manually checking the thousands of videos that would be affected is a no-go. Any suggestions?
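     For reference, a rough way to at least list the affected files with ffprobe (a sketch, assuming the VobSub tracks show up as the dvd_subtitle codec; the /mnt/user/movies path is just an example):
         # list subtitle codecs in a file; VobSub tracks report as dvd_subtitle
         ffprobe -v error -select_streams s -show_entries stream=codec_name -of csv=p=0 movie.mkv
         # scan a library and print files that contain a VobSub track
         find /mnt/user/movies -name '*.mkv' | while read -r f; do
           ffprobe -v error -select_streams s -show_entries stream=codec_name -of csv=p=0 "$f" \
             | grep -q dvd_subtitle && echo "$f"
         done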
  6. You're welcome. Play with it, and I think with all the info so far you will figure it out (and learn more about ZFS in the process). I also started that way, step by step. ZFS and its commands (only two commands to do everything: zpool <blabla> and zfs <blabla>) are super simple and very well documented on the internet. Have fun learning. I have to do some other stuff, so I won't be replying any more today. Have a great evening.
  7. Then there is still something wrong with the mountpoint, or you manually mounted it using mount commands. You can check that with:
     zpool get altroot zfs
     It should only show the root folder (so without the name of the zpool):
     altroot "/mnt/disks"
     It will then always append the name of the zpool (in your case "zfs") to create the full mount path as /mnt/disks/zfs, and in there you will see the content of the pool. If it does not show that, export again and reimport using the commands given before. The import mounts it for you at the right place, so there is no need to mount it manually. I must admit I have seen it remember the old mountpoint in the altroot value in the past, but a few export/imports solved it in the end, I think. It is best to specify the right mountpoint during pool creation to prevent this, but it should be possible to adjust it using export/import. If not, and it gets messed up at every boot, you can add the two export/import commands to your go file (/boot/config/go) so it exports and imports to the mountpoint you want at boot (see the sketch below).
     If you want, you can create a dataset named finalfolder in the zpool, which will then be mounted under /mnt/disks/zfs/finalfolder. Datasets are sub-devices in zpools with their own characteristics, and they inherit the characteristics of the main pool unless set otherwise. You can simply create one (after you have sorted out the mountpoint to be /mnt/disks/zfs) with the command:
     zfs create zfs/finalfolder
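     A minimal sketch of what those go-file lines could look like (using the pool name "zfs" and the /mnt/disks altroot from this thread; adjust to your setup):
         # /boot/config/go additions: re-import the pool at boot with the altroot we want
         # export first in case it was already auto-imported with the old mountpoint
         zpool export zfs
         zpool import -R /mnt/disks zfs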
  8. Yes indeed, your ghost share is [Backup]. Between brackets you define the name you want the share to be visible under in Windows. So you could reuse that and just change the "path" variable underneath to the actual mountpoint of your ZFS pool (e.g. /mnt/disks/zfs). The only extra lines I have (which you may or may not need) are:
     write list = yourusername
     valid users = yourusername
     with yourusername replaced by the Unraid user you want to have access. The last thing you need to do is open up the permissions of the ZFS mountpoint on Unraid:
     chmod 777 /mnt/disks/zfs
     chown nobody:users /mnt/disks/zfs
     Then restart Samba (or the whole array).
  9. Check my previous screenshot for the content you can add to SMB extras to share your ZFS pool. Maybe the ghost share you see is also in there. So in your case you could add something like this:
     [My-AMAZING-ZFS-SHARE]
     path = /mnt/disks/zfs
     comment =
     browseable = yes
     # Public
     writeable = yes
     read list =
     write list = yourusername
     valid users = yourusername
     vfs objects =
  10. Think about it: everything in the Unraid OS is already dynamic, as it is built up in memory. During boot, ZFS is installed from scratch every time. Then (if set as such) it will import all available pools. In the pool parameters the mountpoint is set (altroot) and used/created during import. With "zpool get all <poolname>" you will see all current parameters for the pool.
  11. Yes. That is the nice thing about ZFS: it's very portable.
  12. P.S. When rebooting, temporarily set the array not to autostart. That also helps, as then you only have to deal with ZFS and not with the array until you are done with the ZFS stuff.
  13. OK, so when you reboot without the drives connected, check with zpool status; there should be no pool (if there is a pool, run the export command). Then plug in the drives and wait until they are recognised (the pool should not auto-import). Then run the import command I gave you, and check again afterwards with zpool status.
  14. If zpool status shows no pools, you can use the import command I gave you directly and skip the export.
  15. What is the output of "zpool status"?
  16. That can have several causes, for example the pool being in use/busy. If you don't know how to fix that, you can do what you did before: shut down, disconnect the two ZFS drives, and start the system. Then, if you start the array as you mentioned, you will have all your normal shares back and the ZFS pool is not imported (as all its drives are missing). Run the export command just to be sure the pool is gone. Then you can plug in the drives, wait until they are detected, and just run the second command "zpool import -R /mnt/disks zfs". You can also do this export/import before the array is started, as ZFS is already active immediately after boot. That way you can check/make sure the ZFS pool is mounted correctly before you start the array (see the sketch below).
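      Roughly the whole sequence in one place (a sketch using the pool name "zfs" and the /mnt/disks altroot from this thread):
          zpool export zfs               # make sure the pool is gone (ignore errors if it was never imported)
          # ... plug the drives back in and wait until they are detected ...
          zpool status                   # should show no pools yet
          zpool import -R /mnt/disks zfs # import with the altroot we want
          df | grep zfs                  # verify it is mounted under /mnt/disks/zfs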
  17. Yes, after these two commands you can type "df" and you should see your ZFS pool mounted under the new mountpoint. The /mnt/user/zfs directory should then be empty (double-check!!) and you can then remove that directory/share. Then I would restart the array and check if all your other shares come back now that ZFS is not blocking/accessing them anymore. So step by step.
  18. If your ZFS pool is named "zfs", then you can do that via an export and import of the pool, as the "altroot" parameter can only be set at pool creation or import time. For example, to set the mountpoint of the pool named "zfs" to "/mnt/disks/zfs" (so the "altroot" is /mnt/disks) you would use:
     zpool export zfs
     zpool import -R /mnt/disks zfs
     Any datasets (if any) created underneath will inherit this mountpoint.
  19. Saving the data is always good, but you don't have to delete it. Just change the mountpoint to anywhere outside of the array, then restart the array (or reboot), and you will most likely be fine again with all your data intact.
  20. Here is everything you need to know about the ZFS plugin on Unraid.
  21. Yeah, I get why you thought that would be smart, but unfortunately in this case that is not the way, and it is pretty dangerous. Check the main ZFS thread, where it is explained how to share your ZFS datasets over SMB. Basically you have to use smb-extras.
      P.S. Here is an example of how I shared two of my ZFS pools so they are available on my Mac and Windows clients.
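      Along those lines, a minimal smb-extra sketch for two pools (the pool names, user name, and the Mac-friendly vfs objects line are illustrative, not my exact settings):
          [nvme-pool]
          path = /mnt/disks/nvme
          browseable = yes
          writeable = yes
          write list = yourusername
          valid users = yourusername
          vfs objects = catia fruit streams_xattr

          [ssd-pool]
          path = /mnt/disks/ssd
          browseable = yes
          writeable = yes
          write list = yourusername
          valid users = yourusername
          vfs objects = catia fruit streams_xattr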
  22. Don't mount your ZFS pool under /mnt/user, as that is the array. Depending on who mounts first, you will only see the array or the ZFS stuff, or get other weird effects from this conflicting setup. Mount it, for example, under /mnt/disks/zfs instead.
  23. Maybe far off, but the only thing I remember seeing was a video on Linus Tech Tips where he was trying to build an all-NVMe storage array (I believe 24 drives or so) and it required some BIOS and/or kernel adjustment, as it was too fast. I am completely unclear about the details, as it was a while ago, but it should be easy to find. Otherwise no clue, but it sounds scary.
  24. Never had any issues with ZFS on Unraid since day one (whereas before that, btrfs was all pain and misery), and I have been running 2.1 for a while now. All rock solid, with multiple different pools (all SSD or NVMe) running all my VMs and Docker. But I run Docker in folders on ZFS; I have never had it in an img on ZFS. I do have the libvirt image on ZFS, but that holds nothing compared to a Docker img. I guess you have just been unlucky, as I remember your thread with all the issues you had before. Even recently I moved all my Docker folders back and forth between pools while swapping and rearranging SSDs, and was amazed by ZFS's possibilities here (the combination of snapshots, send/receive, and data evacuation from disks/vdevs when adding/removing them). All smooth sailing and not a single issue. I have become such a ZFS fanboy. Looooove it.
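      For anyone curious, moving a Docker folder between pools boils down to something like this (a rough sketch; the pool and dataset names are examples, not my exact layout):
          # snapshot the dataset on the old pool and replicate it to the new pool
          zfs snapshot oldpool/docker@move
          zfs send oldpool/docker@move | zfs receive newpool/docker
          # after pointing Docker at the new location and verifying, clean up the old copy
          zfs destroy -r oldpool/docker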