glennv

Members
  • Posts

    299
  • Joined

  • Last visited

Converted

  • Gender
    Male
  • URL
    https://posttools.tachyon-consulting.com
  • Location
    Amsterdam


glennv's Achievements

Contributor (5/14)

Reputation: 56

  1. Thanks, I noticed (I had only checked a few and assumed the rest were also missing). Already added the rest from slackware.pkgs.org.
  2. Never mind. Found the right topic, and I am not the only one unhappy. Going on the search (again) for the missing packages to dump them in extra. (Note to self: why did I press upgrade...)
  3. Oh crap, just updated to 6.11 and also hit the NerdPack wall. I rely on several tools in there. Where can I get these? I remember a similar situation in the past where I had to manually put them into a directory to have them loaded, but that was a long time ago. Quick tip please: bc (most important, as it is used in lots of bash scripts), screen, tmux, unrar, perl, iperf3.
  4. After upgrading to 6.10.0-rc3 (from rc2) I get an error message on the qemu arguments I use to set the rotational rate to 1 on my virtual drives (to make macOS think they are SSDs). These three entries for my three devices in my VM templates now give errors; when they are removed, the VM boots as normal. Did the syntax change?
     <qemu:arg value='-set'/>
     <qemu:arg value='device.sata0-0-3.rotation_rate=1'/>
     <qemu:arg value='-set'/>
     <qemu:arg value='device.sata0-0-4.rotation_rate=1'/>
     <qemu:arg value='-set'/>
     <qemu:arg value='device.sata0-0-5.rotation_rate=1'/>
     Edit: rolled back to rc2 and all is good again. Mainly because I also had a huge system lockup just now, for the first time in more than a year (and within less than an hour on rc3), when rebooting this macOS VM with an AMD GPU. Maybe the AMD reset bug plugin also changed behavior with rc3, but I am not risking it. Safely back on stable rc2.
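For context on the post above: qemu:arg entries like these only take effect when the domain XML declares libvirt's QEMU namespace and wraps them in a qemu:commandline block. A minimal sketch, assuming the poster's device names; the namespace URL is libvirt's standard one:

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... the rest of the VM template ... -->
  <qemu:commandline>
    <!-- Tell macOS the virtual SATA disk is an SSD (rotation rate 1) -->
    <qemu:arg value='-set'/>
    <qemu:arg value='device.sata0-0-3.rotation_rate=1'/>
  </qemu:commandline>
</domain>
```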
  5. I have lots of foreign movies (old DVD converts etc.) spread all over my library with VobSubs that have no SRTs available online, so I have to keep the VobSub track. Is there a way to tell tdarr to skip converting a file if it detects a VobSub track? I cannot risk losing any of these while converting my entire library with tdarr, and manually checking the thousands of videos that would be affected is a no-go. Any suggestions?
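One way to find the affected files up front, outside tdarr, is to scan with ffprobe: VobSub tracks are reported with the codec name dvd_subtitle. A hedged sketch; the helper name and library path are placeholders:

```shell
# Hypothetical helper: succeeds (exit 0) if the file contains a VobSub track.
# ffprobe lists subtitle-stream codec names one per line; VobSub = dvd_subtitle.
has_vobsub() {
  ffprobe -v error -select_streams s \
    -show_entries stream=codec_name -of csv=p=0 "$1" \
    | grep -q '^dvd_subtitle$'
}

# Example sweep (path is a placeholder): list files tdarr should skip.
# find /mnt/user/movies -name '*.mkv' | while read -r f; do
#   has_vobsub "$f" && echo "SKIP (vobsub): $f"
# done
```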
  6. You're welcome. Play with it, and I think with all the info so far you will figure it out (and learn more about ZFS in the process). I also started that way, step by step. ZFS and its commands (only two commands to do everything: zpool <blabla> and zfs <blabla>) are super simple and very well documented on the internet. Have fun learning. I have to do some other stuff, so I won't be replying any more today. Have a great evening.
  7. Then there is still something wrong with the mountpoint, or you mounted it manually using mount commands. You can check with:
     zpool get altroot zfs
     It should show only the root folder (so without the name of the pool): altroot "/mnt/disks". ZFS will then always append the name of the zpool (in your case "zfs") to form the full mount path /mnt/disks/zfs, and in there you will see the content of the pool.
     If it does not show that, export again and reimport using the commands given before. The import mounts the pool at the right place for you, so there is no need to mount it manually. I must admit I have seen it remember the old mountpoint in the altroot value in the past, but a few export/import cycles solved it in the end, I think. It is best to specify the right mountpoint during pool creation to prevent this, but it should be possible to adjust it using export/import. If that fails and it gets messed up at every boot, you can add the two export/import commands to your go file (/boot/config/go) so it exports and reimports to the mountpoint you want at boot.
     If you want, you can create a dataset named finalfolder in the zpool, which will then be mounted under /mnt/disks/zfs/finalfolder. Datasets are sub-devices in zpools with their own characteristics; they inherit the characteristics of the main pool unless set otherwise. Once the mountpoint is sorted out to be /mnt/disks/zfs, you can create one with:
     zfs create zfs/finalfolder
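The export/reimport cycle described in that post can be sketched as a short script. This is a hedged sketch only: the pool name "zfs" and altroot /mnt/disks come from the thread, but the exact commands referenced earlier in it are not in this excerpt, so this assumes the standard export/import pair with an altroot set at import time:

```shell
# Sketch of re-mounting the pool "zfs" under /mnt/disks (names from the thread).
remount_pool() {
  zpool export zfs                        # unmount and release the pool
  zpool import -o altroot=/mnt/disks zfs  # reimport; mounts at /mnt/disks/zfs
  zpool get altroot zfs                   # verify: VALUE should be /mnt/disks
  zfs create zfs/finalfolder              # optional dataset -> /mnt/disks/zfs/finalfolder
}

# Only run for real when the ZFS tools are actually present on this system.
if command -v zpool >/dev/null 2>&1; then
  remount_pool
fi
```

Putting the export/import pair in /boot/config/go, as the post suggests, runs it at every boot.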
  8. Yes indeed. Your ghost share is [Backup]. Between the brackets you define the name the share will be visible by in Windows. So you could reuse that and just change the "path" variable underneath it to the actual mountpoint of your ZFS pool (e.g. /mnt/disks/zfs). The only extra thing I have (which you may or may not need) is:
     write list = yourusername
     valid users = yourusername
     with yourusername replaced by the Unraid user you want to give access. The last thing you need to do is open up the permissions of the ZFS mountpoint on Unraid:
     chmod 777 /mnt/disks/zfs
     chown nobody:users /mnt/disks/zfs
     Then restart Samba (or the whole array).
  9. Check my previous screenshot for the content you can add to SMB Extras to share your ZFS pool. Maybe the ghost share you see is also in there. So in your case you could add something like this:
     [My-AMAZING-ZFS-SHARE]
     path = /mnt/disks/zfs
     comment =
     browseable = yes
     # Public
     writeable = yes
     read list =
     write list = yourusername
     valid users = yourusername
     vfs objects =
  10. Think about it: everything in Unraid OS is already dynamic, as it is built up in memory. During boot, ZFS is installed from scratch every time. Then (if set as such) it will import all available pools. In the pool parameters the mountpoint is set (altroot) and used/created during import. With zpool get all <poolname> you will see all current parameters for the pool.
  11. Yes. That is the nice thing about ZFS: it is very portable.
  12. P.s. When rebooting, temporarily set the array not to autostart. That also helps, as you then only have to deal with ZFS and not with the array until you are done with the ZFS work.
  13. OK, so when you reboot without the drives connected, check with zpool status; there should be no pool (if there is one, run the export command). Then plug in the drives and wait until they are recognised (the pool should not auto-import). Then run the import command I gave you, and check again afterwards with zpool status.
  14. If zpool status shows no pools, you can use the import command I gave you directly and skip the export.