glennv

Members
  • Content Count

    262
  • Joined

  • Last visited

Community Reputation

45 Good

About glennv

  • Rank
    Member

Converted

  • Gender
    Male
  • URL
    https://posttools.tachyon-consulting.com
  • Location
    Amsterdam


  1. Cool. It's the test of all tests. If ZFS passes this with flying colours, you will be a new ZFS fanboy, I would say. I am a total ZFS fanboy and proud of it. Keep us posted.
  2. As Jortan mentioned, a non-perfect SSD will slow this process down. On a healthy SSD of that size this only takes a few hours (or less). Heck, even spinning rust, which I did last week when replacing a 4 TB drive in a ZFS mirror used for backups, took less than half a day. So maybe it's time to invest in a few new SSDs. Also, running active VMs etc. on it will not help the speed. Edit: on rereading I see you are talking about normal drives and not SSDs. Sorry for that. Still pretty slow imho, so same advice, e.g. get a nice fresh-smelling new drive when replacing bad drives. Don't replace bad with
  3. The common recommendation is not there so ZFS does not get confused, but for the humans operating the system. If you have only a few drives it's easy not to get confused, but if you have lots of drives it is very helpful to stick with the recommendation.
  4. Besides using the full path for the actual drives (so the new replacement drive sdi), it also needs the pool name, so: zpool replace poolname origdrive newdrive (a worked example is sketched after this list). Origdrive can be a funny name, as you see when the actual drive is gone. I would advise always addressing drives by their /dev/disk/by-id/xxxxxx address instead. Go to that directory and you will find your drives and the correct id there. These unique ids will never change, while the /dev/sd? identifiers can change after a boot or when adding/removing drives. That prevents accidentally wiping the wrong drive. You can check the status of the replacement
  5. If I want to see exactly what is using up space, it's best to do that on unraid itself from the command line. The following will show you a drilldown of the directory sizes at the highest directory level; go to deeper directory levels when you find what is reported as eating up the space. This will also include hidden files etc. du -h -d1 /mnt/cache To list files and sizes, including hidden files, in a directory: ls -lah /mnt/cache/etc/etc (a sorted variant is sketched after this list).
  6. Happy you got it working, and good info for new Plex installers indeed.
  7. Interesting thought indeed. I migrated to ZFS some time ago and did not start from fresh, so that may be true. So you may try building on the array and then moving it to ZFS. Keep us posted.
  8. I run every docker from ZFS filesystems. Just make sure the access mode of the ZFS paths you add to Plex is set to "read/write - slave". You can reach that setting by clicking the edit button next to the path you define in the docker settings. For any non-array filesystem you set it like that (a rough docker-run equivalent is sketched after this list).
  9. @avlec, I see issues with linking in the logs. As you moved it from the cache (direct filesystem access) to /mnt/user (FUSE filesystem), maybe the function it wants to use for linking is not working. I suggest moving it back to /mnt/cache, or if you prefer, directly onto a single array drive using /mnt/diskX, and seeing if that solves the issue, to confirm it's related to the underlying filesystem (a quick link test is sketched after this list). I also use postgresql dockers on unraid for my DaVinci Resolve database, now running on ZFS and before that directly on /mnt/cache. Never tried it on the array filesystem /mnt/user
  10. Yeah, I am sure hacking is involved, hence my question. I have already been using flashed Intel 10G and 1Gb cards for years, with hacked SmallTree drivers so they recognise these as SmallTree cards, to run them in OSX. But this is a new mode with this SR-IOV. My guess, based on the test and how I used to hack it, is, as I mentioned, the different device id that is presented to the OS. Not sure what Apple sells nowadays themselves on 10G, but there are a few companies (like SmallTree, SANLink, etc.) that have 10G (and higher) cards and their own drivers. But SR-IOV will be new f
  11. Got SR-IOV working nicely with my X540 dual 10G card. It works great in Linux machines, but when I try to use a VF in OSX, where I have to use the SmallTree driver, it does not work because the VF has a different device id (1515, instead of 1528 for the main device), which can be read, but not changed, from (in my case) /sys/bus/pci/devices/0000:82:00.0/sriov_vf_device. So the driver does not even recognise the VF. Anyone got something working on OSX and wants to share the process? (The sysfs checks I used are sketched after this list.)
  12. I don't use Krusader, but I installed the binhex krusader docker for a quick test, and I just define the path to a newly created test ZFS dataset under the Host Path variable. Then click the edit button next to it and set the access mode to read/write - slave. Then when you start the docker, you will find the content under the /media folder in the docker. All works as normal. The trick may be the access mode. I forgot exactly why, but I remember I need it for anything outside the array.
  13. Nope, can't say I have that. Seems all fluid. I did a quick performance test with DaVinci Resolve and the passed-through 5700XT, and it seems exactly the same as or even slightly faster than my normal production Resolve render VM on Catalina. Have not spent much time with it yet, just base-testing my own code against the new OS, but as little seems to have changed compared to Big Sur, all is fine there. I remember from my old Windows 10 VMs, when I used them for gaming, that I did have these stutters, but they were typically related to passed-through USB and/or network polling latency issues. On OSX i
  14. Solved the mysterious VNC bootloop issue. 1. Log in with a GPU passed through (I used RDP, as my GPU showed black at the time). 2. Set autologin in the user options (there seems to be an issue with the login process looping in some cases). 3. Reboot without the GPU passed through, and now VNC works fine.
  15. Ah, just when I wanted to call it a day, it started working. The issue was related to me trying to boot from an earlier-created Monterey update partition, from before I messed with OpenCore/NVRAM clearing etc. while booted in the Big Sur partition. So it was likely not valid anymore (NVRAM, seal, stuff like that). I just booted into Big Sur again and re-ran the updater, which recreated that partition, and voila, the update is running fine now with no boot issues. The real issue in "my" OpenCore config was, as I mentioned, the max value limit (MinKernel/MaxKernel) on the kernel patches (a quick way to inspect those values is sketched below). The rest was no
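
For post 4 above, a minimal sketch of that replace workflow using the stable by-id paths; the pool name "tank" and the id "ata-NEWDISK_SERIAL" are made-up examples, so substitute your own:

  ls -l /dev/disk/by-id/                    # find the stable id of the new drive
  zpool status tank                         # note the name/guid shown for the failed device
  zpool replace tank old-device-or-guid /dev/disk/by-id/ata-NEWDISK_SERIAL
  zpool status -v tank                      # watch the resilver progress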
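
For post 5, the same drilldown with the output sorted by size (GNU sort -h understands the human-readable units that du -h prints); the appdata path is just an example of going one level deeper:

  du -h -d1 /mnt/cache | sort -h            # largest directories end up at the bottom
  du -h -d1 /mnt/cache/appdata | sort -h    # drill into whatever turns out to be the big one
  ls -lah /mnt/cache/appdata                # list files, including hidden ones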
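
For post 8, my understanding is that the "read/write - slave" access mode boils down to bind-mount propagation on the docker run line; a hand-rolled equivalent would look roughly like this, with the dataset path, container name and image only as examples:

  docker run -d --name plex \
    -v /mnt/zfspool/media:/media:rw,slave \
    lscr.io/linuxserver/plex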
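
For post 9, a quick way to see whether linking behaves differently on the fuse path versus the cache path is simply to try it by hand (the test file names are throwaway, and the application may use a different kind of link than this):

  touch /mnt/user/appdata/linktest
  ln /mnt/user/appdata/linktest /mnt/user/appdata/linktest.hard    # hard link on the fuse mount
  ln -s /mnt/user/appdata/linktest /mnt/user/appdata/linktest.sym  # symlink for comparison
  rm /mnt/user/appdata/linktest*                                   # clean up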
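
For posts 10 and 11, the sysfs attributes involved; the PCI address 0000:82:00.0 comes from that post, and the VF count of 2 is just an example:

  cat /sys/bus/pci/devices/0000:82:00.0/sriov_totalvfs     # how many VFs the card supports
  echo 2 > /sys/bus/pci/devices/0000:82:00.0/sriov_numvfs  # create two VFs
  cat /sys/bus/pci/devices/0000:82:00.0/sriov_vf_device    # device id the VFs present (read-only)
  lspci -nn | grep -i ethernet                              # the VFs show up with their own id (1515 vs 1528 here)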
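
For post 15, one way to double-check the MinKernel/MaxKernel values on a kernel patch entry in an OpenCore config.plist is PlistBuddy from inside the VM; the patch index 0 and the EFI mount path are examples and will differ per setup:

  /usr/libexec/PlistBuddy -c "Print :Kernel:Patch:0:MinKernel" /Volumes/EFI/EFI/OC/config.plist
  /usr/libexec/PlistBuddy -c "Print :Kernel:Patch:0:MaxKernel" /Volumes/EFI/EFI/OC/config.plist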