glennv

Everything posted by glennv

  1. Seems it got updated a few days ago and now works on rc1. Thanks!!!
  2. Are you using this literally, or did you replace the file name with your actual Unraid server name as part of the name? MYSERVER_unraid_bundle.pem. It's also case sensitive, btw. Also test the command to check your SSL cert, e.g. in my case (my hostname is TACH-UNRAID):
     > hostname
     TACH-UNRAID
     > openssl x509 -noout -subject -nameopt multiline -in /boot/config/ssl/certs/TACH-UNRAID_unraid_bundle.pem
     subject=
         commonName = *.tachyon-consulting.com
     And under Management Access I have configured my domain. The rc.nginx script compares the result of the above openssl command with the Local TLD and, if they are not equal, it removes the file and replaces it with a regenerated one. The patch (extra line) removes the * from the first value so they now match. Maybe you have not filled in the Local TLD field, or your SSL cert is not correct. So check both. (A quick way to eyeball the two values is sketched below.)
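     If you just want to compare the two values yourself, here is a rough sketch (the domain is my example and this is only a manual check, not the exact rc.nginx logic):
        # extract the commonName from the bundle cert
        CERT=/boot/config/ssl/certs/$(hostname)_unraid_bundle.pem
        CN=$(openssl x509 -noout -subject -nameopt multiline -in "$CERT" | sed -n 's/.*commonName *= *//p')
        echo "cert CN : $CN"                                      # e.g. *.tachyon-consulting.com
        echo "expected: $(hostname).tachyon-consulting.com"       # the wildcard should cover this name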
  3. Either wait for RC2 or add some code to your /boot/config/go file to do it on the fly. Without resorting to code kung fu, the easiest would be (a sketch of the go file addition follows below):
     1. Edit /etc/rc.d/rc.nginx to make it work (make sure nginx can restart properly and your change did not break it!!!)
     2. Make a copy of it, for example to /boot/config/rc.nginx.tempfix
     3. Add a line to the go file to copy /boot/config/rc.nginx.tempfix back to /etc/rc.d/rc.nginx
     - Make sure you can always access your server via SSH (in case anything breaks the nginx GUI)
     - Remember to remove the extra line in the go file "before" "any" Unraid upgrade!!!!
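     A minimal sketch of the go file addition, assuming you named the copy rc.nginx.tempfix as above:
        # in /boot/config/go - temporary fix, remove before any Unraid upgrade!
        # restore the patched rc.nginx before emhttp starts
        cp /boot/config/rc.nginx.tempfix /etc/rc.d/rc.nginx
        chmod +x /etc/rc.d/rc.nginx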
  4. For me, no. I am using a 5700XT. To be honest, I have also not heard yet of the reset bug in the 6000 series, but the symptoms were similar for the OP, so it never hurts to install and try it. So either the plugin or something else he did solved his issues, apparently.
  5. Yeah, as @ich777 mentioned, I have a pretty stubborn server that until his patch never wanted to work properly with any previous AMD reset bug patch or other tricks. Since using his patch/plugin I have zero problems and can restart, kill, do whatever I want with the VM without any issues with the AMD card. So extremely highly recommended, and I am super thankful for his amazing job. Nothing special to do other than just install the plugin.
  6. This morning it seems to work fine again, at least for me. Thanks
  7. Yeah, same here. Can't use Tapatalk on my iPad anymore for my daily Unraid forum dose.
  8. There is a bug in /etc/rc.d/rc.nginx in handling wildcard certificates. I reported it to @ljm42 and it will be fixed in rc2. For now: around line 354 you see the code:
     SUBJECT=$(openssl x509 -noout -subject -nameopt multiline -in $SSL/ce....
     Right after that, add this:
     SUBJECT=${SUBJECT/\*/$LANNAME} # support wildcard certs
     Put your cert in place and restart nginx with:
     /etc/rc.d/rc.nginx restart
     (A quick check that the fix stuck is sketched below.)
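     To confirm nginx did not regenerate your cert after the restart, you can re-check the bundle; the path below assumes the default cert location from earlier in this thread:
        /etc/rc.d/rc.nginx restart
        # commonName should still show your wildcard, e.g. *.yourdomain.com
        openssl x509 -noout -subject -nameopt multiline -in /boot/config/ssl/certs/$(hostname)_unraid_bundle.pem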
  9. You might need this trick. Typically for non-server boards.
  10. Waiting eagerly for the announced blog post with instructions for testers to play with SMB multichannel, to finally get all the juice from my 10G. Hoping it's not a Windows-only thing...... (OSX, Linux only user)
  11. Nice. Passed the test with flying colors, I would say.
  12. Cool. It's the test of all tests. If ZFS passes this with flying colours, you will be a new ZFS fanboy, I would say. I am a total ZFS fanboy and proud of it. Keep us posted.
  13. As Jortan mentioned, a non-perfect SSD will slow this process down. On a healthy SSD of that size this only takes a few hours (or less). Heck, even spinning rust, which I did last week to replace 4 TB from a zfs mirror used for backups, took less than half a day. So maybe time to invest in a few new SSDs. Also, running active VMs etc. on it will not help the speed. edit: on rereading I see you are talking normal drives and not SSDs. Sorry for that. Still pretty slow imho, so same advice, e.g. get a nice fresh-smelling new drive when replacing bad drives. Don't replace bad with questionable unless you like to live on the edge. Running VMs while resilvering on normal drives is about as worst case as you can get, so be patient. Should finish in a week or so.
  14. The common recommendation is there not so zfs doesn't get confused, but for the humans operating the system. If you have only a few drives it is easy not to get confused, but if you have lots of drives it is very helpful to stick with the recommendation.
  15. Besides using the full path for actual drives (so the new replacement drive sdi), it also needs the pool name, so: zpool replace poolname origdrive newdrive. Origdrive can be a funny name, as you see, when the actual drive is gone. I would advise always addressing drives by their /dev/disk/by-id/xxxxxx address instead. Go to that directory and you will find your drives and the correct ids there. These unique ids will never change, while the /dev/sd? identifiers can change after a boot or when adding/removing drives. Prevents accidentally wiping the wrong drive. You can check the status of the replacement with zpool status. Will take a while obviously... (a worked example follows below)
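     A sketch with made-up names (the pool name tank and the by-id value are placeholders, substitute your own):
        # list the stable ids and find the one belonging to the new drive
        ls -l /dev/disk/by-id/ | grep -v part
        # replace the failed/removed device with the new one, addressed by id
        zpool replace tank old-device-name /dev/disk/by-id/ata-WDC_WD40EFRX_WD-XXXXXXXX
        # watch the resilver progress
        zpool status -v tank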
  16. If I want to see exactly what is using up space, it's best to do that on Unraid itself from the command line. The following will show you a drilldown of the directory sizes at the highest directory level. Go into deeper directory levels when you find what is reported as eating up the space. This will also include hidden files etc.
     du -h -d1 /mnt/cache
     To list files and sizes, including hidden files, in a directory:
     ls -lah /mnt/cache/etc/etc
  17. Happy you got it working and good info for new plex installers indeed.
  18. Interesting thought indeed. I migrated to zfs some time ago and did not start fresh, so that may be true. So you may try building on the array and then moving it to zfs. Keep us posted.
  19. I run every docker from zfs filesystems. Just make sure the access mode of the zfs paths you add to Plex is set to "read/write - slave". You can reach that setting if you click on the edit button next to the path you define in the docker settings. For any non-array filesystem you set it like that. (A rough command-line equivalent is sketched below.)
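     For reference, this is roughly what that GUI setting translates to on the docker command line; the host path and image name below are just examples:
        # example only: mount a zfs dataset into a container with slave bind propagation
        docker run -d --name=plex \
          -v /mnt/zfspool/media:/media:rw,slave \
          plexinc/pms-docker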
  20. @avlec, I see issues with linking in the logs. As you moved it from the cache (direct filesystem access) to /mnt/user (fuse filesystem), maybe the function that it wants to use for linking is not working. I suggest moving it back to /mnt/cache, or if you prefer, directly onto a single array drive using /mnt/diskX, and see if that solves the issue, to confirm it is related to the underlying filesystem. I also use postgresql dockers on unraid for my DaVinci Resolve database, now running on zfs and before that directly on /mnt/cache. Never tried it on the array filesystem /mnt/user. (A quick way to test the linking yourself is sketched below.)
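     If you want to check whether hard linking is the problem, a quick manual test (the paths are placeholders, use a share that exists on both):
        # try a hard link on the fuse path vs. directly on the cache
        touch /mnt/user/appdata/test-file && ln /mnt/user/appdata/test-file /mnt/user/appdata/test-link
        touch /mnt/cache/appdata/test-file && ln /mnt/cache/appdata/test-file /mnt/cache/appdata/test-link
        # if the first ln errors while the second succeeds, the underlying filesystem layer is the culprit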
  21. Yeah, I am sure hacking is involved, hence my question. I have already been using flashed Intel 10G and 1G cards for years, with hacked SmallTree drivers so they recognise these as SmallTree cards, to run them in OSX. But this SR-IOV is a new mode. My guess, based on the test and how I used to hack it, is the different device id that is presented to the OS, as I mentioned. Not sure what Apple sells nowadays themselves on 10G, but there are a few companies (like SmallTree, SANLink, etc) that have 10G (and higher) cards and their own drivers. But SR-IOV will be new for these as well. It's a new thing we are playing with here, with a very small user base of like-minded enthusiasts.
  22. Got SR-IOV working nicely with my X540 dual 10G card. Works great in Linux machines, but when I try to use a VF in OSX, where I have to use the SmallTree driver, it does not work, as the VF has a different device id (1515, instead of 1528 for the main device), which can be read but not changed from, in my case, /sys/bus/pci/devices/0000:82:00.0/sriov_vf_device. So the driver does not even recognise the VF. Anyone got something working on OSX and wants to share the process? (The sysfs commands I used to check this are sketched below.)
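     For anyone wanting to reproduce the check, this is roughly how the VFs are created and how the VF device id is read via sysfs (the PCI address and VF count are from my setup, adjust to yours):
        # create two virtual functions on the X540 port (requires root)
        echo 2 > /sys/bus/pci/devices/0000:82:00.0/sriov_numvfs
        # device id the VFs present to the guest (read-only: 1515 for the X540 VF vs 1528 for the physical function)
        cat /sys/bus/pci/devices/0000:82:00.0/sriov_vf_device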
  23. I don't use Krusader, but I installed the binhex krusader docker for a quick test, and I just define the path to a newly created test zfs dataset under the Host Path variable. Then click the edit button next to it and set the access mode to read/write - slave. Then when you start the docker, you will find the content under the /media folder in the docker. All working as normal. The trick may be the access mode. Forgot exactly why, but I remember I need it for anything outside the array.
  24. Nope, can't say I have that. Seems all fluid. Did a quick performance test with DaVinci Resolve and the passed-thru 5700XT, and it seems exactly the same or even slightly faster than my normal production Resolve render VM on Catalina. Have not spent much time with it yet, just base testing my own code against the new OS, but as little seems to have changed compared to Big Sur, all is fine there. I remember from my old Windows 10 VMs, when I used them for gaming, I did have these stutters, but they were typically related to passed-thru USB and/or network polling latency issues. On OSX I have not seen this yet. But then again, currently I have no USB card passed thru to this test VM. If I notice anything in the coming days I will let you know. Probably best to wait for the next beta anyway. Also, I run Intel and you Ryzen, so check for Ryzen-related issues. I do remember seeing some reddit posts on Ryzen patches.