EMKO

Members
  • Posts: 186

Everything posted by EMKO

  1. > Either way I am going to stick with Nvidia cards. The one other HUGE advantage I am seeing (maybe just specific to me) is that I can force shutdown a VM with an Nvidia card and start it up again without issue. With the AMD cards this always caused a kernel panic in the VM, and the only way to fix it was to restart the server. All 3 of my HD6450s (3 different vendors) exhibited this behavior.

     Every time I stop my OpenELEC VM I cannot get it to work again unless I reboot the Unraid machine. I am on an Nvidia 430; how can I tell if the VM has a kernel panic?
  2. Really, it's a hardware problem? Ah well, this is useless then. It still does not make sense how it can work once but not again.
  3. I have to use 2 GPUs: one, a GeForce 210, is for Unraid, and the other, a GeForce 430, I use for OpenELEC. I can't get Unraid to pass a single card through without onboard graphics; that's why I need 2. It does not matter if the VM is auto-started or manually started: once it starts, I cannot get the GPU to pass through again if I stop the VM. Hopefully this and the 1-GPU problem get fixed, otherwise it's not looking good. In the meantime, is there anything I can do to fix this?
  4. I can't get OpenELEC to do GPU passthrough more than once before having to reboot my Unraid machine; is there a way to fix this? I can still SSH into the OpenELEC VM, I just can't get the GPU working again. This would not be a problem, but sometimes OpenELEC crashes or freezes up, and having to reboot the whole Unraid server is not something I want to do.
  5. I couldn't figure out TestDisk well enough to fix it, so now I am running PhotoRec. It's recovering lots of files but no directories, so I will have to search through them and find all the correct files.
  6. I added a new SSD to my array to be a second, faster cache drive. When I booted up, I added a second slot to my cache list, added the new SSD, and started the array. This is when Unraid wanted me to format the drive; I was stupid and didn't read which drive it wanted to format, and it formatted the cache drive in my first slot. Is there any way to recover the data?
  7. Sorry, stupid question: what is a virt ISO and where do I get one for pfSense? I just installed a VM on Unraid to try it, and it's working; I just need to get the NIC.
  8. Thanks guys, hopefully it all works out
  9. Would this fix the problem with Nvidia GPUs where passthrough won't work if there are no onboard graphics?
  10. OK, thanks. I ordered a 2-port NIC. I have just never made a VM manually on Unraid before; I hope it's not too hard.
  11. Ahh, I have an Nvidia card with no onboard graphics and couldn't figure out why the passthrough was not working. Any updates on this?
  12. I am thinking it would be nice to use just one machine to do what two are doing right now. Would pfSense be able to manage Unraid's connection as well? For example, if I have Dockers that download from or use the internet on the Unraid server, plus a pfSense VM, would pfSense be able to apply, say, QoS to Unraid, or only to the other LAN connections? Would I need 3 Ethernet ports: 2 going to a switch so I can still access Unraid before pfSense comes up, and one for WAN? So before pfSense is up, Unraid would have a LAN connection but no internet access until pfSense is running? Or can it be done with 2 ports? Is there a guide for doing something like this?
  13. hugbug, the guy who created NZBGet, made this for me and it works great; save it as a .sh file if anyone needs something like that.

      #!/bin/sh
      #######################################
      ### NZBGET POST-PROCESSING SCRIPT ###

      # Change default subtitles to 0.
      #
      # The script changes default subtitles to 0 on WEBDL tv shows.
      #
      # The long description can have multiple paragraphs when needed. Like
      # this one.

      ### NZBGET POST-PROCESSING SCRIPT ###
      #######################################

      POSTPROCESS_SUCCESS=93
      POSTPROCESS_ERROR=94

      echo "Processing..."
      cd "$NZBPP_DIRECTORY" || exit $POSTPROCESS_ERROR
      find . -name "*.mkv" -exec mkvpropedit {} --edit track:s1 --set flag-default=0 \;
      exit $POSTPROCESS_SUCCESS
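A quick way to sanity-check a script like the one above is a dry run that swaps mkvpropedit for echo, so you can see which files the find pattern would touch without modifying anything. The temp tree and file names below are made up purely for illustration:

```shell
#!/bin/sh
# Build a throwaway tree and preview which files the script's find
# pattern would hand to mkvpropedit (names are illustrative).
tree=$(mktemp -d)
mkdir -p "$tree/season1"
touch "$tree/ep1.mkv" "$tree/season1/ep2.mkv" "$tree/notes.txt"
cd "$tree"

# Same find invocation as the script, but echo stands in for
# mkvpropedit, so nothing is edited; only .mkv files match -name.
find . -name "*.mkv" -exec echo "would edit: {}" \;
hits=$(find . -name "*.mkv" | wc -l)
echo "$hits mkv files would be edited"
```

Once the list looks right, swapping echo back out for mkvpropedit makes it the real run.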
  14. I don't need subtitles to default to ON on my TV shows, as they are all in English, and for some reason WEBDL releases all have subtitles that default to ON. I found a way to change it to off, but I want to make a post-processing script out of it: find . -name "*.mkv" -exec mkvpropedit {} --edit track:s1 --set flag-default=0 \; How can I make something like that into a script that will work for NZBGet? I think for a single file it should be mkvpropedit filename.mkv --edit track:s1 --set flag-default=0, so now how do I get NZBGet to do this?
  15. Very buggy: the keyboard does not work, some tabs are small with "..." instead of names, and after using it for 5 minutes I got a black screen. No matter what I do, I can't get the program to open again. Stopping and restarting the Docker again results in a black screen after the xrdp login dialog.
  16. That's my current setup; it works with CouchPotato, as it has destination and move folder settings ("from" and "to"). Sonarr only has the location folder, so it will always do a move to /mnt/user/tv shows/, but since that's a cached share and the file is already on the cache drive, it slows down the drive for everything else. So I thought it was an Unraid problem, but it looks like, because of how the Linux OS works, it will always do a copy instead of a move. Seems like another reason to dislike Sonarr to me, lol; it's the best at what it does, and there's nothing else out there.
  17. You probably just don't notice it; it has nothing to do with Docker, and it was already explained why the read/write check and delete happens.
  18. > Agreed. Probably belongs somewhere in the unRAID wiki about volume mappings with Docker. The moving of files within the cache is primarily the result of containers, agreed?

      No, this is a problem with the OS; I had this even on version 5. You can test it in the terminal with the mv command. It's good that we now know this is something they will be able to fix in a future Unraid version.
  19. > We have some big plans for user shares in a future release (post 6.0), but they are premature to discuss here. One of those plans involves a change that should solve this issue once and for all. For now, this issue isn't causing problems, just a bit of inefficiency, which isn't to say it shouldn't be fixed, but it does not make it a requirement for 6.0.

      Sounds good. I didn't mean to offend you guys; I just thought it was something Unraid was doing, not the OS. Thanks.
  20. That's my current setup; it works with CouchPotato, as it has destination and move folder settings ("from" and "to"). Sonarr only has the location folder, so it will always do a move to /mnt/user/tv shows/, but since that's a cached share and the file is already on the cache drive, it slows down the drive for everything else. So I thought it was an Unraid problem, but it looks like, because of how the Linux OS works, it will always do a copy instead of a move.
  21. > Isn't that because ultimately you're telling NZBGet to move files from one container path to another container path (two separate mount points)? The base OS in the container has no clue that they're on the same drive, because of the two different mount points. It's really no different than using Windows to move a file from a network share on local drive D to another folder on local drive D without referencing the network share name: it will also take forever to do.
      >
      > If I understand, the issue is basically moving data from a cache location to a user share that is cache-enabled? Couldn't you just fix this by pointing the move to /mnt/cache/share, or would that cause problems?
      >
      > If the issue is what I believe it is (2 separate container paths, /mnt/cache/downloads and /mnt/cache/sharename), you would fix it by using a single container path (/mnt/cache) mapped to something like cachedrive, and then tell NZBGet to use /cachedrive/downloads and /cachedrive/sharename. That way the base OS will realize they are all on the same drive and the move will be instantaneous. Mind you, I've never tried this, because #1 the slowdown to me is negligible, and I personally don't like mapping complete drives into a Docker; it removes one of the benefits of running a Docker in the first place.
      >
      > No, this is an issue with Unraid. You can test this in the console by using mv: use mv to move a file around on the cache drive and it will be instant, then use mv to move a file from the cache drive to a cached share like /mnt/user/cachedshare and it will cause a read and write.
      >
      > I'm going to have to disagree with you on that one. You're using something like mv /mnt/cache/downloads/somefile /mnt/user/cachedshare/somefile; you're referencing two different mount points in the mv command (cache and user). It's not unRaid's fault; it's how all the OSes out there work.

      Oh, I didn't know that. Is this possible: /mnt/cache/share, and inside of that it would show the files from /mnt/user/share? That way the app will see the files and move them to /mnt/cache without reading/writing. Would this mess up the mover, as it would think all the files have to be moved?
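The mv behavior in this exchange can be sketched in a few lines. This is only an illustration with throwaway temp files, not the actual cache shares: within one filesystem, mv is a rename() and the file's inode number survives, so no data is copied; a move from /mnt/cache to /mnt/user crosses mount points, rename() fails with EXDEV, and mv silently falls back to copy + delete, which is the extra read/write the cache drive sees.

```shell
#!/bin/sh
# Same-filesystem move: mv just relinks the directory entry.
work=$(mktemp -d)
echo "payload" > "$work/file"

before=$(ls -i "$work/file" | awk '{print $1}')   # inode before the move
mv "$work/file" "$work/moved"
after=$(ls -i "$work/moved" | awk '{print $1}')   # inode after the move

# Identical inode numbers show the data was never rewritten. A move
# across mount points (cache -> user share) cannot preserve the inode,
# so mv degrades to a full copy followed by a delete.
[ "$before" = "$after" ] && echo "same-fs mv is a rename, no copy"
rm -rf "$work"
```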
  22. > I'm going to have to disagree with you on that one. You're using something like mv /mnt/cache/downloads/somefile /mnt/user/cachedshare/somefile; you're referencing two different mount points in the mv command (cache and user). It's not unRaid's fault; it's how all the OSes out there work.

      Oh, I didn't know that.
  23. > Isn't that because ultimately you're telling NZBGet to move files from one container path to another container path (two separate mount points)? The base OS in the container has no clue that they're on the same drive, because of the two different mount points.
      >
      > If the issue is what I believe it is (2 separate container paths, /mnt/cache/downloads and /mnt/cache/sharename), you would fix it by using a single container path (/mnt/cache) mapped to something like cachedrive, and then tell NZBGet to use /cachedrive/downloads and /cachedrive/sharename. That way the base OS will realize they are all on the same drive and the move will be instantaneous.

      No, this is an issue with Unraid. You can test this in the console by using mv: use mv to move a file around on the cache drive and it will be instant, then use mv to move a file from the cache drive to a cached share like /mnt/user/cachedshare and it will cause a read and write.
  24. > Isn't that because ultimately you're telling NZBGet to move files from one container path to another container path (two separate mount points)? The base OS in the container has no clue that they're on the same drive, because of the two different mount points. It's really no different than using Windows to move a file from a network share on local drive D to another folder on local drive D without referencing the network share name: it will also take forever to do.
      >
      > If I understand, the issue is basically moving data from a cache location to a user share that is cache-enabled? Couldn't you just fix this by pointing the move to /mnt/cache/share, or would that cause problems?

      I could if Sonarr would let me, but it only lets you choose the destination folder, so it will always move the file to the cached user share, as that's where the app looks for files and moves them to. For example, even if I tell NZBGet to put the file inside /mnt/cache/share, the other app is looking for the files at /mnt/share, so it will try to move it to /mnt/share, and this causes the cache drive to read the file and write it back to the same location. Is there a way to make the cache folder also show what's in the user folder? Maybe that would fix my problem, or would it mess up the mover?