EMKO

Members
  • Posts: 186
  • Joined
  • Last visited

Everything posted by EMKO

  1. Quoted: "^^^ THIS IS TRUTH ^^^ Why isn't it working, then?" / "It is working. The multi-drive cache pool feature has nothing to do with the btrfs bug, so I don't understand your point."

     My reply: Cache drives are useless at the moment; they cause massive slowdowns, since Unraid doesn't realize it can simply move files that are already on the cache drive instead of copying them. When a file is on the cache drive and a program moves it into a cached share, the file should move quickly from one folder to another on the same drive; instead, Unraid reads the file and writes it back to the same hard drive, using up all of its speed. When this happens, any app running from that drive, like Plex, ends up buffering. If Unraid can fix it so the files just get moved, cache drives will be useful, but right now I can't use them.

     Quoted: "Let's not label a feature useless just because your use case has an issue. Cache drives improve write performance to user shares during real-time write operations, then migrate data to array devices on a schedule. Moving data from one area of cache to another is not the typical function for cache."

     My reply: Dockers run on the cache drive, no? And when an app needs to move a file so that it will be migrated to the array, we get slowdowns and unnecessary wear on the drive. Unraid should be able to know the file is already on the cache drive and move it instead of copying it over to another folder.

     Quoted: "Why does the file need to be moved to begin with? Why aren't you setting docker volumes to write data to cache-enabled user shares directly? You should not be saving files inside the containers themselves."

     My reply: What I mean is: if a docker like nzbget downloads something, it needs to put the file on the array, but I'd rather keep it on the cache until the mover moves it. So when nzbget puts the file on a cached share like /mnt/user/cachedshare, Unraid starts reading the file and writing it back to the cache drive into the cachedshare folder, when it could just move the file into place. While this happens, the drive reads at 50 MB/s and writes at 50 MB/s, killing the performance of anything else that is using the cache drive.
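The instant-move behavior described above comes down to a plain rename on one filesystem. A minimal sketch in Python, using a temp directory so it is safe to run anywhere (the /mnt/cache paths from the post are the real-world equivalent):

```python
# Sketch of why a move within one filesystem is instant: os.rename()
# (what `mv` does when source and destination share a filesystem) only
# updates metadata; no file data is re-read or re-written. A move
# routed through Unraid's /mnt/user FUSE layer instead falls back to a
# full copy + delete, which is the read/write load EMKO describes.
import os
import tempfile

base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "downloads"))
os.makedirs(os.path.join(base, "media"))

src = os.path.join(base, "downloads", "file.mkv")
dst = os.path.join(base, "media", "file.mkv")
with open(src, "wb") as f:
    f.write(b"\0" * (10 * 1024 * 1024))  # 10 MiB dummy file

os.rename(src, dst)  # metadata-only: instant regardless of file size
print(os.path.exists(dst) and not os.path.exists(src))  # True
```

The same rename syscall is what makes `mv cache/downloads/file cache/media/file` instant in post 5 below: both paths resolve to the same physical filesystem, so no data moves.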
  2. Quoted: "^^^ THIS IS TRUTH ^^^ Why isn't it working, then?" / "It is working. The multi-drive cache pool feature has nothing to do with the btrfs bug, so I don't understand your point."

     My reply: Cache drives are useless at the moment; they cause massive slowdowns, since Unraid doesn't realize it can simply move files that are already on the cache drive instead of copying them. When a file is on the cache drive and a program moves it into a cached share, the file should move quickly from one folder to another on the same drive; instead, Unraid reads the file and writes it back to the same hard drive, using up all of its speed. When this happens, any app running from that drive, like Plex, ends up buffering. If Unraid can fix it so the files just get moved, cache drives will be useful, but right now I can't use them.

     Quoted: "Let's not label a feature useless just because your use case has an issue. Cache drives improve write performance to user shares during real-time write operations, then migrate data to array devices on a schedule. Moving data from one area of cache to another is not the typical function for cache."

     My reply: Dockers run on the cache drive, no? And when an app needs to move a file so that it will be migrated to the array, we get slowdowns and unnecessary wear on the drive. Unraid should be able to know the file is already on the cache drive and move it instead of copying it over to another folder.
  3. Quoted: "The multi-drive cache pool feature has nothing to do with the btrfs bug, so I don't understand your point."

     My reply: Cache drives are useless at the moment; they cause massive slowdowns, since Unraid doesn't realize it can simply move files that are already on the cache drive instead of copying them. When a file is on the cache drive and a program moves it into a cached share, the file should move quickly from one folder to another on the same drive; instead, Unraid reads the file and writes it back to the same hard drive, using up all of its speed. When this happens, any app running from that drive, like Plex, ends up buffering. If Unraid can fix it so the files just get moved, cache drives will be useful, but right now I can't use them.
  4. Any workaround to fix this problem? Every time this type of move happens, it kills the hard drive's performance, since the same file is being read from and written back to the same disk, and this causes issues with Plex transcoding for me. Is there any way to trick the apps into using /mnt/cache/tv shows while still seeing the /mnt/user/tv shows content inside that folder? Or would this mess up the mover?
  5. 1. Speed is the same; it's still copying the file data over instead of just moving it. Doing a mv from cache/downloads/file to cache/media/file is instant. 2. /mnt/ and the config are the only mappings. 3. Changing it so that the app uses cache/tv/ will make it instant, but then the app can't see all the other files, as they are on user/tv/. Example: set a TV show in Sonarr to "cache/tv/tv show name"; when the file gets downloaded, it will be moved instantly.
  6. /usr/local/sbin/plugin install https://raw.githubusercontent.com/limetech/unRAIDServer/master/unRAIDServer.plg 2>&1
     plugin: installing: https://raw.githubusercontent.com/limetech/unRAIDServer/master/unRAIDServer.plg
     plugin: downloading https://raw.githubusercontent.com/limetech/unRAIDServer/master/unRAIDServer.plg
     plugin: running: 'anonymous'
     plugin: creating: /tmp/unRAIDServer.zip - downloading from URL https://s3.amazonaws.com/dnld.lime-technology.com/beta/unRAIDServer-6.0-beta12-x86_64.zip
     plugin: running: 'anonymous'
     Archive: ../unRAIDServer.zip
       inflating: bzimage
       inflating: bzroot
        creating: config/
      extracting: config/go
       inflating: config/network.cfg
      extracting: config/disk.cfg
       inflating: config/share.cfg
       inflating: config/ident.cfg
        creating: config/plugins/
       inflating: license.txt
       inflating: make_bootable.bat
       inflating: make_bootable_mac
       inflating: memtest
       inflating: readme.txt
        creating: syslinux/
       inflating: syslinux/menu.c32
       inflating: syslinux/ldlinux.c32
       inflating: syslinux/syslinux.cfg-
       inflating: syslinux/mbr.bin
       inflating: syslinux/mboot.c32
       inflating: syslinux/syslinux.cfg
       inflating: syslinux/syslinux.exe
       inflating: syslinux/syslinux
       inflating: syslinux/libutil.c32
       inflating: syslinux/make_bootable_mac.sh
       inflating: xen
     syncing... Update successful - PLEASE REBOOT YOUR SERVER
     Warning: simplexml_load_file(): I/O warning : failed to load external entity "/tmp/plugins/unRAIDServer.plg" in /usr/local/emhttp/plugins/plgMan/plugin on line 165
     Warning: copy(/tmp/plugins/unRAIDServer.plg): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/plgMan/plugin on line 420
     plugin: installed

     Looks like it works, thanks!
  7. Can I update from the plugins page on the web UI?
  8. This is how I fixed the problem: edit C:\Windows\System32\drivers\etc\lmhosts as admin and add at the bottom: 192.168.1.121 Tower #PRE
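For reference, the #PRE tag in that entry tells Windows to preload the mapping into its NetBIOS name cache at startup, so name lookups for the server never have to go over the network. A sketch of the resulting file (the IP is the one from the post; substitute your own server's address):

```
# C:\Windows\System32\drivers\etc\lmhosts
# The #PRE tag preloads this mapping into the NetBIOS name cache.
192.168.1.121    Tower    #PRE
```

Running `nbtstat -R` in an elevated command prompt purges and reloads the name cache from lmhosts, so the entry takes effect without a reboot.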
  9. I found one nzbdrone docker that works correctly: https://registry.hub.docker.com/u/aostanin/nzbdrone/ I am using this until needo's docker gets fixed.
  10. Also back up the flash, just in case an upgrade goes bad.
  11. I am running Needo's NzbDrone docker. Everything that NzbDrone does works fine; it's just that the UI will not auto-update, because it can never GET http://tower:8015/signalr/negotiate?_=1415764771143 (500 Internal Server Error). So clicking something like RSS Sync will show "[rsssync] starting" at the bottom right, but it will never show progress or completion; you just have to refresh. Any UI element that works this way (the automatic/manual search buttons, calendar, activities, etc.) will not auto-update; to see changes you have to refresh the browser yourself. This also prevents clicking a few episodes to download at a time, as it will only accept one; you have to refresh for each automatic-search action. The docker aostanin/nzbdrone works and its UI functions correctly, but it runs as root and I have no idea how to configure where it installs. Right now I went back to the nzbdrone plugin, as that works, but I would prefer an nzbdrone docker that works. Is this a docker limitation?
  12. Quoted: "I'm not sure what you're referring to? The versions from needo and gfjardim should both unrar fine. Neilt0 has posted about an issue that's specific to very long file names, but I have yet to run into this problem during normal operation."

      My reply: Ah, sorry, I didn't know that; I am still on 5.0.5, just trying to find out what I need to set up when I install 6. What's the difference between the needo and gfjardim versions?

      Quoted: "I prefer the gfjardim version because you can use the Upgrade inside the Nzbget web-ui, and you can even switch between the stable, beta, and development branches. Maybe the needo version supports that now too, but I had problems before, and since I switched to the gfjardim one, it works."

      My reply: Thanks, that does sound awesome.
  13. Quoted: "I'm not sure what you're referring to? The versions from needo and gfjardim should both unrar fine. Neilt0 has posted about an issue that's specific to very long file names, but I have yet to run into this problem during normal operation."

      My reply: Ah, sorry, I didn't know that; I am still on 5.0.5, just trying to find out what I need to set up when I install 6. What's the difference between the needo and gfjardim versions?
  14. Is there a working docker for nzbget that will unrar?
  15. EMKO

    HGST 8T HelioSeal

    Soon the 10 TB version will be out. If this increase in capacity keeps going up, is there any way to increase parity-check speeds? Or are we forever going to keep getting longer and longer checks?
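A back-of-the-envelope answer to why checks keep getting longer: a parity check must read every sector of the largest drive, so its duration scales with capacity divided by sustained read speed, and capacity has been growing faster than drive throughput. A rough sketch (the speeds below are illustrative assumptions, not measured figures):

```python
# Rough parity-check duration estimate: every sector of the largest
# drive must be read once, so time ~ capacity / sustained read speed.
# The speeds used here are illustrative assumptions, not benchmarks.
def parity_check_hours(capacity_tb: float, avg_speed_mb_s: float) -> float:
    """capacity_tb in decimal TB, avg_speed_mb_s in MB/s."""
    total_mb = capacity_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal)
    return total_mb / avg_speed_mb_s / 3600

print(round(parity_check_hours(4, 120), 1))   # ~9.3 hours for a 4 TB drive
print(round(parity_check_hours(8, 140), 1))   # ~15.9 hours for an 8 TB drive
print(round(parity_check_hours(10, 150), 1))  # ~18.5 hours for a 10 TB drive
```

Doubling capacity while speed grows only modestly roughly doubles the check time, which matches the trend the post complains about.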
  16. EMKO

    HGST 8T HelioSeal

    How long is a parity check on one of these? LOL
  17. Yes please. Without it, accessing a single file wakes up the whole array; with it on, only the drive the file is on will wake up, and just browsing the shares is very smooth and fast. Even though cache_dirs is a weird fix, it does work; at least have 6 include it until (or unless) you guys can figure out a proper way to do it.
  18. I don't know if this is the same problem, but I also get movie files freezing on 5.0.5. What I found that fixes the problem is not using //tower but instead //192.168.1.121, the IP for the server. I don't know why this works, but I don't get files freezing anymore.
  19. The downloading works; it's just slower. It looks like some of the connections time out after a while because of this. I will wait a few more days and see what happens. Thanks.
  20. Smart idea. I just installed nzbget on a Windows machine and I get the same "Could not read from TLS-Socket: Connection closed by remote host". Does this mean it's the Usenet provider, or could this be a router issue? Thanks.
  21. "TLS handshake failed": anyway, I installed perl and ca-certificates and I still get the same problem.
  22. Yeah, I am using SSL to connect to a Usenet provider; that's where I see the errors, and it's causing it to hang on a connection, making downloads very slow now.
  23. Sorry, I have no idea how to do that. Also, why was Unraid working fine with SSL for years, and only now is it giving me problems?
  24. I didn't have this problem before; it just started about a week ago. I get these errors: "TLS handshake failed", "Could not read from TLS-Socket: Connection closed by remote host". Does anyone know what this means and how I can fix it?
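One way to separate a provider-side problem from a local nzbget or certificate problem is to attempt the TLS handshake yourself, outside nzbget. A hedged sketch in Python (the hostname is a placeholder; 563 is the conventional NNTP-over-TLS port, but substitute your provider's host and port):

```python
# Quick TLS handshake probe: returns the negotiated protocol version on
# success, or the error message on failure. If this fails from a second
# machine too, the problem is likely the provider or the network path,
# not the nzbget installation.
import socket
import ssl

def probe_tls(host: str, port: int, timeout: float = 5.0) -> str:
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return f"ok: {tls.version()}"
    except (OSError, ssl.SSLError) as exc:
        return f"error: {exc}"

# Placeholder host; against a real provider this prints either
# something like "ok: TLSv1.2" or the handshake error message.
print(probe_tls("news.example.com", 563))
```

A clean "ok" here while nzbget still fails points at the client configuration; the same "connection closed by remote host" here points at the provider or an intermediate device.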