
Everything posted by strike

  1. Just map a directory in RAM to a directory inside the container, then choose that directory in the Emby transcode settings. I have mine mapped like this:
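As a sketch of that mapping (the container name and paths below are hypothetical examples, assuming an unRAID host where /tmp is RAM-backed tmpfs):

```shell
# Sketch only, assuming unRAID where /tmp lives in RAM (tmpfs).
# Container name and paths are hypothetical examples.
mkdir -p /tmp/transcode          # RAM-backed directory on the host

# Map it into the container; equivalent to adding a path mapping in the
# unRAID docker template (host: /tmp/transcode, container: /transcode):
# docker run -d --name=emby -v /tmp/transcode:/transcode ...

ls -ld /tmp/transcode            # verify the host directory exists
```

Then point Emby's transcode path at /transcode in its transcoding settings.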
  2. Afaik the LabelPlus plugin is an enhanced 3rd-party version of the Label plugin included with deluge. I only need the Label plugin, which is included in v1. Are you saying that plugin doesn't work either?
  3. I haven't really been following this thread after the v2 release, but can anyone using v2 confirm if the label plugin is working? That's really the only plugin I need.
  4. https://forums.unraid.net/topic/53520-support-linuxserverio-ombi/?do=findComment&comment=771317
  5. Known issue, fixed in next unraid release: https://forums.unraid.net/bug-reports/stable-releases/dockers-wanting-to-update-but-dont-in-the-end-r618/
  6. Yes, that's the one. I haven't tested it, but I think if you have this you'll be fine even on the first big transfer (I think). But I want to rephrase my previous statement a little. I've been saying this is an issue only on the first big transfer, but that's not correct. To be clear, this issue happens if your split level is not set to "Automatically split any directory as required" and you're trying to rsync a large batch of files. It doesn't even have to be that large a transfer to trigger this. When you do an rsync transfer, rsync will create ALL the folders on the first available disk (according to allocation method) BEFORE transferring any files. Split level is about keeping files together so they don't end up on different disks, so if you have the wrong split level, unRAID will try to force some files onto a disk which is already full, trying to keep the files together according to the set split level. Since this is just a backup I wouldn't worry about where files end up anyway.
Put those two points together and you'll see that this issue can happen on even a small transfer. Let's say you have transferred all your files, and in 2 months' time you have 500 new folders with pictures in them, roughly 5 GB in total, which you want to back up. The next disk in line to get new files based on allocation method is disk 3, which only has 2 GB of available space left. Your split level is set wrong, and because of that and this rsync behavior, all 500 folders will be created on disk 3, even though some of the files in those folders won't make it there because the disk is completely full by the time rsync gets to them. rsync won't choose another disk either, because split level goes before everything, so the transfer will just fail with out-of-space errors.
So for this to be set-it-and-forget-it you will have to choose the first split level as mentioned, and you will have to set a minimum free space limit so that unRAID will choose another disk when a disk has less space than the limit. Since this is just a backup I think I would set the allocation method to fill-up, if you're not concerned about evenly filling the disks, that is. I try to keep at least 30 GB of available space left on all my disks.
  7. The password for the webui is "deluge". The auth file is for configuring the user/pass for connecting to the deluge daemon (thin client).
  8. Any news on adding the geoip2 module? I see from this link that @aptalca submitted a PR: https://gitlab.alpinelinux.org/alpine/aports/issues/10068 Edit: Maybe I should try and update the container; clicking the PR link I see it was added to 3.10. Yup, an update was all that was needed. I love you guys! 😍
  9. I was under the impression that it would always speed up writes (assuming all disks have good read performance), but I haven't really used turbo write so I'll take your word for it.
  10. First of all, NEVER copy from a disk share to a user share or vice versa; this can result in data loss. Google: site:forums.unraid.net user share copy bug (first link). Second, the reason it's reading from all your disks is because you have RECONSTRUCT WRITE (aka turbo write) enabled. This will increase write speed, not decrease it, BUT it will always read from all your disks. Google: site:forums.unraid.net turbo write (first link). I don't have time to explain either of these right now, so I suggest you google them. If you use the above search terms and click the first link it will explain all you need to know.
  11. There's a plugin which has the feature you want: https://forums.unraid.net/topic/77302-plugin-disk-location/
  12. You need this: https://forums.unraid.net/topic/77813-plugin-linuxserverio-unraid-nvidia/
  13. Sorry for the late reply, but in case you haven't been using your google-fu, the exact command is: tar -xvf example.tar FolderName/ This will extract the folder named "FolderName" and all its contents to the directory you're currently in.
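A self-contained sketch of that extraction (the archive and folder names are made up for illustration):

```shell
# Create a throwaway archive containing FolderName/ to demonstrate with
mkdir -p /tmp/tartest/FolderName
echo "data" > /tmp/tartest/FolderName/file.txt
tar -cf /tmp/example.tar -C /tmp/tartest FolderName/

# Extract only FolderName/ (and everything under it) into the current dir
mkdir -p /tmp/extracted
cd /tmp/extracted
tar -xvf /tmp/example.tar FolderName/
```

Naming a directory member extracts the directory and everything beneath it, which is why a single tar invocation pulls out the whole folder.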
  14. So you're saying there actually was a leak? LOL, then I actually have to apologize. The whole story didn't add up to me, but ok..
  15. I'm sorry, I'm not trying to be an ass, but sounds like BS to me. How do you know it's not working? Since you asked how you could check for IP leakage I'm guessing you don't have the knowledge to do so yourself? So what changed between your first and second post? Did you read up on how to use wireshark or something and actually test it? And I find it very unlikely that between your docker backup this morning and the supposed leakage you got some letter from your ISP delivered to you by express mail (or a drone maybe). Because how else would you know it was an IP leak when you don't know how to test it? If my assumption is wrong I apologize, but your story sounds like total BS to me. If not, you surely have some proof of your theory?
  16. You could do it from the command line; I don't remember the exact command, but google should know. You could also extract it using WinRAR or something, but then you might screw up some permissions and/or symlinks, so to be on the safe side I would do it from the command line.
  17. I ran some tests myself and I got about the same speed as you. I did, however, see about a 20 MB/s difference between the array write speed that test was showing and the speed reported on the dashboard in the webui. I don't know which is more correct. Either way, the speed we're both seeing is about what can be expected in unRAID. So based on your tests your disks are working fine (for writes) and any hardware issues can be ruled out. That leaves us with software, the network, any overhead that might sit between the two, and unRAID itself. All we did was test the write speed of the disks, and since you were also copying from and to your array in your earlier tests, we also need to test the read performance of your disks to rule out any disk problems. If there's an issue reading from one or more of your disks, it can have a major impact on write speed to the array (using turbo write), since that requires reading from all of the disks. It can also impact copying from your array to cache (or any other disks outside the array), and parity-check speed as well. You can test the read performance of your disks with the diskspeed docker container.
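Besides the diskspeed container, a quick-and-dirty read check with dd is also possible. A sketch, with hypothetical paths (on a real test you would point dd at a large file on the disk in question and drop the page cache first, so you measure the disk rather than RAM):

```shell
# Write a small test file first (10 MB here; use a much larger file
# for a meaningful read test on real hardware)
dd if=/dev/zero of=/tmp/readtest.bin count=10 bs=1024k

# On a real test, drop caches first (needs root) so the read hits the disk:
# sync; echo 3 > /proc/sys/vm/drop_caches

# Read the file back and discard the data; dd reports throughput on stderr
dd if=/tmp/readtest.bin of=/dev/null bs=1024k
```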
  18. No, that doesn't look right... The speed is way too high. Are you sure you entered a directory on the cache drive in the terminal before running the command? To me it looks like you ran the command in a directory which lives in RAM, which will give you that kind of high speed. And btw, this command does not test raw disk speed, it tests the actual write speed, and you will see both of the parity drives updating if you write to the array. I'm not sure how familiar you are with the command line, but you either need to use the cd command and type the path manually, or you can do what I usually do when I'm not sure where I want to go (or don't remember the path): use Midnight Commander (by typing mc and hitting enter) to navigate to the right directory, then quit mc by pressing F10, which will always leave you in the directory you were last in, with the path filled in for you on the command line. So to simplify things: use mc, navigate to the right path, quit mc, then run the above command. And to be able to run the command on a specific disk in the array I think you need to enable disk shares. If you want to know exactly what the command does, this link explains the use case pretty simply: https://skorks.com/2010/03/how-to-quickly-generate-a-large-file-on-the-command-line-with-linux/ Edit: About disk shares, if you've not used them before or are not completely aware of the "user share copy bug", it's best not to enable them. But if you do, be sure to NEVER copy anything from a user share to a disk share or vice versa.
  19. Thanks, I love this community, I'm learning something new almost every day! I will try to lower it even further to test. I can't really say I remember changing the value, so the current value must be the default (on my system anyway). Any other thoughts if my tests fail? Why does the performance hit (for the most part) the last 4 cores? Or does it just have to do with the fact that there is a VM running on them, and somehow VMs get a massive performance hit during a dual parity-check? I mean, if I take the workload on the first 4 cores into consideration and "add it" to the last 4 cores, it should not have impacted the VM that heavily.
  20. The reason I asked about the MB was mainly to know if the drives are connected to SATA2 or SATA3 ports. And you didn't answer @testdasi's question about TRIM on the cache drive. Do you have it regularly scheduled, or has it not been trimmed in a while? I don't have time to look at the diags right now, but let's do some simple performance tests without any network, docker containers and whatnot involved, to rule some things out. Your cache drive should absolutely perform better than what your tests are showing. SSH in or open the terminal in the webui, navigate to a cache-only share on the cache drive and run this command: dd if=/dev/zero of=file.txt count=5k bs=1024k This will write a 5GB file to the drive and give you some stats; please post the output here. Run it 3-4 times to get the average and post the result. Do the same for the array drives, first directly to a single disk, then to a user share which does not have "Use cache" set to Yes, and post the results. I don't know if it makes a difference to write to a specific drive or a user share (it shouldn't), but it's fun to test anyway. Also do the same test on the array with turbo write enabled. It was not clear from your posts, but it seemed like you did most of the tests on the cache drive, and turbo write only works on the array. It's been a while since I've done any tests on my system, but I vaguely remember speeds around what you say you had earlier: about 50 MB/s to the array without turbo write. The cache drive I think had around 300-400 MB/s.
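The same command, scaled down to 50 MB here so it's quick to try anywhere (on the server you'd cd into the share or disk directory first and use count=5k for the full 5 GB):

```shell
# Write 50 x 1 MiB blocks of zeros; dd prints elapsed time and
# throughput (MB/s) on stderr when it finishes
cd /tmp
dd if=/dev/zero of=file.txt count=50 bs=1024k
ls -lh file.txt   # the test file; delete it when you're done
```

Because the input is /dev/zero, this measures write throughput only; compression or caching layers (none on a plain unRAID array disk) could inflate the number.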
  21. You are providing too little information for anyone to be able to figure out the issue. Please attach the full diagnostics .zip in your next post. How are the drives connected? To an HBA card or to the MB ports? If connected to the MB, have you made sure it's a SATA3 port and not a SATA2 port? Many MBs have both, old ones in particular. When you say new/different MB with dual CPU I automatically assume older server-grade MB/CPU, because that's what most dual-CPU users around here use (from what I've seen anyway). Unless you spent a lot of $$$.. And I have to guess, since there's no info. You should also fix the spamming in your syslog; if it's the "SAS drive spin down bug" causing that, like you said, you should disable spin-down for the parity drives if they are unable to spin down anyway. And disk 29 is the parity2 drive, FYI.
  22. Well, after resuming the parity-check again and keeping an eye on the CPU usage, I'm not entirely convinced by your theory. The total CPU usage never goes over 55%. Idle, when the parity-check is not running, the usage is about 20%. I don't have any emby users on right now, but my VM started to lag/freeze almost the second I resumed the parity-check. The first 4 cores are barely being used (10-30%); occasionally some of the threads spike to about 70% for a second. The last 4 cores, which my VM uses, are another story though. 3-4 of the total 8 threads are almost constantly at 100%; it drops for a few seconds occasionally but goes right back to 100%. Which threads it is varies, but for the most part it seems to be one full core and one or two HT siblings. The others spike as well, but they never stay at 100%. My emby container only uses the first 4 cores and the VM uses, as I said, the last 4. Every other container uses, for the most part, the first core, as they don't do any heavy lifting anyway. I've never heard anything from my emby users about lag/stops when I'm not running a parity-check, so I'm assuming 4 cores (even when transcoding) is enough. I use high priority on transcoding, so the 4 cores will get some workout, but only for about 1-2 min until the whole movie is done transcoding. I can see that maybe being a problem for the emby users while the parity-check is running, but it should only last, like I said, 1-2 min. Yet my emby users are reporting stops/lag very frequently, and that was long after transcoding finished too, so I don't think it has anything to do with the transcoding either. I even think they reported lag when there was no transcoding involved, IIRC, but I might be wrong about that. I have no idea how the parity-check schedules its work either, but from what I'm seeing it can use all cores. And had it only been one, I would have thought it would favor the first core? But as I said, the first 4 cores are barely being used.
For the emby problem, I guess I could change the transcode priority to low to check if that solves it. It would just take longer to transcode the whole movie. But I have no idea how to solve the VM issue.. Any thoughts? I could maybe reverse the workload (put the VM on the first 4 cores and emby on the last 4) and see if that helps, but I can't see any logical reason why that would work either, unless the parity-check heavily favors the last 4 cores.. I could work around it all by using the parity-check tuning plugin of course, and I most certainly will, but I would like to know the root cause, so that if I have to I can leave the parity-check running regardless of what the server is doing (well, not any gaming or heavy use, of course). What happens when I need to rebuild a disk, for example? Is my server unusable for like 20 hours?
  23. I may have missed something, as I only skimmed through your post, or I misunderstood something. But to me it seemed like the only way you got it to work was when you used the root user (which is the way it's supposed to work), and when you tried to put in another user it didn't work because it still used root (?). That's the way I read it, anyway. So to me, your statement is wrong. "You can only get it to work the way it's supposed to, not the way you want to" would be more correct, imo.
  24. Then I understand; as I said, I'm no nginx expert and I've never had the use case to try this. But I still can't see how this app would know that you're using an nginx reverse proxy and that header. I'm not a programmer either, but I don't think this app supports what you're trying to do.
  25. @mikeydk I've only skimmed the last posts, but I see you mention that nginx takes care of the login, and you also mention users other than root. I assume by nginx you mean reverse proxy? Can you explain what you think the reverse proxy part of nginx is doing in your setup? How exactly do you tell nginx what user to log into unRAID with? You say that nginx runs as root, and that it will always log into unRAID as root no matter what user you use. How do you do that? Do you change the PUID/PGID of the container? And do you expect it to then log into other services as the user you run the container as? Either I'm missing something, or you have misinterpreted how "nginx" (reverse proxy) works.
What the reverse proxy does is serve your local services over the internet via a domain. So when you go to your domain/IP, it serves the unRAID webui as if you were sitting locally. And since the only user that can log into the webui is root, how do you expect to log into "nginx" with a user other than root and expect "nginx" to magically put in the root user and password? I'm no nginx expert, so I'd like to know how you do this. I see you mention a .htpasswd file; all this does is put a simple login in front of the service behind the reverse proxy. Once logged in, it will still serve the local login of the service, if it has one. So essentially you have to log in twice: once with whatever user you set up in the htpasswd file, and once with the REAL username and password of the service.
So if you understand all this, I'm still at a loss as to how you expect this app (or any app really) to log into the webui (which ONLY accepts the root username and password) with another user. I don't care what magic the app (nginx) does; if it doesn't put the root username and password into the unRAID webui login box, it isn't going to work, as simple as that.