
strike

Members
  • Content Count

    413
  • Joined

  • Last visited

Community Reputation

39 Good

About strike

  • Rank
    Advanced Member


  1. Any news on adding the geoip2 module? I see from this link that @aptalca submitted a PR: https://gitlab.alpinelinux.org/alpine/aports/issues/10068 Edit: Maybe I should try updating the container. Clicking the PR link, I see it was added to 3.10. Yup, an update was all that was needed. I love you guys! 😍
  2. I was under the impression that it would always speed up writes (assuming all disks have good read performance), but I haven't really used turbo write so I'll take your word for it.
  3. First of all, NEVER copy from a disk share to a user share or vice versa, this can result in data loss. Google: site:forums.unraid.net user share copy bug (first link). Second, the reason it's reading from all your disks is because you have RECONSTRUCT WRITE (aka turbo write) enabled. This will increase write speed, not decrease it, BUT it will always read from all your disks. Google: site:forums.unraid.net turbo write (first link). I don't have time to explain either of these right now, so I suggest you google it. If you use the above search terms and click the first link it will explain all you need to know.
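     If you want to toggle it without digging through the webui, something like this should work from the terminal (I'm going from memory on the exact values here, so double-check against Settings > Disk Settings > Tunable (md_write_method) before relying on it):

     # switch back to the default read/modify/write mode
     mdcmd set md_write_method 0
     # re-enable reconstruct write (turbo write)
     mdcmd set md_write_method 1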
  4. There's a plugin which has the feature you want: https://forums.unraid.net/topic/77302-plugin-disk-location/
  5. You need this: https://forums.unraid.net/topic/77813-plugin-linuxserverio-unraid-nvidia/
  6. Sorry for the late reply, but in case you haven't been using your google-fu, the exact command is tar -xvf example.tar FolderName/ This will extract the folder named "FolderName" and all its contents to the directory you're currently in.
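     A couple of related tar flags that might come in handy (standard GNU tar options; the paths here are just examples):

     # list the archive contents first so you get the folder name right
     tar -tf example.tar
     # extract to a specific destination instead of the current directory
     tar -xvf example.tar -C /mnt/user/SomeShare FolderName/
     # -p preserves permissions, useful when restoring appdata backups
     tar -xpvf example.tar FolderName/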
  7. So you're saying there actually was a leak? LOL, then I actually have to apologize. The whole story didn't add up to me, but ok..
  8. I'm sorry, I'm not trying to be an ass, but this sounds like BS to me. How do you know it's not working? Since you asked how you could check for IP leakage, I'm guessing you don't have the knowledge to do so yourself. So what changed between your first and second post? Did you read up on how to use wireshark or something and actually test it? And I find it very unlikely that between your docker backup this morning and the supposed leakage you got a letter from your ISP delivered to you by express mail (or a drone maybe). Because how else would you know it was an IP leak when you don't know how to test it? If my assumption is wrong I apologize, but your story sounds like total BS to me. If not, you surely have some proof of your theory?
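     For the record, a dead simple way to sanity-check a VPN container (assuming the container image has curl in it; icanhazip.com is just one of many such services):

     # apparent public IP from inside the VPN container
     docker exec <container-name> curl -s icanhazip.com
     # the IP your host goes out on, for comparison
     curl -s icanhazip.com
     # if the two match, traffic from the container is NOT going through the VPN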
  9. You could do it from the command line. I don't remember the exact command, but google should know. You could also extract it using winrar or something, but then you might screw up some permissions and/or symlinks, so to be on the safe side I would do it from the command line.
  10. I ran some tests myself and I got about the same speed as you. I did, however, see about a 20MB/s difference between the write speed to the array that the test was showing and the speed reported on the dashboard in the webui. I don't know which is more correct. Either way, the speed we're both seeing is about what can be expected in unraid. So based on your tests, your disks are working fine (for writes) and any hardware issues can be ruled out. That leaves us with software, network, any overhead that might sit between the two, and unraid itself.
      So far all we did was test the write speed of the disks, and since you were also copying from and to your array in your earlier tests, we also need to test the read performance of your disks to rule out any disk problems. If there's an issue reading from one or more of your disks, it can have a major impact on the write speed to the array (using turbo write), since that requires reading from all of the disks. It can also have an impact when copying from your array to cache or any other disks outside the array, and on parity-check speed as well. You can test the read performance of your disks with the diskspeed docker container.
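      If you want a quick and dirty read test from the terminal in the meantime, something like this should do (the disk paths are examples, adjust to your setup, and only ever read from raw devices, never write):

      # sequential read speed straight off a drive, if hdparm is available
      hdparm -t /dev/sdb
      # or read 5GB off an array disk and throw it away, bypassing the cache
      dd if=/dev/md1 of=/dev/null bs=1M count=5k iflag=direct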
  11. No, that doesn't look right... The speed is way too high. Are you sure you entered a directory on the cache drive in the terminal before you ran the command? To me, it looks like you maybe ran the command in a directory which lives in RAM, which will give you that kind of high speed (there's a quick way to check this, see the end of this post). And btw, this command does not test raw disk speed, it tests the actual write speed, and you will see both of the parity drives updating if you're writing to the array.
      I'm not sure how familiar you are with the command line, but you either need to use the cd command and type the path manually, or you can do what I usually do when I'm not sure where I want to go (or don't remember the path): use midnight commander (type mc and hit enter) to navigate to the right directory, then quit MC by pressing F10. That will always leave you in the directory you were last in, with the path filled in for you on the command line. So to simplify things: use MC, navigate to the right path, quit MC, then run the above command. And to be able to run the command on a specific disk in the array, I think you need to enable disk shares. If you want to know exactly what the command does, this link explains the use case pretty simply: https://skorks.com/2010/03/how-to-quickly-generate-a-large-file-on-the-command-line-with-linux/
      Edit: About disk shares, if you've not used them before or are not completely aware of the "user share copy bug", it's best not to enable them. But if you do enable them, be sure to NEVER copy anything from a user share to a disk share or vice versa.
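      The quick check I mentioned, to make sure you're actually in a directory on the cache drive before running the command:

      # shows which filesystem the current directory lives on
      df -h .
      # if it reports tmpfs or rootfs you're in RAM, not on a disk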
  12. Thanks, love this community, I'm learning something new almost every day! I will try to lower it even further to test. Can't really say I remember changing the value, so the current value must be the default (on my system anyway). Any other thoughts if my tests fail? Why does the performance hit land (for the most part) on the last 4 cores? Or does it just have to do with the fact that there is a VM running on them, and somehow VMs get a massive performance hit during a dual parity-check? I mean, if I take the workload on the first 4 cores into consideration and "add it" to the last 4 cores, it should not have impacted the VM that heavily.
  13. The reason I asked about the MB was mainly to know if the drives are connected to SATA2 or SATA3 ports. And you didn't answer @testdasi's question about TRIM on the cache drive. Do you have it regularly scheduled, or has it not been trimmed in a while? I don't have time to look at the diags right now, but let's do some simple performance tests without any network, docker containers and whatnot involved to rule some things out. Your cache drive should absolutely perform better than what your tests are showing.
      SSH in or open the terminal in the webui, navigate to a cache-only share on the cache drive and run this command: dd if=/dev/zero of=file.txt count=5k bs=1024k This will write a 5GB file to the drive and give you some stats; please post the output here. Run it 3-4 times and post the results so we get an average. Do the same for the array drives, first directly to a single disk, then to a user share which does not have Use cache set to Yes, and post the results. I don't know if it makes a difference whether you write to a specific drive or a user share (it shouldn't), but it's fun to test anyway. Also run the same test on the array with turbo write enabled; it was not clear from your posts, but it seemed like you did most of the tests on the cache drive, and turbo write only works on the array.
      It's been a while since I've done any tests on my system, but the speed you say you had earlier, around 50MB/s to the array without turbo write, sounds about right. The cache drive I think did around 300-400MB/s.
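      And regarding TRIM, if it turns out it hasn't been running, you can fire it off manually like this (unraid mounts the cache at /mnt/cache; the Dynamix SSD TRIM plugin can schedule it for you):

      # trim the cache drive and report how much space was discarded
      fstrim -v /mnt/cache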
  14. You are providing too little information for anyone to be able to figure out the issue. Please attach the full diagnostics .zip in your next post. How are the drives connected? To an HBA card or to the MB ports? If connected to the MB, have you made sure they're connected to a SATA3 port and not a SATA2 port? Many MBs have both, old ones in particular. When you say new/different MB with dual CPU, I automatically assume older server-grade MB/CPU, because that's what most dual CPU users around here use (from what I've seen anyway). Unless you spent a lot of $$$.. And I have to guess, since there isn't any info. You should also fix the spamming in your syslog; if it's the "SAS drive spin down bug" causing that like you said, you should disable spin down for the parity drives if they are unable to spin down anyway. And disk 29 is the parity2 drive, FYI.
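      If the webui way (Tools > Diagnostics) is a hassle, I believe you can also generate the zip from the terminal; it should end up in the logs folder on the flash drive:

      # writes an anonymized diagnostics zip to /boot/logs
      diagnostics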
  15. Well, after resuming the parity-check again and keeping an eye on the CPU usage, I'm not entirely convinced by your theory. The total CPU usage never goes over 55%. Idle, when the parity-check is not running, the usage is about 20%. I don't have any emby users on right now, but my VM started to lag/freeze almost the second I resumed the parity-check. The first 4 cores are barely being used (10-30%); occasionally some of the threads spike to about 70% for one second. The last 4 cores, which my VM uses, are another story though. 3-4 of the total 8 threads are almost constantly at 100%; it goes down for a few seconds occasionally but right back to 100%. Which threads it is varies, but for the most part it seems to be one full core and one to two HT threads. The others spike as well, but they never stay at 100%.
      My emby container only uses the first 4 cores and the VM uses, as I said, the last 4. Every other container uses, for the most part, the first core, as they don't do any heavy lifting anyway. I've never heard anything from my emby users about lag/stops when I'm not running a parity-check, so I'm assuming 4 cores (even when transcoding) is enough. I use high priority on transcoding, so the 4 cores will have some workout, but only for about 1-2 min until the whole movie is done transcoding. I can see that maybe being a problem for the emby users when the parity-check is running, but it should only last, like I said, 1-2 min. But my emby users are reporting stops/lag very frequently, and that was long after the transcoding was finished too. So I don't think it has anything to do with the transcoding either. I even think they reported lag when there was no transcoding involved too, IIRC, but I might be wrong about that.
      I have no idea how this works either, but from what I'm seeing it can use all cores. And had it only been one, I would have thought it would favor the first core? But yeah, as I said, the first 4 cores are barely being used. For the emby problem, I guess I could change the transcode priority to low to check if that solves it. It would just take longer to transcode the whole movie. But I have no idea how to solve the VM issue.. Any thoughts? I could maybe reverse the workload (put the VM on the first 4 cores and emby on the last 4) and see if that helps. But I can't see any logical reason why that would work either, unless the parity-check heavily favors the last 4 cores..
      I could work around it all by using the parity-check tuning plugin of course, and I most certainly will, but I would like to know the root cause, so that if I have to I can leave the parity-check running regardless of what the server is doing (well, not any gaming or heavy use of course). What happens when I need to rebuild a disk, for example, is my server unusable for like 20 hours?