strike

Members
  • Content Count

    417
  • Joined

  • Last visited

Community Reputation

39 Good

About strike

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed


  1. https://forums.unraid.net/topic/53520-support-linuxserverio-ombi/?do=findComment&comment=771317
  2. Known issue, fixed in next unraid release: https://forums.unraid.net/bug-reports/stable-releases/dockers-wanting-to-update-but-dont-in-the-end-r618/
  3. Yes, that's the one. I haven't tested it, but I think if you have this you'll be fine even on the first big transfer. But I want to rephrase my previous statement a little: I've been saying this is an issue only on the first big transfer, but that's not correct. To be clear, this issue happens if your split level is not set to "Automatically split any directory as required" and you're trying to rsync a large batch of files. It doesn't even have to be that large a transfer to trigger it.

     What happens when you do an rsync transfer is that rsync creates ALL the folders on the first available disk, according to the allocation method, BEFORE transferring any files. Split level is about keeping files together so they don't end up on different disks, so if you have the wrong split level, unraid will try to force some files onto a disk which is already full, just to keep the files together according to the set split level. Since this is just a backup, I wouldn't worry about where files end up anyway.

     Put those two paragraphs together and you'll see that this issue can happen on even a small transfer. Let's say you have transferred all your files, and in two months' time you have 500 new folders with pictures in them, roughly 5 GB in total, which you want to back up. The next disk in line to get new files based on the allocation method is disk 3, which only has 2 GB of free space left. Because your split level is set wrong, and because of this rsync behaviour, all 500 folders will be created on disk 3, even though some of the files in those folders won't make it there, since the disk will be completely full by the time rsync gets to them. Rsync won't choose another disk either, because split level takes precedence over everything, so the transfer will just fail with out-of-space errors.

     So for this to be set-it-and-forget-it you have to choose the first split level as mentioned, and you have to set a minimum free space limit so that unraid will choose another disk if the current one has less space than the limit. Since this is just a backup, I think I would set the allocation method to fill-up, if you're not concerned about filling the disks evenly. I try to keep at least 30 GB of free space on all my disks. (There's a rough rsync sketch after this list.)
  4. The password for the webui is "deluge". The auth file is for configuring the user/pass for connecting to the deluge daemon (thin client). (There's an example auth file after this list.)
  5. Any news on adding the geoip2 module? I see from this link that @aptalca submitted a PR: https://gitlab.alpinelinux.org/alpine/aports/issues/10068 Edit: Maybe I should just try updating the container; clicking the PR link I see it was added to 3.10. Yup, an update was all that was needed. I love you guys! 😍 (A rough sketch of the geoip2 nginx config is after this list.)
  6. I was under the impression that it would always speed up writes (assuming all disks have good read performance), but I haven't really used turbo write, so I'll take your word for it.
  7. First of all, NEVER copy from a disk share to a user share or vice versa; this can result in data loss. Google: site:forums.unraid.net user share copy bug (first link) Second, the reason it's reading from all your disks is because you have RECONSTRUCT WRITE (aka turbo write) enabled. This will increase write speed, not decrease it, BUT it will always read from all your disks. Google: site:forums.unraid.net turbo write (first link) I don't have time to explain either of these right now, so I suggest you google it. If you use the above search terms and click the first link it will explain all you need to know.
  8. There's a plugin which has the feature you want: https://forums.unraid.net/topic/77302-plugin-disk-location/
  9. You need this: https://forums.unraid.net/topic/77813-plugin-linuxserverio-unraid-nvidia/
  10. Sorry for the late reply, but in case you haven't been using your google-fu, the exact command is: tar -xvf example.tar FolderName/ This will extract the folder named "FolderName" and all its content to the directory you're currently in. (There are a couple of tar variations after this list.)
  11. So you're saying there actually was a leak? LOL, then I actually have to apologize. The whole story didn't add up to me, but OK...
  12. I'm sorry, I'm not trying to be an ass, but it sounds like BS to me. How do you know it's not working? Since you asked how you could check for IP leakage, I'm guessing you don't have the knowledge to do so yourself? So what changed between your first and second post? Did you read up on how to use Wireshark or something and actually test it? And I find it very unlikely that between your docker backup this morning and the supposed leakage you got some letter from your ISP delivered to you by express mail (or a drone, maybe). Because how else would you know it was an IP leak when you don't know how to test it? If my assumption is wrong I apologize, but your story sounds like total BS to me. If not, you surely have some proof of your theory?
  13. You could do it from the command line; I don't remember the exact command, but google should know. You could also extract it using WinRAR or something, but then you might screw up some permissions and/or symlinks, so to be on the safe side I would do it from the command line.
  14. I ran some tests myself and I got about the same speed as you. I did, however, see about a 20 MB/s difference between the write speed to the array that the test was showing and the speed reported on the dashboard in the webui. I don't know which is more correct. Either way, the speed we're both seeing is about what can be expected in unraid. So based on your tests your disks are working fine (for writes) and any hardware issues can be ruled out. That leaves us with software, the network, any overhead that might be between the two, and unraid itself. All we did was test the write speed of the disks, and since you were also copying from and to your array in your earlier tests, we also need to test the read performance of your disks to rule out any disk problems. If there's an issue reading from one or more of your disks, it will sometimes have a major impact on the write speed to the array (using turbo write), since that requires reading from all of the disks. It can also have an impact when copying from your array to cache or any other disk outside the array, and on parity-check speed as well. You can test the read performance of your disks with the diskspeed docker container. (There's also a quick manual read-test sketch after this list.)
  15. No, that doesn't look right... The speed is way too high. Are you sure you entered a directory on the cache drive in the terminal, then ran the command? To me, it looks like you maybe ran the command in a directory which lives in RAM, which will give you that kind of high speed. And by the way, this command does not test raw disk speed, it tests the actual write speed, and you will see both of the parity drives updating if writing to the array. I'm not sure how familiar you are with the command line, but you either need to use the cd command and type the path manually, or you can do what I usually do when I'm not sure where I want to go (or don't remember the path): use midnight commander (by typing mc and hitting enter) to navigate to the right directory, then quit MC by pressing F10, which will always leave you in the directory you were last in, with the path filled in for you on the command line. So to simplify things: use MC, navigate to the right path, quit MC, then run the above command (there's a sketch of the idea after this list). And to be able to run the command on a specific disk in the array I think you need to enable disk shares. If you want to know exactly what the command does, this link explains the use case pretty simply: https://skorks.com/2010/03/how-to-quickly-generate-a-large-file-on-the-command-line-with-linux/ Edit: About disk shares, if you've not used them before or are not completely aware of the "user share copy bug", it's best not to enable them. But if you do enable them, be sure to NEVER copy anything from a user share to a disk share or vice versa.
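
A rough rsync sketch for the backup scenario in item 3. The paths are just placeholders, not anything from the original thread; the important parts are having the split level set to "Automatically split any directory as required" and a "Minimum free space" on the share that's bigger than your largest file before you run it:

      # dry run first to see what would be transferred (made-up paths)
      rsync -avhn /mnt/source/Pictures/ /mnt/user/PicturesBackup/

      # then the real transfer
      rsync -avh --progress /mnt/source/Pictures/ /mnt/user/PicturesBackup/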
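For item 4, this is roughly what the Deluge auth file looks like, if I remember the format right: one line per user in the form username:password:level, where 10 is full access for the daemon/thin client. The usernames and passwords here are made up:

      localclient:9a8b7c6d5e4f:10
      myuser:mypassword:10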
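For item 5, a rough sketch of what loading and using the geoip2 module in nginx can look like once the module package is in the container. The database path, variable name and country code are only examples, not the container's actual defaults:

      # top of nginx.conf (main context); on Alpine the module usually lands here
      load_module /usr/lib/nginx/modules/ngx_http_geoip2_module.so;

      # inside the http block: map the client IP to a country code
      geoip2 /config/GeoLite2-Country.mmdb {
          $geoip2_country_code country iso_code;
      }

      # inside a server or location block: example of blocking everything but one country
      if ($geoip2_country_code != "SE") {
          return 403;
      }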
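For item 10, the same tar command plus a couple of variations I find handy; the archive and folder names are obviously placeholders:

      # extract just FolderName (and everything under it) into the current directory
      tar -xvf example.tar FolderName/

      # same, but extract into a specific destination directory
      tar -xvf example.tar -C /mnt/user/restore/ FolderName/

      # list the archive contents first if you're unsure of the exact folder name
      tar -tf example.tar | less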
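For item 14, the diskspeed docker container is the nicer way to do it, but if you just want a quick manual read check from the command line, something like this works (sdX is whichever disk you want to test; both commands only read, they don't write anything):

      # buffered sequential read timing, run it a few times and average
      hdparm -t /dev/sdX

      # or read a few GB straight off the disk, bypassing the cache
      dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct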
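For item 15, this is the kind of command I mean (same idea as in the skorks.com link): generate a big file with dd in a directory on the disk you want to test. The path is just an example; conv=fdatasync makes dd flush everything to disk before reporting the speed, so RAM caching doesn't inflate the number:

      cd /mnt/cache/speedtest        # or navigate there with mc first, then quit with F10
      dd if=/dev/zero of=testfile bs=1M count=10240 conv=fdatasync
      rm testfile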