Posts posted by trurl

  1. As for your third question, that is a case of a simple question that doesn't have a simple answer. Not because the answer is too hard necessarily, but because the question is too simple.

     

    In other words, it depends on a lot of factors which you haven't specified.

     

    Also, I don't have much experience with that either. I just have a simple Ubuntu VM that I seldom play with and it only uses VNC for display.

     

    Most of my needs are handled by dockers.

  2. I don't have any experience with Synology.

     

    Unraid IS NOT RAID. It is similar in some ways, in that it does allow one or two parity disks. But there is no striping. Each data disk in the array is an independent filesystem. Each file exists completely on a single disk. Folders can span disks (User Shares).

     

    Because there is no striping, read speed is at the speed of the single disk the file is on. Write speed is somewhat slower than single disk speed, since parity is realtime and must also be updated.

     

    But, because each disk is independent, each disk can easily be read on any Linux system independently of any other disk. You can use different-sized disks in the array, and easily replace or add disks without rebuilding the whole array.

     

    Each parity disk must be at least as large as any single data disk. Single parity allows you to recover a single drive, dual parity allows you to recover 2 drives simultaneously.

     

    Note that parity, whether RAID or Unraid, is NEVER a backup. Parity is basically the same concept wherever it is used in computing and communications. Parity is simply an extra bit that allows a missing bit to be calculated from all the other bits. So, all those other bits (disks) are required to recover the missing bits.
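
    To make the parity idea concrete, here is a minimal shell sketch using XOR on three made-up "data disk" bytes (the values are hypothetical; real parity does this across every bit of every data disk):

    # Three hypothetical data bytes
    d1=$(( 0x5A )); d2=$(( 0x3C )); d3=$(( 0x7E ))
    # Parity is the XOR of all the data
    parity=$(( d1 ^ d2 ^ d3 ))
    # If d2 is lost, recalculate it from parity plus ALL the remaining data
    recovered=$(( parity ^ d1 ^ d3 ))
    printf 'recovered d2 = 0x%X\n' "$recovered"   # prints 0x3C

    Note the recovery line needs every surviving value, which is exactly why all the other disks are required to rebuild a missing one.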

     

    This wiki overview is a good start to understanding Unraid:

     

    https://wiki.unraid.net/UnRAID_6/Overview

     

  3. 21 minutes ago, dsimmond1225 said:

    Do I just need copy over license file after update....using different USB drive so just in case.

    Might be simpler to try it on the existing flash first unless you think there is some problem with that. The license is associated with the flash it is on.

     

    If you do decide to use a new flash, you can do a license transfer that will give you a new license key file for the new flash. See the REPLACE REGISTRATION KEY link at the bottom of this page.

  4. 2 hours ago, j_mcarter2 said:

    recognize the new array

    Just make note of the disk assignments. Then you can do a clean install of the new version on the old flash (or a new flash and transfer the key). Then assign the disks as they were, taking care not to assign any data disk to the parity slot. A sketch of one way to keep a record is below.
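
    Beyond just noting the assignments, a hypothetical way to preserve the current configuration before the clean install (assuming console or SSH access to the running server, with the flash mounted at /boot as usual):

    # Copy the flash config folder somewhere safe before reinstalling
    # (destination path is illustrative; adjust to suit)
    mkdir -p /mnt/user/backups
    cp -r /boot/config /mnt/user/backups/flash-config-$(date +%Y%m%d)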

  5. Just now, dsimmond1225 said:

    will try that.  should this also fix my PLEX issues or is this another nightmare?

    I would say the upgrade is unlikely to fix your Plex issue, but we don't really know what the issue is since you haven't told us much about it. Sounds like you have some problem reaching the internet with your server, and that can cause certain kinds of Plex problems.

     

    Your docker image is a little smaller than I normally recommend, but it is likely OK that way if you don't run many dockers.

  6. @aneelley

    21 hours ago, aneelley said:

    I am wondering what happens when it fills up.

    Thought I should come back to this point. You must try to avoid filling any disk.

     

    Each user share has a Minimum Free setting.

     

    When Unraid chooses a disk to write a file, it has no way of knowing how large the file will become. If a disk has more than Minimum Free, Unraid can choose the disk. If the disk has less than Minimum Free, it will choose another disk.

     

    The general recommendation is to set Minimum Free to larger than the largest file you expect to write to the User Share.

     

    Cache also has a Minimum Free, in Global Share Settings. It works in a similar manner. If cache has less than Minimum, Unraid will choose an array disk instead (overflow), provided that the User Share being written to is cache-prefer or cache-yes.

     

    Note that in any case, the choosing is done before writing begins, and once a disk is chosen, Unraid will attempt to write the entire file to that disk. If it doesn't fit, the disk runs out of space and the write fails.

     

    To give an example, Minimum is set to 10G, the disk has 11G free, you write a 9G file. Unraid can choose the disk. It may choose another depending on other factors (Allocation Method, Split Level), but if it does choose the disk, it will write the 9G file to the disk. After that, the disk will only have 2G free, which is below Minimum, so Unraid won't choose the disk again until it has more than Minimum.

     

    Another example, Minimum is 10G, disk has 15G free, you write a 20G file. Unraid can choose the disk since it has more than Minimum. If it does choose the disk, it will write the file until the disk is completely full, then the write will fail.
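
    Both examples come down to the same comparison, made once before the write begins. A tiny shell sketch of that rule (sizes in GB, values made up for illustration):

    min_free=10     # the share's Minimum Free setting
    free_space=15   # free space on the candidate disk
    # Unraid does NOT know the size of the incoming file at this point
    if (( free_space > min_free )); then
        echo "eligible: Unraid may choose this disk"
    else
        echo "below Minimum Free: Unraid looks at another disk"
    fi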

  7. 1 minute ago, dsimmond1225 said:

    Any suggestions to get back up and running would be helpful

    Go to Tools > Diagnostics and attach the complete Diagnostics zip file to your NEXT post.

     

    I may move your post and any responses to their own thread, since you may have a problem unrelated to the upgrade.

  8. Parity isn't required (and a drive assigned to parity cannot contain data).

     

    Since you plan to retire the Synology and reuse its disks, I have to mention this. You must always have another copy of anything important and irreplaceable. Even with parity, you need backups.

  9. On 4/17/2020 at 9:08 AM, alexdodd said:

    I've checked here and the linuxserver.io docs, and I can't see exactly what the media path is for.  I don't plan on a reverse proxy to it anytime soon, and i just want it to host music and playlists. 

    From those same docs:

    Quote

    docker create \
      --name=airsonic \
      -e PUID=1000 \
      -e PGID=1000 \
      -e TZ=Europe/London \
      -e CONTEXT_PATH=<URL_BASE> `#optional` \
      -e JAVA_OPTS=<options> `#optional` \
      -p 4040:4040 \
      -v </path/to/config>:/config \
      -v </path/to/music>:/music \
      -v </path/to/playlists>:/playlists \
      -v </path/to/podcasts>:/podcasts \
      -v </path/to/other media>:/media `#optional` \
      --device /dev/snd:/dev/snd `#optional` \
      --restart unless-stopped \
      linuxserver/airsonic

    So, as you can see, the media path is optional.
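
    For example, on Unraid the placeholders might be filled in something like this (the host paths are hypothetical, PUID 99 / PGID 100 are the usual Unraid nobody:users IDs, and the optional lines, /media included, are simply left out):

    docker create \
      --name=airsonic \
      -e PUID=99 \
      -e PGID=100 \
      -e TZ=Europe/London \
      -p 4040:4040 \
      -v /mnt/user/appdata/airsonic:/config \
      -v /mnt/user/music:/music \
      -v /mnt/user/playlists:/playlists \
      -v /mnt/user/podcasts:/podcasts \
      --restart unless-stopped \
      linuxserver/airsonic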

  10. 1 hour ago, jschultz070 said:

    ok thanks for that. I thought the OS would make sure that drive or path would be mounted before the docker started.

    Thanks for the clarification. 

    Further clarification. It is not about whether the drive is mounted before "the docker" starts. It is about whether or not the drive is mounted before the docker service starts. If the drive isn't mounted when the docker service starts (basically at array startup), then the slave option is needed.
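
    As a hypothetical illustration, the slave option is just a propagation flag on the volume mapping (container and image names made up):

    # ':slave' lets mount events on the host propagate into the container,
    # so a drive mounted after the docker service starts is still visible
    docker run -d --name=mycontainer \
      -v /mnt/disks/mydrive:/data:rw,slave \
      some/image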

  11. 10 minutes ago, Philaudio said:

    3 HDD 1tb for 1 HDD 4TB for the safeguard?

    Parity only has to be as large as the largest single data disk. So with 3x1TB array data disks, a 1TB parity disk provides all the protection, and a larger parity disk provides no additional protection, though it will allow you to use larger data disks in the future. For example, if you already have 4TB parity, you can add or replace disks with any disk up to 4TB in size.

     

    15 minutes ago, Philaudio said:

    make two compositions each time 3 HDD of 1TB for 1 HDD 4TB

    Unraid only allows one array, but that array can hold many more than 8 disks, with up to 2 parity disks.

     

    You can also have a cache pool with various btrfs raid configurations. People often use SSDs for the cache pool.

  12. Docker image size is holding steady. You just need to figure out the best way to configure your system to use cache.

    7 minutes ago, aneelley said:

    I stopped all containers and it is slowly going back down.  I have it running on the hour.  There were lots of downloads that nzbget was processing.

    While nothing is downloading, mover can of course get things off cache quickly enough, because nothing is being added. But running mover every hour isn't really the solution, unless you also intend to stop adding to cache every hour.

     

    Mover is intended for idle time. There is simply no way to move from the faster cache to the slower array as quickly as you can write new data to cache. Running mover at the same time you are writing to cache just makes everything slower, including mover itself since it is competing with those writes for access to the drives.

     

    A simple strategy would be to not cache any downloads or their postprocessing. Then obviously you won't fill cache.

     

    But another strategy would be to cache the downloads but send the postprocessing results directly to the array. So in the case of NZBGet, you would download the intermediate files to a cache-yes user share, but send the completed files to a cache-no user share. This also has the advantage that postprocessing reads from cache while writing to a different disk, so those reads and writes aren't competing. A sketch of this setup follows the quote below.

     

    Here is how this is described on the Paths page in NZBGet Settings:

    Quote

    InterDir

    Directory to store intermediate files.

    If this option is set (not empty) the files are downloaded into this directory first. After successful download of nzb-file (possibly after par-repair) the files are moved to destination directory (option DestDir). If download or unpack fail the files remain in intermediate directory.

    Using of intermediate directory can significantly improve unpack performance if you can put intermediate directory (option InterDir) and destination directory (option DestDir) on separate physical hard drives.

    So, in the end, the final results (video files) are already on the array where they don't need to be moved, and the intermediate results (downloads, etc.) are removed from cache.
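
    A hypothetical set of mappings and NZBGet path settings that implements this (share names made up; the downloads share would be set cache-yes, the media share cache-no):

    docker run -d --name=nzbget \
      -v /mnt/user/downloads:/downloads \
      -v /mnt/user/media:/media \
      linuxserver/nzbget

    # Then in NZBGet Settings > Paths:
    #   InterDir = /downloads/intermediate   (intermediate files land on cache)
    #   DestDir  = /media/complete           (completed files go straight to the array)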

  13. OK, I just looked at sonarr (as I said, I'm a new user). I mostly use radarr, but they are basically the same. It doesn't look like there is anywhere to set any paths, so I think it just relies on working with the other applications.

     

    If you don't get any errors, then let it work for a while and check that files are showing up in the user shares they are supposed to be in. Then get new diagnostics so we can confirm your docker image isn't growing.

     
