Posts posted by JonathanM

  1. 12 minutes ago, tunetyme said:

    Those of us who are old school think more along the lines of a parity bit used in each byte (9th bit) or data communications.

    That's still the case, but instead of calculating a parity bit for a serial sequence of 8 bits along the drive, think instead of XORing together the bits at a specific address on all the drives (even parity, the same math as adding up the bits and keeping the low bit), and putting the resulting parity bit at that same address on the parity drive. That way you can always recreate any single bit on a specific drive by using all the other drives plus the parity drive.

     

    The math is still the same as it ever was for parity calculation, just with an arbitrary number of drives in a column instead of a specific number of bits in a row.
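
    To make that concrete, here's a minimal bash sketch of the idea (the byte values are made up, and real parity works bit for bit across entire drives):

    d1=0x5A; d2=0x3C; d3=0xF0   # the byte at one address on three data drives

    # Parity is the XOR (even parity) of every data drive at that address.
    parity=$(( d1 ^ d2 ^ d3 ))

    # If drive 2 dies, XOR the survivors with parity to rebuild its byte.
    rebuilt=$(( d1 ^ d3 ^ parity ))

    printf 'parity=0x%02X rebuilt=0x%02X\n' "$parity" "$rebuilt"   # prints rebuilt=0x3C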

  2. 1 minute ago, tunetyme said:

    I did not know about the legacy stuff on /mnt/diskx vs /mnt/user locations.  Is there something in the WIKI?? Is there any other legacy issues I am unaware of since I have jumped from 4.7 to 6.3.2?

    The preference to use /mnt/user (shares only) instead of /mnt/diskX (individual disks) is an ongoing thing since the "user share copy bug" became a better known issue. If you know how unraid generates user shares and understand why copying from /mnt/diskX/share to /mnt/user/share can cause data loss, it's easy to avoid. For less savvy users, it's easier just to tell them to ignore the individual disks and let unraid work its magic with user shares.
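
    To illustrate (the share and file names here are made up):

    # DANGEROUS: /mnt/user/Movies is a FUSE view that can resolve to the very
    # file being read, so this "copy" can truncate film.mkv to nothing.
    cp /mnt/disk1/Movies/film.mkv /mnt/user/Movies/

    # Safe: stay on one side of the fence.
    cp /mnt/user/OldShare/film.mkv /mnt/user/Movies/   # user share to user share
    cp /mnt/disk1/Movies/film.mkv /mnt/disk2/Movies/   # disk to disk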

     

    As far as what you may have missed in the time between 4.7 and 6.3.2, there is no way for me to know what you have and haven't learned so far. So much has changed with the addition of notifications, dockers, and VMs.

     

    One major point is that plugins are less and less supported for applications; pretty much, if you can do the task with a docker, you shouldn't be using a plugin.

  3. 1 hour ago, shanelovell said:

    I went low cost in a lot of my build but refused to do that with the power supply.

    This, exactly. The safety of your data and hardware depends on clean power. It makes no sense to put a marginal power supply in a rig destined to hold your life's memories.

  4. 1 hour ago, tunetyme said:

    My 2 cents is that there must be a way to streamline this process.

    If you don't care which numbered slot your data (shares) are on, you can forgo the new config and just round robin from largest to smallest drive. Start by copying and verifying the content of your largest drive to any of the other array drives by whatever method you want (one rsync-based sketch is below). When you are sure all the data on the largest drive is duplicated properly, stop the array, change the drive format from RFS to XFS, then start the array and format the drive. From then on it's just a matter of copying all the data from the next largest RFS drive to the newly emptied XFS drive, lather rinse repeat until done. No need for new configs, preclearing, anything. If you feel a need to change slot numbers after you are done, you can do a new config once and put the drives where you want them.
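
    For the copy-and-verify step, rsync is one common choice (the disk numbers here are examples, adjust to your layout):

    # Copy everything from the full source disk to the empty target disk.
    rsync -avX /mnt/disk1/ /mnt/disk2/

    # Re-run with checksums in dry-run mode; any file listed differs and
    # should be recopied. An empty list means the copy verified clean.
    rsync -avXc --dry-run /mnt/disk1/ /mnt/disk2/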

     

    If you have defined drive slot inclusions or exclusions for your shares, those will need to be updated after you are done, as well as any legacy stuff that references /mnt/diskX instead of properly referring to /mnt/user locations.

     

    The only reason the long convoluted process is there is to preserve the disk slot number allocation for that specific data.

     

    Note: I do NOT recommend moving the data from the RFS to the XFS disk, because a move involves a delete cycle after the copy. It will take WAY longer to do it that way, as RFS can take AGES to delete data. Much faster to simply copy the data, then format the RFS disk when you are sure the copy is complete (verification recommended).

  5. This specific plugin does not display the grey/green "Done" button at the bottom of the popup window after it updates. Doesn't seem to affect anything, and all my other plugin updates have the done button, so maybe it's a small syntax thing? I don't think it's my machine, it does it on 2 different servers.

  6. 24 minutes ago, 1812 said:

    When your cache drive fills up from moving files to the server or downloading, it pauses your VM's.

    Respectfully, that's what the min free space setting is for. As long as you use a share that is set to cache:prefer for stuff that you want to live on the cache drive and cache:yes for stuff that will be moved to the array, and have a proper setting for min free space, you will never run into this specific issue.

  7. 26 minutes ago, brisimmons105 said:

    Thanks for the clarification!  I have seen the light. 

    Congrats!

    Now you can simplify all your docker mappings. For example, downloaders like nzbget or deluge only need access to their configs in appdata and your download location, wherever that is. Fetchers and categorizers like sonarr and radarr only need access to their configs, the download location, and their destination folder trees. Player apps like emby and plex only need their configs and media destinations. The simpler you can make it, the easier it is to maintain and troubleshoot. Keep everything referencing /mnt/user locations if possible; never reference specific disks like /mnt/disk2 or /mnt/cache unless the container author mandates it for some reason.
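
    As a sketch of what "simple" looks like (the image names and host paths are examples, not recommendations of specific containers):

    # A downloader needs exactly two mappings: its config and the download area.
    docker run -d --name=nzbget \
      -v /mnt/user/appdata/nzbget:/config \
      -v /mnt/user/downloads:/downloads \
      somerepo/nzbget

    # A fetcher adds only its destination tree on top of those.
    docker run -d --name=sonarr \
      -v /mnt/user/appdata/sonarr:/config \
      -v /mnt/user/downloads:/downloads \
      -v /mnt/user/Media/TV:/tv \
      somerepo/sonarr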

  8. 6 minutes ago, danith said:

    Just a quick question.  Right now in sabnzbd I have the 'movies' category move the downloads to my movie folder.  I assume with Radarr, I was to just leave the downloaded movies in the download folder as Radarr will move it to my movies folder, right?

    I'm unclear on exactly what you are doing, but I'll make an assumption and say that yes, you probably want to let radarr do the final moving and renaming, especially since you can tell it to clean up the download and only move the files you want instead of all the scene stuff that tends to come along for the ride.

     

    Personally, my nzb/torrent apps don't have access to the final destination path; they only drop off the bundle for further processing by sonarr/radarr/cp/etc.

  9. 34 minutes ago, brisimmons105 said:

     

    Thanks jonathanm.  So forgive me if I'm missing something, but Host Path 2 for radarr is "container path: /downloads" and that's where it should look for the completed downloads. In the sonarr config (according to the config overview), the "/data" folder is where it should look for the downloads folder.  Both the "container path: /downloads" and the "/data" folders are pointing to the same path "/mnt/cache/Media/"

    Ok, so you see what I am talking about though, right? /downloads != /data. You must match BOTH the host and container paths in the two containers, so they are pointing to the same location both inside the container and outside.

     

    I assumed since you had /downloads mapped to MediaDL, that was where you wanted the downloads to live.

     

    For your data download location, pick a container path and an array location for your downloads, and use the same path for both for all your media containers. /downloads or /data or whatever, doesn't matter, just pick one and use it for everything. Don't map folders you don't need, and don't map subfolders for different apps. If you want to specify subfolders, do it inside the app itself, referencing the common path. For example, if you decide to map /downloads to /mnt/user/appdata/downloads (what I use) then you can have the nzb downloader put stuff in /downloads/nzb/incomplete and move it to /downloads/nzb/complete/tv, which shows up in the array as /mnt/user/appdata/downloads/nzb/complete/tv, but to all your media apps it's still /downloads/nzb/complete/tv.
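
    You can prove that equivalence to yourself with a throwaway container (alpine is just a convenient small image):

    docker run --rm -v /mnt/user/appdata/downloads:/downloads \
      alpine ls /downloads/nzb/complete/tv

    # ...lists exactly the same files you'd see on the server at
    # /mnt/user/appdata/downloads/nzb/complete/tv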

     

    Container mapping is one of those things that is extremely difficult to wrap your head around until suddenly it clicks and then it's easy.

  10. 12 minutes ago, brisimmons105 said:

    What’s funny is that Sonarr is setup the exact same way and it works fine.

    No it's not. Edit host path 2 for radarr and change the host path from cache to user, and the container path to /MediaDL instead of /downloads.

    Then it will be set up the exact same way.

  11. 14 minutes ago, BobPhoenix said:

    Not able to read CD drives D and E currently.

    Which ATAPI driver are you using? I vaguely remember about 10 different combinations of files that were used in config.sys and autoexec.bat to properly get a specific brand of CD drive to operate. Towards the end, when win95 was current, there was a "universal" driver released that worked with pretty much any hardware.
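
    For the curious, a typical pairing looked something like this (OAKCDROM.SYS being the oft-cited Oak Technology "universal" driver from the win9x boot disks; the paths and device name are examples):

    REM --- config.sys: load the ATAPI CD-ROM device driver ---
    DEVICE=C:\DOS\OAKCDROM.SYS /D:MSCD001

    REM --- autoexec.bat: MSCDEX binds that device name to a drive letter ---
    C:\DOS\MSCDEX.EXE /D:MSCD001 /L:D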

  12. 3 hours ago, itsrumsey said:

    Tried both of these to no avail as well.

    Two options: wait until tomorrow (as long as your server is left on overnight it will be updated), or run just the

    docker exec EmbyServer update

    part of the command at the console prompt if you don't want to wait.

  13. 1 hour ago, pro9c3 said:

    So for about a week, I had a SSD that I use for my deluge downloads inside my array. Always wondering why it was so damn slow.

    If it was part of the parity protected array, it was limited by the speed of the parity drive, and couldn't be properly trimmed. The normal solution to this is to assign it to the cache slot; that way you maintain full share control and participation. Using UD (Unassigned Devices) to mount it will work to some extent, but that's not really the primary reason UD was developed.

  14. 51 minutes ago, nodeadlykittens said:
    1 hour ago, binhex said:

    have you set the folders correctly to reference /data in the deluge web ui?

    I have not, I wanted to keep it neat and keep it under my "download" folder.

    I take that to mean you removed the default mapping of /data, and added one for /download on the "app" side of the docker template volume mappings?

  15. 54 minutes ago, johnnie.black said:

    even with multiple cores hashing multiple files on the same disk concurrently will always be slower than one at a time.

    Depends on the hashing algorithm. If disk I/O and access time are only 1/20 of the total time required to hash and record the results, it would be faster to assign multiple cores to the same disk. It's all about processor math vs I/O speed.

     

    Granted, the current profile of hashing algorithms, disk I/O, and processor speed is suited to 1 disk per thread, but that's not an absolute.
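
    If you want to see where your own hardware falls, a quick test (GNU find/xargs assumed, /mnt/disk1 is an example path):

    cd /mnt/disk1
    # One hashing worker, then two concurrent workers against the same disk.
    time (find . -type f -print0 | xargs -0 -n1 -P1 md5sum > /dev/null)
    time (find . -type f -print0 | xargs -0 -n1 -P2 md5sum > /dev/null)
    # If the second run is faster, you're CPU-bound; slower means I/O-bound.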

  16. Just now, azteca25 said:

     

    Will the UnRaid OS staff update this thread with their findings as they do their testing?

     

    Possibly, but keep in mind there have been several reports that Ryzen support in linux is much better in a later kernel than what is currently in unraid. With that in mind, I wouldn't expect much of a positive result until unraid's next update that includes the new kernel. I'm sure limetech has internal builds that they are testing, but don't expect to hear about them.

     

    Realistically, I'd say wait for the next round of public unraid betas before even contemplating a ryzen build unless you are a willing guinea pig.

  17. 7 hours ago, johnvid said:

    Situation:

    When in a power failure and the UPS is also empty, everything goes down. But when the power comes back 2 machines are set to start automatically.

    Set unraid to not start automatically. That way you can confirm everything is ok before you start all the services. It's a very bad idea to run the UPS to empty and immediately start everything back up when power returns. Many times the power will go back out again for a little bit after the first time it comes up, and if the UPS batteries are already drained, there will be no way for it to successfully shut down again. Plus, it's very hard on the UPS batteries to fully drain them.

     

    There are only 2 likely scenarios I can think of: either the power goes out so rarely that manually bringing your servers back online is hardly ever an inconvenience, or the power goes out so frequently that you really need to rethink the use of just a battery backup, possibly an inverter with a bank of batteries to allow you to ride the outages out.

     

    Either way, automatic restart after a power outage is a bad idea; you need to evaluate each individual situation before you get everything running again.
