Everything posted by JonathanM

  1. Normally you just let the requesting app pick it up and do the move. Are you manually sending nzb's to it and wanting them to be categorized?
  2. I just installed it myself quite easily, though I don't yet know whether it will make a difference with PIA. I don't think you can install it through the web GUI, but with the remote GTKUI client I installed the Python 2.7 version and it seems to work.
  3. Open the GUI and click on the Shares tab. In the user share section there should be a disk3 share. Click the corresponding "compute" link in the size column. It will take a few seconds, and should return a list of the physical disks that the share resides on. Once you have that information, open mc and navigate the left panel to the /mnt/diskX that corresponds to the physical disk shown by the compute command, so that the contents of the disk3 folder are showing, and navigate the right panel to the same /mnt/diskX, but only to the level where the disk3 folder itself is shown. Using the tab key, the arrow keys and the insert key, highlight the contents of the disk3 folder in the left panel, and press F6 to move the highlighted folders to /mnt/diskX in the right panel (a rough command-line equivalent is sketched at the end of this list). After this is complete, you should be able to go back to the web GUI, click on the disk3 share in the user share list, and delete it on the share properties page.
  4. No, I'm saying the location of the individual files doesn't matter. Parity doesn't track changes per file, only per disk address. Parity has no concept of files or formats.
  5. That's still the case, but instead of calculating a parity bit for a serial sequence of 8 bits along the drive, think of adding up all of the bits at a specific address on all the drives and putting the resulting parity bit at that same address on the parity drive. That way you can always recreate any single bit on a specific drive by using all the other drives plus the parity drive. The math is still the same as it ever was for parity calculation, just with an arbitrary number of drives in a column instead of a specific number of bits in a row (there's a small worked example at the end of this list).
  6. The preference to use /mnt/user (shares only) instead of /mnt/diskX (individual disks) has been an ongoing thing since the "user share copy bug" became a better-known issue. If you know how Unraid generates user shares and understand why copying from /mnt/diskX/share to /mnt/user/share can cause data loss, it's easy to avoid. For less savvy users, it's easier just to tell them to ignore the individual disks and let Unraid work its magic with user shares. As far as what you may have missed in the time between 4.7 and 6.3.2, there is no way for me to know what you have and haven't learned so far. So much has changed with the addition of notifications, dockers and VMs. One major point is that plugins are less and less supported for applications; pretty much, if you can do the task with a docker, you shouldn't be using a plugin.
  7. This, exactly. The safety of your data and hardware depends on clean power. It makes no sense to put a marginal power supply in a rig destined to hold your life's memories.
  8. If you don't care which numbered slot your data (shares) ends up on, you can forgo the new config and just round-robin from largest to smallest drive. Start by copying and verifying the contents of your largest drive to any of the other array drives by whatever method you want. When you are sure all the data on the largest drive is duplicated properly, stop the array, change the drive's file system from RFS to XFS, then start the array and format the drive. From then on it's just a matter of copying all the data from the next largest RFS drive to the newly emptied XFS drive; lather, rinse, repeat until done. No need for new configs, preclearing, anything. If you feel a need to change slot numbers after you are done, you can do a new config once and put the drives where you want them. If you have defined drive slot inclusions or exclusions for your shares, those will need to be updated after you are done, as well as any legacy stuff that references /mnt/diskX instead of properly referring to /mnt/user locations. The only reason the long, convoluted process exists is to preserve the disk slot number allocation for that specific data. Note: I do NOT recommend moving the data from the RFS to the XFS disk, because a move involves a delete cycle after the copy. It will take WAY longer that way, as RFS can take AGES to delete data. Much faster to simply copy the data, then format the RFS disk when you are sure the copy is complete (verification recommended; a small checksum sketch is included at the end of this list).
  9. This specific plugin does not display the grey/green "Done" button at the bottom of the popup window after it updates. It doesn't seem to affect anything, and all my other plugin updates have the Done button, so maybe it's a small syntax thing? I don't think it's my machine; it does it on 2 different servers.
  10. Respectfully, that's what the min free space setting is for. As long as you use a share that is set to cache:prefer for stuff that you want to live on the cache drive and cache:yes for stuff that will be moved to the array, and have a proper setting for min free space, you will never run into this specific issue.
  11. Oh, that makes sense. I (wrongly) assumed you were trying to fill a need vs. just educating yourself. Learning new skills is always good, go forth and conquer!
  12. Congrats! Now you can simplify all your docker mappings. For example, downloaders like NZBGet or Deluge only need access to their configs in appdata and your download location, wherever that is; fetchers and categorizers like Sonarr and Radarr only need access to their configs, the download location, and their destination folder trees; player apps like Emby and Plex only need their configs and media destinations (a sketch of this layout is included at the end of this list). The simpler you can make it, the easier it is to maintain and troubleshoot. Keep everything referencing /mnt/user locations if possible, and never reference specific disks like /mnt/disk2 or /mnt/cache unless the container author mandates it for some reason.
  13. I'm unclear on exactly what you are doing, but I'll make an assumption and say that yes, you probably want to let radarr do the final moving and renaming, especially since you can tell it to clean up the download and only move the files you want instead of all the scene stuff that tends to come along for the ride. Personally, my nzb/torrent apps don't have access to the final destination path, they only drop off the bundle for further processing by sonarr/radarr/cp/etc.
  14. Ok, so you see what I am talking about though, right? /downloads != /data. You must match BOTH the host and container paths in the two containers, so they are pointing to the same file both inside the container and outside. I assumed since you had /downloads mapped to MediaDL, that was where you wanted the downloads to live. For your download location, pick a container path and an array location, and use the same pair for all your media containers. Either /downloads or /data or whatever, it doesn't matter; just pick one and use it for everything. Don't map folders you don't need, and don't map subfolders for different apps. If you want to specify subfolders, do it inside the app itself, referencing the common path. For example, if you decide to map /downloads to /mnt/user/appdata/downloads (what I use), then you can have the nzb downloader put stuff in /downloads/nzb/incomplete and move it to /downloads/nzb/complete/tv, which shows up in the array as /mnt/user/appdata/downloads/nzb/complete/tv, but to all your media apps it's still /downloads/nzb/complete/tv (a short sketch of this path translation is included at the end of this list). Container mapping is one of those things that is extremely difficult to wrap your head around until suddenly it clicks, and then it's easy.
  15. No it's not. Edit host path 2 for Radarr, change the host path from cache to user, and change the container path to /MediaDL instead of /downloads. Then it will be set up the exact same way.
  16. Which ATAPI driver are you using? I vaguely remember about 10 different combinations of files that were used in config.sys and autoexec.bat to properly get a specific brand of CD drive to operate. Towards the end, when Win95 was current, there was a "universal" driver released that worked with pretty much any hardware.
  17. Two options: wait until tomorrow (as long as your server is left on overnight, it will be updated), or run just the docker exec EmbyServer update part of the command at the console prompt if you don't want to wait.
  18. https://forums.lime-technology.com/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480421
  19. If it was part of the parity-protected array, it was limited by the speed of the parity drive, and couldn't be properly trimmed. The normal solution to this is to assign it to the cache slot; that way you maintain full share control and participation. Using UD to mount it will work to some extent, but that's not really the primary reason UD was developed.
  20. I have not, I wanted to keep it neat and keep it under my "download" folder. I take that to mean you removed the default mapping of /data, and added one for /download on the "app" side of the docker template volume mappings?
  21. Boot in safe mode, assign the disks, start the array, stop the array, reboot, see what happens.
  22. Depends on the hashing algorithm. If disk I/O and access time is only 1/20 of the total time required to hash and record the results, it would be faster to assign multiple cores to the same disk. It's all about processor math vs. I/O speed. Granted, the current profile of hashing algorithms, disk I/O, and processor speed is suited to 1 disk per thread, but that's not an absolute (see the rough numbers immediately below).
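
A rough command-line equivalent of the mc steps in post 3, as a minimal Python sketch. It assumes the "compute" link reported that the stray disk3 folder lives on /mnt/disk1; that disk number and the collision handling are placeholders to adjust for your own system.

```python
# Minimal sketch of the mc move from post 3 (assumed paths, not from the post).
import os
import shutil

SOURCE_DISK = "/mnt/disk1"                    # placeholder: disk reported by "compute"
NESTED = os.path.join(SOURCE_DISK, "disk3")   # the accidental disk3 folder

for name in os.listdir(NESTED):
    src = os.path.join(NESTED, name)
    dst = os.path.join(SOURCE_DISK, name)
    if os.path.exists(dst):
        # Don't clobber anything that already exists at the top level;
        # resolve collisions by hand instead.
        print(f"Skipping {src}: {dst} already exists")
        continue
    shutil.move(src, dst)                     # same effect as highlighting + F6 in mc

# Remove the now-empty folder, then delete the empty share in the web GUI.
if not os.listdir(NESTED):
    os.rmdir(NESTED)
```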
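A toy illustration of the per-address parity described in post 5. The drive contents are made-up bytes, and XOR plays the role of the bit "adding up"; nothing here reflects Unraid's actual on-disk layout.

```python
# Three data drives, represented as equal-length byte strings (made-up data).
drives = [
    bytes([0b10110010, 0b00001111]),   # data drive 1
    bytes([0b01100101, 0b11110000]),   # data drive 2
    bytes([0b11111111, 0b10101010]),   # data drive 3
]

# Parity drive: XOR of every drive's byte at the same address (the "column").
parity = bytes(d1 ^ d2 ^ d3 for d1, d2, d3 in zip(*drives))

# Simulate losing drive 2: XOR the surviving drives with parity at each
# address and the missing bytes fall back out.
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(drives[0], drives[2], parity))
assert rebuilt == drives[1]
print("drive 2 rebuilt from the other drives plus parity")
```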
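A small verification sketch for the copy step in post 8: after copying an RFS disk's contents to the emptied XFS disk, compare the two trees by checksum before formatting the old disk. The disk numbers are placeholders, and this is only one of many ways to verify a copy.

```python
import hashlib
import os

OLD = "/mnt/disk2"   # placeholder: the RFS source disk
NEW = "/mnt/disk5"   # placeholder: the XFS disk the data was copied to

def sha256_of(path, chunk=1 << 20):
    """Hash a file in 1 MiB chunks so large media files don't exhaust RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

mismatches = []
for root, _dirs, files in os.walk(OLD):
    for name in files:
        src = os.path.join(root, name)
        dst = os.path.join(NEW, os.path.relpath(src, OLD))
        if not os.path.exists(dst) or sha256_of(src) != sha256_of(dst):
            mismatches.append(src)

print("OK to format the old disk" if not mismatches else mismatches)
```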
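A sketch of the minimal-mapping idea from post 12. The container names and host paths below are examples only; the point is that each container sees nothing beyond its own config plus the shared locations, all referenced through /mnt/user.

```python
# Example volume mappings, host path -> container path (illustrative values).
mappings = {
    "nzbget": {                              # downloader: config + download area only
        "/mnt/user/appdata/nzbget": "/config",
        "/mnt/user/downloads": "/downloads",
    },
    "radarr": {                              # fetcher: config + downloads + media tree
        "/mnt/user/appdata/radarr": "/config",
        "/mnt/user/downloads": "/downloads",
        "/mnt/user/media/movies": "/movies",
    },
    "plex": {                                # player: config + media only
        "/mnt/user/appdata/plex": "/config",
        "/mnt/user/media": "/media",
    },
}

# Roughly what the equivalent "docker run -v host:container" arguments look like.
for name, vols in mappings.items():
    args = " ".join(f"-v {host}:{container}" for host, container in vols.items())
    print(f"{name}: {args}")
```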
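A short sketch of the path translation described in post 14, using the /downloads to /mnt/user/appdata/downloads mapping mentioned there. The helper function is purely illustrative; the mapping itself is what the container runtime does for you.

```python
# One mapping shared by every media container (from post 14's example).
MAPPING = {"/downloads": "/mnt/user/appdata/downloads"}

def host_path(container_path, mapping=MAPPING):
    """Translate a path as seen inside a container to the path on the array."""
    for inside, outside in mapping.items():
        if container_path == inside or container_path.startswith(inside + "/"):
            return outside + container_path[len(inside):]
    raise ValueError(f"{container_path} is not under a mapped folder")

# The downloader drops a finished job here (inside its container)...
done = "/downloads/nzb/complete/tv"
# ...which lives here on the array...
print(host_path(done))   # /mnt/user/appdata/downloads/nzb/complete/tv
# ...and any other container with the same /downloads mapping sees the very
# same /downloads/nzb/complete/tv, so nothing needs to be copied around.
```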
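Rough numbers for post 22. The timings are invented purely to show the arithmetic of hash time versus read time; they don't come from any real benchmark.

```python
import math

read_time_per_gb = 1.0    # seconds the disk needs to deliver 1 GB (invented)
hash_time_per_gb = 20.0   # seconds one core needs to hash that 1 GB (invented)

# One thread per disk: the disk mostly sits idle while the core hashes.
single_thread_rate = 1.0 / (read_time_per_gb + hash_time_per_gb)   # GB/s

# With enough threads, hashing overlaps the next read and the disk
# becomes the bottleneck instead of the processor.
threads_needed = math.ceil(hash_time_per_gb / read_time_per_gb)
parallel_rate = 1.0 / read_time_per_gb                              # GB/s

print(f"{single_thread_rate:.3f} GB/s with one hashing thread per disk")
print(f"{parallel_rate:.3f} GB/s once about {threads_needed} threads hash in parallel")
```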