
trurl
Moderators
  • Posts: 44,363
  • Days Won: 137

Everything posted by trurl

  1. Are you running this on Unraid? Some of the host paths you are using don't seem to be paths that would be used on Unraid.
  2. For future reference, rebuilding to the same disk is simple:

     • Stop array
     • Unassign disk to be rebuilt
     • Start array with disk unassigned
     • Stop array
     • Reassign disk to be rebuilt
     • Start array to begin rebuild

     Not sure what happened to your filesystem. Perhaps if you had asked for help sooner someone would have had a better idea how to recover. I have seen mention that btrfs recovery tools are perhaps not as well developed as XFS's, for example. I personally don't use btrfs in the parity array, but I know @johnnie.black has worked with it.

     As mentioned, rebuild seldom fixes filesystem corruption. Since parity updates happen at a very low level, parity will usually be in sync with the low-level bits, even if those bits represent a corrupt filesystem.

     One thing you can try if you find yourself in a similar situation is to start the array with the problem disk unassigned. That will make Unraid emulate the disk from parity, and whatever the result of that emulation is will be the result of the rebuild. So if the emulated disk is still corrupt, rebuild will not help. Often in situations where people actually have a disabled disk and the emulated disk is corrupt, we will have them repair the filesystem of the emulated disk before actually doing the rebuild.

     I would have asked for diagnostics at the very beginning so I might have had a clearer idea about your configuration and situation, but it seemed like it was maybe too late for any advice I might have provided.
  3. What do you mean by "the data that is emulated"? If you have a missing or disabled data disk, and all other disks including parity are good and parity is in sync, then the data for the missing or disabled disk is emulated from the parity calculation by reading parity plus all other disks. With single parity a single disk can be emulated; with dual parity, 2 disks can be emulated. But your use of the word "emulated" seems a bit vaguer than all that.

     Each data disk in Unraid is an independent filesystem, so each individual disk can be read all by itself on any Linux, including Unraid. Unless you think there is a problem with each and every one of your disks, I don't see any point to backing up the whole array, and if there is a problem with each and every one of your disks, then backing them up may not be possible. You should have backups of anything important and irreplaceable, of course, but you indicated you did.

     Really, everything you say you did seemed to be wrong or pointless. For example, as mentioned, when rebuilding a disk, the entire disk is going to be overwritten anyway, so preclearing and/or formatting the disk is completely pointless. Maybe rebuilding was pointless too: if you had a corrupt filesystem, then rebuilding isn't going to help.
  4. First of all, plex and krusader are completely separate containers, completely isolated from each other. There is no reason for the paths in one to be similar to the paths in the other unless you have specifically made it that way. Your screenshots both seem to be about krusader, but you are asking a question about accessing files in plex. Totally unrelated.

     Here is the best way to clear this up so we at least have a complete idea of how you have plex configured and can give you the correct advice. Post the docker run command you have for plex as explained at this very first link in the Docker FAQ:
  5. https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui
  6. And another thing I always say: Each additional drive is an additional point of failure.
  7. Use the command diagnostics and hopefully the diagnostics will get written to flash. Then you can shut down, put the flash drive in your PC, and get those diagnostics to post on the forum. Do you have a recent backup of your flash drive?
  8. When transferring files directly to specific disks, they still end up in user shares. The user shares are simply the aggregate of the top level folders on cache and array. The top level folder name is the same as the share name. So if you have a Media share, for example, you could have several disks each with a top level folder named Media, and all the files in all those Media folders are part of the Media share. Doesn't matter whether you transferred those files to the Media share, or you transferred them to the top level folder named Media on a particular disk, still part of the Media share.
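     The aggregation described above can be sketched with ordinary directories. This is a minimal simulation: temporary folders stand in for the real /mnt/disk1, /mnt/disk2, and /mnt/user mounts, and the file names are made up.

     ```shell
     # Simulate two array disks, each with a top-level "Media" folder
     mkdir -p /tmp/demo/disk1/Media /tmp/demo/disk2/Media
     touch /tmp/demo/disk1/Media/movie1.mkv
     touch /tmp/demo/disk2/Media/movie2.mkv

     # The "Media" user share is simply the union of every disk's Media
     # folder; /mnt/user/Media would show both files, no matter which
     # disk each one actually lives on:
     find /tmp/demo/disk*/Media -name '*.mkv' -printf '%f\n' | sort
     ```

     The same applies in reverse: writing a file into a disk's top-level Media folder makes it appear in the Media user share immediately.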
  9. Minimum Free is important regardless of what Allocation Method you use. Its importance is just more obvious when using Fill-Up. Unraid has no way to know how large a file will become when it chooses a disk for it. You should set Minimum Free to larger than the largest file you expect to write to the share. If a disk has less than minimum free, Unraid will choose another disk when choosing a disk for a new file.

     But this isn't really what I was talking about with Krusader, rsync, etc. In those cases, the destination folders for the files are created in advance, and then the files are written to those folders. This is really about the way those applications work. As far as Krusader, etc. knows, the destination user share is all one space, and it will try to put everything on that space, starting with precreating the destination folders, and won't worry about running out of space until it does. But since those precreated empty folders don't take much space, they all get created on one disk.

     Here is the way I handled this when I was using rsync to load my backup server. I saw that the folders had already been created all on one disk, and I knew all the files wouldn't fit on that one disk. I also saw that it was filling those folders in alphabetical order. I just started at the other end of the alphabet and began moving empty folders to other disks so that the files for those folders got written to those other disks.

     Another way would be to transfer the data in smaller amounts, so that on later transfers the folders would get created on other disks. Or you could just transfer directly to specific disks, but you would still have to be aware of how much a single disk could hold and not start a transfer that would overfill a disk.
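     A rough sketch of that folder-shuffling trick, using temporary directories in place of the real disk mounts (the share name and the folder names here are invented for illustration):

     ```shell
     # disk1 is where rsync pre-created every destination folder
     mkdir -p /tmp/demo2/disk1/Backup/Alpha \
              /tmp/demo2/disk1/Backup/Mike \
              /tmp/demo2/disk1/Backup/Zulu
     mkdir -p /tmp/demo2/disk2/Backup

     # Files are filled in alphabetically, so move still-empty folders
     # from the far end of the alphabet to another disk; the files for
     # those folders will then be written to that disk instead.
     mv /tmp/demo2/disk1/Backup/Zulu /tmp/demo2/disk2/Backup/

     ls /tmp/demo2/disk1/Backup   # Alpha and Mike remain on disk1
     ls /tmp/demo2/disk2/Backup   # Zulu is now on disk2
     ```

     Because both top-level Backup folders belong to the same user share, the share still presents everything as one tree after the move.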
  10. Wish you had asked for advice earlier; probably would have been better if you had asked before doing anything at all. It's not entirely clear from your description what you had or what you did.

      A mistake some make is formatting a disk in the array, then expecting parity to rebuild the data that was there before the format. Since parity is updated by the format (and any other write operation in the array), rebuild will just result in a formatted disk. But that isn't what you say you did.

      Formatting or clearing a disk that is not in the array will not affect parity, but it is pointless since rebuild is just going to completely overwrite it anyway. Rebuilding in that situation would simply result in the same data that was on the disk before. Probably you already had a corrupt filesystem, and so a corrupt filesystem is the expected result of the rebuild. On the other hand, you don't explicitly mention rebuilding anything (except in the title).

      Did you ever do New Config during any of this? I don't really have anything else to add unless you have something more to add. Maybe UFS Explorer will help; I've never used it.
  11. There are many using Unraid with Ryzen. Don't know if anything here will help you or not. Trying other ports or cables might help.
  12. Krusader, rsync, and many other things begin the transfer by creating the empty destination folders first. Those will likely all go to one disk since empty folders don't take much space. Then the actual files are copied to those already-created folders, with the result that they all try to go to the disk the empty folders were created on.

      Also, this is not correct: filling one disk before moving to the next is the Fill-Up allocation method. The default (for good reasons) High-Water allocation method is more complicated. It is a compromise between spreading the files to other disks "eventually" while not constantly switching between disks as might happen with other allocation methods. All of this is explained in the already linked wiki:
  13. Maybe try specifying the full path to the plugin command, which you can get with the command: which plugin
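      For example (demonstrated here with ls, since the exact location of the plugin command can vary; on Unraid it is commonly under /usr/local/sbin, but verify with which on your own system):

      ```shell
      # Resolve the full path of a command, then invoke it by that path.
      # In a cron job or user script, PATH may not include the directory
      # that holds "plugin", so calling it by bare name can fail.
      cmd=$(which ls)
      echo "Full path: $cmd"
      "$cmd" / > /dev/null && echo "invoked by full path: ok"
      ```

      Once you know the full path, hard-code it in the script instead of relying on PATH.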
  14. Can you ping anything on the internet from your server command line?
  15. This is what I have in both of my servers, the USB2 version. USB2 is all that's needed in this application, USB2 ports work more reliably in this application, and USB3 in this small form factor is likely to overheat in my experience.
  16. Instead of syslog, please post diagnostics. Diagnostics includes syslog, SMART for all attached disks, and many other things that give a more complete understanding of your configuration and situation. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  17. I don't use this but I just loaded the template and it looks like it is supposed to run in bridge mode. And of course you can't run multiple things on the same port so you will have to remap one or the other.
  18. /mnt/cache/system is part of /mnt/user/system. The user shares are simply the aggregate of the top level folders on cache and array. The top level folder on cache named system is part of the user share named system. And it doesn't look like you have any files on any shares.

      Can your server reach the internet?

      Jul 31 00:42:18 Shark-Dive shfs: error: get_limetech_time, 254: Connection timed out (110): -2 (7)
      Jul 31 00:42:18 Shark-Dive shfs: error: main, 3391: Connection timed out (110): no devices

      You can't start with a trial key without "phone home".
  19. Changed Status to Closed. Changed Priority to Other.
  20. If system was already on cache just changing the settings won't move it. You will have to go to Settings - Dockers and disable them so you can move system yourself. If you want more advice post your Diagnostics. This thread will probably get moved to General Support since it doesn't seem to be a bug.
  21. If system was already on cache just changing the settings won't move it. You will have to go to Settings - Dockers and disable them so you can move system yourself. If you want more advice post your Diagnostics. This thread has been copied to General Support since it doesn't seem to be a bug.
  22. The main thing about moving those to cache: you can't move open files. You have to disable Dockers and VMs (the services, not just the individual dockers/VMs); then you can move them, or mover will move them to cache for you since those shares are cache-prefer. When you get ready to add cache, post new diagnostics and we can work through all that.
  23. Those diagnostics are without the array started so there are some things they don't tell us. What I can see is that you have already enabled dockers and VMs, and you don't have cache installed yet. Do you plan to have cache? Dockers and VMs will not perform as well if you have them on the parity array, and they will keep array disks spinning since they will keep some files open. Since you have already enabled dockers and VMs, they already have created some files on the parity array and it will take a little work to get them on cache if that is what you intend.