Everything posted by trurl

  1. You have completely filled cache. The shares you want to stay on cache (appdata, domains, system) are set to cache-yes, which tells mover to move them to the array. That's probably just as well for now, and we can work on getting them moved back to cache after there is room. Go to Settings - Docker and disable the Docker service, then do the same for Settings - VM Manager. The shares that have files on cache that do need to be moved to the array are currently set to cache-no, which mover ignores. Set those to cache-yes, then run mover. After it finishes, post new diagnostics.
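     A rough console sketch if you want to watch this over SSH instead of the webUI (standard Unraid mount points; mover can also be started with the Move button on the Main page):
        df -h /mnt/cache        # how full the cache pool is
        du -sh /mnt/cache/*     # which shares are using the space
        mover                   # same as clicking Move in the webUI; let it finish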
  2. Probably. Another way would be to manually delete the Data folder from disk1, then stop and start the array. Looks like Data has default settings, so I'm not sure it even has a .cfg file in config/shares on flash. You should also clean that up, probably by deleting both .cfg files if they exist. Then the only share that will still exist is the data share, and it will have default settings until you change them, with no leftover .cfg files to confuse things in the future. It does bring up the question of how these got created in the first place though. Did you create that Data share in the webUI? Or did something else create it? Using the wrong upper/lower case when specifying a path will often do this, and it is very common when someone is setting up their dockers.
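     A rough console sketch of that cleanup, assuming you have already confirmed the Data folder on disk1 contains nothing you need (double-check before deleting anything):
        ls /boot/config/shares/        # the share .cfg files live here on the flash
        rm /boot/config/shares/Data.cfg /boot/config/shares/data.cfg    # only those that exist
        rm -r /mnt/disk1/Data          # remove the stray top-level folder from disk1
     Then stop and start the array so the share list gets rebuilt.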
  3. Click on your Krusader docker icon and select Edit. You would add another path.
  4. https://forums.unraid.net/topic/89824-683-user-shares-not-showing-in-gui In particular, this post further down in the thread: https://forums.unraid.net/topic/89824-683-user-shares-not-showing-in-gui/?do=findComment&comment=833730
  5. You're having connection problems on all 3 disks. Check connections, power and SATA, both ends, including any power splitters. Was this working before? If so, did you change anything or mess with the hardware in any way?
  6. Nothing obvious in those, and they seem to say you have shares with files in them. What happens if you boot in SAFE mode?
  7. Are you sure something else doesn't have the same IP? I always use DHCP (the default) and then reserve IP by MAC address at my router, so everything is managed from one place.
  8. All top level folders on cache or any array disk automatically become user shares, whether or not you have explicitly created them. You are going to have problems with 2 top level folders that have the same name except for upper/lower case. Which should it be, Data or data? Do both have files in them? Are both on both disks?
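     A quick way to see where each spelling lives and whether both actually contain anything, assuming a console/SSH session:
        ls -d /mnt/disk*/Data /mnt/disk*/data /mnt/cache/Data /mnt/cache/data 2>/dev/null
        du -sh /mnt/disk*/Data /mnt/disk*/data 2>/dev/null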
  9. Are you trying to access it by name or by IP? Are all of these devices on the same subnet?
  10. And make sure that mapping is read/write slave.
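     In docker run terms, RW/Slave on an Unraid path mapping is the rw,slave mount-propagation option on the volume; a hypothetical example (image name and paths are just placeholders):
        docker run -v /mnt/user/data:/media:rw,slave some/image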
  11. You don't want them moved to the array. You want them to stay on cache so they will perform better and so they won't keep array disks spinning. You can have multiple disks in the cache pool for redundancy; raid1 mirror is the default. You can also back up some of the things that stay on cache with the CA Backup plugin.
      You can run totally without cache if you want, and arguably it would be simpler, but there are definite advantages to having cache as mentioned. Some people have set things up trying to cache everything they wrote to the server, because it would be faster. But if you start that way and have a large amount of data to write at the beginning, such as transferring all files from another system (initial data load), then of course you will fill cache because it won't have enough capacity, and mover will just get in the way if you try to make it run more often, because it will be competing for the same disks you are writing to. Mover is intended for idle time. There is simply no way to move from cache to the slower array as fast as you can write to the faster cache.
      All writes to any user share set to cache-prefer or cache-yes will go to cache. Mover ignores cache-no and cache-only user shares, so just setting a share to cache-no won't help get things off cache that are already there.
      If you really have some good reason to make a vdisk that large then maybe you would have to make domains not cached, but I think it is pretty rare to make a vdisk that large. Your VMs can access your Unraid storage for general file storage; the vdisk would normally just be for the VM OS.
      I think you have missed most of my point. VMs and dockers can access whatever they need to on the array or cache. It is the appdata, vdisks (domains), and images (system) that I was talking about keeping on cache, since those files will always be open regardless of what the dockers and VMs are doing.
      I think some of your slowness is likely the result of your disk1 problems, since disk1 is the default target until it gets to the highwater mark.
      I actually don't cache much, since most of the writes to my server are from scheduled backups and queued downloads, so I am not waiting for them to complete anyway. Those all go to cache-no user shares, so they are written directly to the array where they are already protected and don't need to be moved. But I still use cache, for the reasons already discussed, for these shares: appdata - docker working storage, for example the Plex database (but not the media files themselves, which are on other user shares); domains - the VM OS vdisks (for general storage the VMs just use the Unraid user shares); and system - the libvirt image and the docker image (where the container executable code lives).
      Maybe the same disk1 problems that were in your syslog when you were doing the dual parity correction. Plus the overhead of writing to the parity array, though it should probably be faster than that unless you are moving a lot of small files. What dockers were you running before? Plex appdata is notorious for having a lot of small files. You might try stopping mover: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=554749 and then fix your other problems first.
      After you get disk1 sorted out, you must run another correcting parity check. Exactly zero parity errors is the only acceptable result.
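      To see at a glance how each share is currently set, a sketch assuming a console session (shareUseCache is the setting name Unraid keeps in each share's .cfg on the flash):
         grep -H shareUseCache /boot/config/shares/*.cfg    # yes / no / prefer / only per share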
  12. The limetech dockers have been deprecated for some time now, and you shouldn't need to put any repositories in at all. Install the Community Applications plugin and use it to install anything else.
  13. No, parity doesn't know or care about xfs, it is all just bits. Possibly whatever caused the xfs corruption (bad connection, etc) was causing problems for the rebuild.
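      If you want to check the filesystem without changing anything, a sketch assuming the array is started in Maintenance mode (the Check Filesystem button on the disk's webUI page does the same thing; -n is report-only):
         xfs_repair -n /dev/md1    # disk1's md device in maintenance mode; adjust for other disks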
  14. Parity doesn't have a filesystem, so encryption doesn't apply to parity.
      The fact that Unraid IS NOT RAID isn't necessarily a problem; it is just another way of doing things, with its own tradeoffs. There is no striping in Unraid. Each data disk in the parity array is an independent filesystem which can be read all by itself on any Linux system. Folders can span disks (user shares), but each file is completely contained on a single disk, so read speed for any file is the speed of the single disk containing it.
      Unraid allows 1 or 2 parity disks, which provide redundancy to recover from 1 or 2 simultaneous disk failures. Parity is realtime, so writes to the parity array are somewhat slower than single disk speed, since parity also has to be updated. There are 2 different methods for updating parity, each with its own tradeoffs. And since each disk in the array is independent, if you ever lose more than parity can recover, you still have whatever disks can still be read. Because each data disk is independent, you can also mix different sized disks in the array, and easily replace or add disks without rebuilding the whole array.
      Unraid also provides for faster storage in the cache pool, where various btrfs raid configurations are supported. These are usually SSDs where files that need performance are stored (dockers and VMs), and where writes to the user shares are temporarily saved until moved to the slower parity array.
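      A toy illustration of the XOR idea behind single parity (not Unraid's actual code, just the arithmetic that makes a rebuild possible):
         # parity is the XOR of the same byte position on every data disk
         d1=0xA5; d2=0x3C; d3=0x0F
         p=$(( d1 ^ d2 ^ d3 ))          # what would be written to the parity disk
         rebuilt=$(( p ^ d1 ^ d3 ))     # if disk2 fails, XOR parity with the surviving disks
         printf 'parity=0x%02X rebuilt d2=0x%02X (original 0x%02X)\n' "$p" "$rebuilt" "$d2"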
  15. Is this a move or a copy? A move also writes to the source disk, since the files have to be deleted from it.
  16. Ultimately, you want the appdata, domains, and system shares on cache, and staying on cache, so your dockers' and VMs' performance won't be affected by the slower parity writes, and so they won't keep array disks spinning. You have plenty of capacity on cache for that, and for caching some user share writes if you want.
      None of your shares are prefer. Looks like the only ones that have contents on cache are set to yes, and that is good for now. Eventually you will make appdata, domains, and system prefer so they get moved to cache. Cache has plenty of space now; no telling how you were filling it before, since you have changed your settings. Were you trying to cache the initial data load? Mover is intended for idle time, so if you were trying to write to the array and move at the same time, mover would have been competing with everything else for access to the disks. Maybe you were even doing a parity check or something else that would have slowed things down.
      Enough about cache and mover for now though. Your syslog says you were trying to correct dual parity and getting read errors on disk1. Shut down, check all connections, power and SATA, both ends, including any power splitters. Then start back up and run an extended SMART test on disk1.
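      The extended test can be started from the disk's page in the webUI; if you prefer the console, a sketch (replace sdX with disk1's actual device; the test runs in the background and can take hours):
         smartctl -t long /dev/sdX    # start the extended self-test
         smartctl -a /dev/sdX         # check progress and results later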
  17. Have you seen this FAQ? https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  18. Why not just use user shares the way they are intended to be used? If you need different access for different files put them in different user shares.
  19. Or any of the 4TB disks could be parity if you don't want to use the 10TB in the array.
  20. It's not clear whether or not you understood that Unraid IS NOT RAID. Assuming you intend to have parity, the 10TB disk would have to be used for that, since parity must be at least as large as the largest data disk, but that could be done after you get the data where you want it.
  21. In case it's not obvious, stop the rebuild; it isn't working. You will have to try again after fixing the connections and posting diagnostics.
  22. Check all connections, power and SATA, both ends, including any power splitters. Then post new diagnostics.
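     Diagnostics are under Tools - Diagnostics in the webUI; if the webUI isn't reachable, they can also be collected from the console (the zip ends up in the logs folder on the flash):
        diagnostics    # writes a diagnostics zip to /boot/logs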