
trurl

Moderator · Posts: 44,362 · Days Won: 137

Everything posted by trurl

  1. That is usually how it is done, and in fact it was the reason I said this:
  2. You should post on the thread for that plugin and link to here so the plugin author can take a look.
  3. OK. Looking at your post history, I see you already did this to yourself before: Try the fix in that thread again. Then if we get user shares going we can work on your ridiculously large docker image.
  4. May 1 22:55:32 MeJoServerTv emhttpd: shcmd (94): mkdir -p /mnt/cache
     May 1 22:55:32 MeJoServerTv emhttpd: shcmd (95): mount -t xfs -o noatime,nodiratime /dev/sdb1 /mnt/cache
     May 1 22:55:32 MeJoServerTv kernel: XFS (sdb1): Mounting V5 Filesystem
     May 1 22:55:33 MeJoServerTv kernel: XFS (sdb1): Ending clean mount
     May 1 22:55:33 MeJoServerTv emhttpd: shcmd (96): sync
     May 1 22:55:33 MeJoServerTv emhttpd: shcmd (97): mkdir /mnt/user0
     May 1 22:55:33 MeJoServerTv emhttpd: shcmd (98): /usr/local/sbin/shfs /mnt/user0 -disks 30 -o noatime,allow_other |& logger
     May 1 22:55:33 MeJoServerTv shfs: use_ino: 1
     May 1 22:55:33 MeJoServerTv shfs: direct_io: 0
     May 1 22:55:33 MeJoServerTv emhttpd: shcmd (99): mkdir /mnt/user
     May 1 22:55:33 MeJoServerTv emhttpd: shcmd (100): /usr/local/sbin/shfs /mnt/user -disks 31 2048000000 -o noatime,allow_other -o remember=root |& logger
     May 1 22:55:33 MeJoServerTv shfs: fuse: invalid parameter in option `remember=root'
     May 1 22:55:33 MeJoServerTv shfs: fuse_main exit: 3
     May 1 22:55:33 MeJoServerTv emhttpd: shcmd (100): exit status: 3

     /mnt/cache mounts, then /mnt/user0 (user shares excluding cache), but /mnt/user doesn't. I've never noticed that 'remember=root' invalid parameter before. Looking at syslogs from other working systems, the 'remember' parameter looks like it is supposed to be a number, so I don't know what that is about. But I suspect your cache is corrupt in some way even though it is mounting. Then it can't mount libvirt or the docker image because user shares don't mount. And I am pretty sure you have a corrupted docker image even though it doesn't try to mount it, because you have it set to 200G. Only 20G should be more than enough, and if it fills it is because you have one or more applications writing to a path that isn't mapped. Making the docker image larger will not fix a filling docker image; it will just make it take longer to fill.
See if you can go to Settings - Docker and disable dockers and go to Settings - VM Manager and disable VMs. Then reboot and post new diagnostics. Since your cache is XFS maybe we can see if repairing its filesystem gets us anywhere. Don't even try to enable dockers and VMs again until everything else is fixed, then we can set that docker image to 20G and try to figure out what you did wrong.
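If it comes to repairing the cache filesystem from the console, a minimal sketch of what that might look like (the device name /dev/sdb1 is taken from the log above and may differ on your system; run this only with the array stopped or in maintenance mode, so the cache is not mounted):

```shell
# Dry run first: -n reports problems without modifying anything
xfs_repair -n /dev/sdb1

# If problems are reported, run the actual repair
xfs_repair /dev/sdb1

# If xfs_repair refuses because of a dirty log and suggests -L,
# zeroing the log can lose recently written metadata, so treat
# it as a last resort:
# xfs_repair -L /dev/sdb1
```

The same check can be done from the webUI filesystem check option, which runs xfs_repair for you.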
  5. Don't cache the initial data load as already mentioned, but you should install cache before enabling dockers and VMs so those will get created on cache where they belong. If you allow them to be created on the array there will be a little work to do getting them moved to cache later.
  6. You can't do that because the disk would already have data on it, and reassigning it as parity would overwrite that data. Yes. Each disk in Unraid is an independent filesystem, and each file exists completely on a single disk. Folders can span disks; this is what user shares are about. Parity is basically the same concept wherever it is used in computers and communications, whether RAID5, another RAID level, or Unraid. Parity is just an extra bit that allows a missing bit to be calculated from all the other bits. Parity PLUS ALL remaining disks allows the data for a failed disk to be calculated. That data can even be accessed from that calculation, for reading or writing, even before the disk is rebuilt.
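The parity arithmetic above can be illustrated with plain XOR. This is just a toy sketch with single bits, not Unraid's actual implementation:

```shell
# One bit from each of three data disks
d1=1; d2=0; d3=1

# Parity is the XOR of all the data bits
p=$(( d1 ^ d2 ^ d3 ))

# If disk2 fails, its bit is recalculated from parity
# plus ALL the remaining data disks
d2_rebuilt=$(( p ^ d1 ^ d3 ))

echo "$d2_rebuilt"   # matches the original d2
```

In practice this happens for every bit position across the whole disk, which is why a rebuild has to read every other disk in the array.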
  7. To change that, you have to go to Settings - Docker, disable dockers, then it will let you delete the docker image. Then you can change the size and enabling dockers again will recreate it. After that you can reinstall all your dockers exactly as they were using the Previous Apps feature on the Apps page. Or you can leave it if you don't mind wasting that space. As long as usage isn't growing (as shown on the Dashboard) it isn't really a problem. The reason I even check on that in the diagnostics is because people often have an application misconfigured so it is writing to a path that isn't mapped, and so it writes data into the docker image and fills it up then their dockers stop working. They see their docker image is getting full so they think they can fix the problem by increasing its size, but all that does is make it take longer to fill and they need to fix the paths setup in the application instead. Wait until the rebuild is over to make any changes.
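If you ever want to see what is actually consuming space inside the docker image before resizing it, one possible approach from the console (assuming dockers are enabled; `docker ps --size` is standard Docker CLI, and on Unraid the image is loop-mounted at /var/lib/docker while enabled):

```shell
# Show each container's writable-layer size; a container writing to an
# unmapped path will show up here with a large SIZE value
docker ps --size

# Show which subdirectories under the loop-mounted image use the space
du -h -d 1 /var/lib/docker
```

A large writable layer for one container usually points at the application whose paths need fixing.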
  8. That looks good. Pretty common usage. We discussed that very thing a couple of weeks ago here: https://forums.unraid.net/topic/91279-solved-aneely-docker-image-filling-up/?do=findComment&comment=846968
  9. Stop the array. Unassign the disabled disk. Start the array with the disabled disk unassigned. Stop the array. Reassign the disk. Start the array to begin rebuild.
  10. Those diagnostics look OK. I noticed you have 40G allocated for docker image. 20G should be more than enough. Have you had problems filling it? We can discuss that later. Rebuilding requires reading all the disks simultaneously to calculate the data for the rebuild, then writing that data to the rebuilding disk. It is OK to keep using the disks for other things and won't cause any data loss, but if other things are competing for access to the disks, then the rebuild will be slower, and those other things will be slower also. Rebuilding 6TB will take many hours though, similar to a parity check. I sometimes do a little with my system during parity checks but avoid large reads and writes. The safest approach is to rebuild to a new disk. This allows you to keep the original disk as it was in case there are problems with the rebuild. But many people rebuild to the same disk (I have), and since there don't seem to be any problems with any of the disks or the filesystems, it is probably OK to just rebuild to the same disk if you don't have a spare. Do you want to rebuild to the same disk or a new disk?
  11. This is expected since the error count resets on reboot. I assume you mean they are shown in the array and not in Unassigned now. Mounted is a different concept: it means the filesystem on the disk was able to mount and the files are accessible. Parity cannot actually mount since it has no filesystem. If disk1 is actually mounted that is a good sign, since it means there is no corruption on the emulated disk. The physical disk1 isn't actually used since it is disabled, but the disk is emulated by calculating its data from parity plus all remaining disks. And it will be disabled until it is rebuilt. When a write to a disk fails, Unraid updates parity anyway so that failed write and any subsequent writes can be recovered, but now that disk is out-of-sync and has to be rebuilt from parity. Syslog is in RAM, like the rest of the OS, so no log from before reboot is there anymore. Good. It probably makes more sense to just do the rebuild instead of the extended test. The rebuild is needed anyway and will be a good test; the extended test would take a long time and the rebuild would still have to be done afterward. Also, until the rebuild is done you have no protection, since you already have a disabled disk. Let me take a look at those diagnostics and we can discuss how to rebuild.
  12. Post new diagnostics. Diagnostics includes syslog since reboot, SMART for all attached disks, and a lot of other information that gives a more complete understanding of the total situation. We always prefer the complete diagnostics zip file instead of anything else unless we ask for it.
  13. I am guessing this has something to do with the Docker Folders plugin.
  14. If it still hasn't shut down, I guess the power button is the only choice.
  15. Definitely before, since syslog went back about a week. But then a parity check started at 3am this morning (scheduled, I assume) and disk1 and parity both started giving read errors. Was your previous parity check completely clean?
  16. OK, so you were able to get diagnostics without rebooting. That is good and does provide more information than we would have gotten after reboot. You should still
  17. And that SMART report you attached has nothing in it which probably means the disk has disconnected. Since you need to check connections anyway, and you may not even be able to get diagnostics in your current state, just go ahead and shutdown, check all connections, power and SATA, both ends including any power splitters. Then boot up and get those diagnostics for us.
  18. Sorry, I thought you had attached the complete diagnostics instead of only the SMART for a single drive. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post.
  19. On mobile now so can't look at Diagnostics. Your data is almost certainly OK unless you do something wrong. Don't do anything without further advice.
  20. This ^. Parity has no filesystem, so no format. Just assign it and let parity build. Parity doesn't have any data. Parity is basically the same concept wherever it is used in computers and communications: just an extra bit that allows a missing bit to be calculated from all the other bits. Rebuilding a disk requires parity PLUS ALL the other disks.
  21. 2+3: Seems to me you should already be able to do these. Not sure how Unraid would be involved for those.
      4+7: VMs that can run all that software could be hosted on Unraid.
      6: Do you mean the movies would be stored on an "Android Box" and streamed FROM that storage? Or did you really mean the movies would be stored on Unraid and streamed TO the Android box?
      1,5,8,9: Very typical uses for Unraid, as are torrents and other downloading applications as dockers hosted on Unraid.
  22. Copy the folders and files from the emulated disk somewhere off the array. Note that the top level folder(s) of that emulated disk are part of user shares with the same name as the folder. Your copy must preserve the same folder structure as was on that emulated disk so you will be able to put them back in the correct user shares. Sounds like you are done with that part.

      You do a New Config without that disk so you can rebuild parity and get the rest of the array protected again. As soon as you start the array with a New Config, and before parity is rebuilt, the disk is no longer emulated. Any folders and files that were on the emulated disk are no longer in the array and so no longer in any user share. But the other assigned disks still contain their own folders and files, and their top level folders are still part of user shares.

      When copying the data back, you can use Krusader or rsync or whatever you want. If you take that copy you made, and you copy those top level folders back to assigned disk(s), then those top level folders will be merged if those same folders already exist on the assigned disk(s), or else they will be created on the assigned disk(s) if they don't already exist. That is how you would do it if you wanted to put all of it on the assigned disk that isn't being used. That disk would then have those parts of those user shares that were on the emulated disk.

      If you take that copy you made, and you copy the contents of each of those top level folders to a user share by that same name, then those contents will be put in that user share and Unraid will decide which assigned disk to actually write them to according to the settings for that user share.
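For the copy-back step, a minimal rsync sketch, assuming the backup of the emulated disk lives at a hypothetical /mnt/disks/backup and disk2 is the assigned disk you want it on:

```shell
# The trailing slash on the source copies its CONTENTS, i.e. the
# top-level share folders, so they merge with (or are created as)
# the matching top-level folders on the destination disk
rsync -av /mnt/disks/backup/ /mnt/disk2/

# To let Unraid place the files instead, copy each top-level folder's
# contents into the user share of the same name, e.g.:
# rsync -av /mnt/disks/backup/Movies/ /mnt/user/Movies/
```

The -a flag preserves permissions and timestamps, and -v shows what is being copied.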
  23. Not sure I understand, or maybe you don't. Any changes you make directly to an OS file will not survive reboot, because the OS is in RAM. When you reboot, the archives on flash, which are exactly as you originally downloaded them, are unpacked fresh into RAM. So any changes you made directly to any OS files before you reboot will not persist after reboot, since they are not part of those original archives which are unpacked into RAM. If you need some changes like this, you have to reapply them each boot.
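On Unraid, the usual place to reapply such changes each boot is the go file on flash, /boot/config/go, which runs once at every boot after the OS is unpacked into RAM. A sketch (the sysctl line is just a hypothetical example of a tweak that would otherwise be lost at reboot):

```shell
#!/bin/bash
# /boot/config/go -- runs at every boot, after the OS is in RAM

# Start the Management Utility (present in the default go file)
/usr/local/sbin/emhttp &

# Hypothetical example: reapply a kernel setting each boot,
# since direct changes to OS files do not survive a reboot
sysctl -w vm.swappiness=10
```

Anything on flash persists across reboots, so this is where boot-time customizations belong.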
  24. When copying from an Unassigned Device, it is up to you and how it serves your purposes whether you copy to disks in the parity array, or you copy to user shares. The user shares are simply the aggregate of all top level folders on cache and array.

      If you create a user share in the webUI, Unraid creates a top level folder named for the share on cache or array as needed in accordance with the settings for that user share. If you create a top level folder on cache or array disk, that top level folder is automatically a user share with the same name as the folder, and it will have default settings until you change them.

      If you copy from an Unassigned Device to a user share, the data will wind up on cache or array disks as determined by the settings for that user share. If you copy from an Unassigned Device to a path within a top level folder on an assigned disk (cache or array), then the data will wind up on that disk, but it will still be part of the user share named for that top level folder.

      Note that you must not mix user shares and assigned disks when moving / copying files. This is because Linux doesn't understand that the user shares and the disks are the same files, so it can try to overwrite what it is trying to read if the source and destination paths work out that way. But since Unassigned Devices are not part of the user shares, you can move/copy them with user shares or with assigned disks since there is no chance the paths will collide.
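A few examples of that path rule (share and file names are made up; the commands are shown commented out since they would act on real data):

```shell
# UNSAFE: mixes a disk path and a user share path that can refer to the
# same files, so Linux may try to overwrite what it is still reading
# mv /mnt/disk1/Movies/film.mkv /mnt/user/Movies/

# SAFE: disk path to a different disk path
# mv /mnt/disk1/Movies/film.mkv /mnt/disk2/Movies/

# SAFE: Unassigned Device to a user share -- UD paths are never part of
# user shares, so source and destination cannot collide
# cp -a /mnt/disks/usb_backup/Movies/. /mnt/user/Movies/
```

The rule of thumb: /mnt/user and /mnt/diskN must never appear on opposite sides of the same move or copy.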