Everything posted by taalas

  1. Thanks for your reply. So as I thought: manually run the mover, check that all files are off the cache, then change?
  2. Hi! This might be kind of a trivial question: since one of my cache drives is failing, I would like to change the primary storage (cache) for some of my shares (I am on 6.12.1). Will this "just work" if I change it in the share settings? Or should I make sure to manually run the mover, check that all files are on the array, and then switch over?
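     A minimal command-line sketch of that check (the share name "media" and the pool name "cache" below are made up, and the assumption is a recent Unraid release where the mover script accepts a start argument):

       mover start                                  # kick off the mover manually
       find /mnt/cache/media -mindepth 1 | head     # should print nothing once the share is fully on the array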
  3. Hi itimpi, that's indeed what was happening. There was an empty folder called "fastappdata" on my cache pool named "cache". What confused me in the end was the wording of the warning, as it states: "Share fastappdata set to use pool fastcache, but files / folders exist on the cache pool". With "fastappdata" and "fastcache" in bold but the second part just saying "cache pool", I was thinking of the "fastcache" cache pool, not the other pool named "cache"...
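     A hedged one-liner for this kind of confusion: list every pool or disk that holds a top-level folder matching the share name (the share name below comes from the post; adjust the pool paths to your system):

       for d in /mnt/cache /mnt/fastcache /mnt/disk*; do
         [ -d "$d/fastappdata" ] && echo "$d/fastappdata ($(find "$d/fastappdata" -type f | wc -l) files)"
       done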
  4. I crossposted to the Support Thread to be in the right spot for this question. Sorry again.
  5. Hi, crossposting from General Support since I didn't realize this thread was here. My Fix Common Problems plugin recently started reporting a warning for a new cache pool I created and moved files to. The corresponding share is set to "Only: fastcache". Is there any way/log that tells me which files the plugin is reporting? How do I proceed? "Share fastappdata set to use pool fastcache, but files / folders exist on the cache pool" Thanks!
  6. Thanks for your reply! I did, all the files are shown to be on "fastcache", hence my confusion. I might have gotten confused about the support thread, because Google led me to a thread for this plugin that was closed and I didn't see the new one. Should I then rather move this question there?
  7. Hi, I tried to post this to the Plugin Support forum but do not seem to be allowed to post there. My Fix Common Problems plugin recently started reporting a warning for a new cache pool I created and moved files to. The corresponding share is set to "Only: fastcache". Is there any way/log that tells me which files the plugin is reporting? How do I proceed? "Share fastappdata set to use pool fastcache, but files / folders exist on the cache pool" Thanks!
  8. Hi @JorgeB, thanks for your support, very much appreciated! Just to make sure I am doing the right thing: I ordered an ASM1064 controller, which has arrived by now. To rebuild the disabled disk I would now do the following:
     - take a screenshot of my disk assignments
     - power down the server
     - replace the controller
     - start the server
     - check that all disk assignments are still correct (this should theoretically work even though I am using a different controller, correct?)
     - stop the array
     - unassign the disabled disk
     - start the array
     - stop the array
     - reassign the disabled disk to its slot
     - start the array
     - rebuild the disk from parity
     Is this the correct order of steps? I am still a bit worried about the parity errors the check warned about (before aborting at about 27%). Do I assume these were just read errors caused by the controller malfunction? The parity check was non-correcting. Thanks!
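     Before swapping the controller, a snapshot of which serial number sits behind which device name makes the post-swap check easier; a hedged sketch (Unraid keys array slots to drive identification, so this is only a sanity check):

       lsblk -o NAME,SERIAL,SIZE,MODEL > /boot/disk-assignments-before.txt   # saved to the flash drive
       cat /boot/disk-assignments-before.txt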
  9. This is the part of the syslog where the problems started (during the scheduled parity check). When I checked this morning, the parity check was paused, disk1 showed 1550 errors, disk5 showed 169 errors, and the parity check showed 169 errors (same as disk5). Since /dev/sdj (my SSD cache) logged the same errors over and over again, I cut the log after a couple of those. Since restarting the array (and copying some files to another server for safety), no more errors were logged on any drive. This was different before the restart, where errors on disk5 kept increasing, albeit slowly. Some questions to clarify:
     - Since the array is working fine for now, should I stop using the system, or can I read from the array and keep the docker applications running? I stopped the mover process to reduce the workload though.
     - disk1 is almost empty, and all data currently on the device does not matter. Does that help in any way?
     - Should I wait with rebuilding etc. until I have the replacement controller?
     - What should I expect in terms of data loss (since there were parity errors logged)?
     Sorry, just trying to get a clearer picture of the situation. syslog-127.0.0.1.log.zip
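     A hedged sketch for pulling the drive-related messages out of a syslog like this (device and disk names follow the post):

       grep -iE 'ata[0-9]+|I/O error|md:.*error' /var/log/syslog | less   # controller/drive errors only
       grep 'sdj' /var/log/syslog | tail -n 50                            # watch just the SSD cache device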
  10. Hi @JorgeB, sorry, I have attached a diagnostic log. I have restarted the server though, since I couldn't access the SMART logs. I can provide the relevant part of my syslog from last night though if needed. It seems to me that the server might have had problems with the SATA controller the drives are attached to (since, in addition to the 2 drives I mentioned earlier, my SSD cache did not work properly). After restarting the server, everything seems fine (except for disk1 still being disabled). Please let me know whether I can provide any more information. Thanks! spire-diagnostics-20220101-1146.zip
  11. Hi! Last night my monthly parity check started; this morning I found the server with errors:
     - the parity check is currently paused, since 2 disks have read errors, disk1 and disk5
     - the array is up (I cannot read SMART values though: "A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.")
     - disk1 is disabled
     I am not sure what the best way to proceed is right now. Any help would be greatly appreciated.
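     The quoted message is smartctl's own hint, and it can usually be followed literally (replace /dev/sdX with the affected device):

       smartctl -a -T permissive /dev/sdX    # retry the SMART read, tolerating the failed mandatory command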
  12. Thanks @Squid, that sounds like a great idea. Am I right in assuming that the name of the appdata share is not actually important (as in hardcoded somewhere)? I was afraid of breaking some functionality by using a different share as appdata, but hoped that it was just a share like any other. Also, could you elaborate on whether there is any upside/downside to using /mnt/fastcache/fastappdata instead of /mnt/user/fastappdata? I have read many different opinions about this...
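     For context, the difference only shows up in how container paths are mapped; an illustrative example (container and image names here are made up):

       # direct pool path: bypasses the user-share FUSE layer (shfs), but is tied to that specific pool
       docker run -d --name myapp -v /mnt/fastcache/fastappdata/myapp:/config hypothetical/image
       # user-share path: goes through shfs, keeps working no matter which pool holds the files
       docker run -d --name myapp -v /mnt/user/fastappdata/myapp:/config hypothetical/image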
  13. Hi, I am currently in the process of adding a second cache pool to my array, using an SSD drive. I have already moved my docker image to the new drive (it was at /mnt/cache/docker.img, is now at /mnt/fastcache/docker.img) while having the docker service disabled, and this worked flawlessly. I would now like to move my appdata share from the old cache to the new cache. Current situation:
     - the appdata share is set to Only:Cache
     - all docker containers point to directories at /mnt/cache/appdata (this used to be a suggestion instead of /mnt/user/appdata)
     - I would like to move appdata to /mnt/fastcache/appdata
     I wondered whether it would make sense to have two appdata shares (appdata and fastappdata) but decided against it, since most plugins etc. (e.g. appdata backup) seem to be built for one appdata share. There seem to be different options for moving (while disabling the docker service or stopping the containers):
     - use the mover (can I just switch from Only:Cache to Only:Fastcache or Prefer:Fastcache to move the files, or do I have to put them on the array in an intermediate step?)
     - move the files manually on the command line (mv, rsync, ...?)
     Any advice on what would be the best/safest way to proceed? Thanks!
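     A hedged sketch of the manual route, with the docker service stopped first (paths follow the post; verify before deleting anything from the old pool):

       rsync -avh --progress /mnt/cache/appdata/ /mnt/fastcache/appdata/
       rsync -avn --checksum /mnt/cache/appdata/ /mnt/fastcache/appdata/   # dry-run re-check; should list no files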
  14. I am currently searching for a LAN file sync solution, and since this docker is completely self-contained I would be very happy to give it a try. I am planning to have my data folder on a (dedicated) unRAID share. Will this prevent the disk(s) for this share from spinning down... or will nextcloudpi only access the data dir if users are requesting files etc.?
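     Whether the disks actually spin down can be checked directly; a small sketch (replace /dev/sdX with a drive belonging to the share):

       hdparm -C /dev/sdX    # reports "drive state is: standby" or "active/idle"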
  15. Thanks for your quick reply johnnie.black. I edited my initial post to better reflect what happened (I had started the server once in between, before installing the new PSU). So, I should not worry about the parity errors (even if there seem to be a lot of them), run a correcting parity check and then delete the duplicate files afterwards? Do I delete them directly from /mnt/disk[n]/... or will this be a problem with parity? I do have drives that show read errors in their SMART logs (afaik the number did not change during this though), but no uncorrectable ones. Thanks for your advice!
  16. Hi, I recently had my PSU fail in my unRAID server. I installed a new PSU and the array came up fine. I did a (non-correcting) parity check and it shows 3547 parity errors. The turn of events:
     - The server went down due to the PSU failing.
     - I tried to find out whether it really was the PSU by using a different one without putting it into the case; the server booted fine, and I started a parity check which I aborted (due to wanting to put the PSU into the case).
     - I installed the new PSU.
     - I started the server and started a new (non-correcting) parity check (which resulted in 3547 parity errors, but no read errors on the drives).
     The Main page shows no read errors on the drives (which happened the last time I had parity problems). I had recently invoked an unBALANCE moving process to empty a drive. This was running while the PSU died. Some of the files that were moved during this process now show as being on 2 disks at once (disk 3 and disk 6 in my case; they were to be moved from 3 to 6). Any advice on how to proceed in my case? Should I proceed with deleting the duplicates on drive 6 (or drive 3)? What are the next steps? Thanks!
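     A hedged sketch for listing the duplicates, i.e. files present on both disks at once (disk numbers follow the post):

       comm -12 \
         <(cd /mnt/disk3 && find . -type f | sort) \
         <(cd /mnt/disk6 && find . -type f | sort)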
  17. This might be a dumb question, but just to be safe: the plugin states that the mover should be disabled while doing an unBALANCE operation and that no process should write to the disk(s) (only the sources, or the targets as well?). I can disable the scheduled mover using mover tuning, and I am *quite* certain that no other processes write to the drive(s). What can theoretically go wrong if there are processes writing to the drive(s)?
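     Two generic ways to double-check that nothing has files open on a disk before (or during) a move (the disk number is illustrative):

       lsof +D /mnt/disk3 2>/dev/null    # processes with open files anywhere under that mount
       fuser -vm /mnt/disk3              # alternative view of processes using the filesystem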
  18. Hi! Extended tests show the same behaviour for me. They also seem to be stuck scanning /mnt/user/appdata. Some observations/questions:
     - The extended tests used to finish in about 4 minutes. Not sure why it behaves differently now.
     - The scan seems to be stuck on appdata, which I excluded in the settings (I am also unsure why this is needed, since the explanatory text states that the docker appdata folder is supposedly excluded by default?)
     - I see very high usage on the shfs process while the script is running.
     Has anything changed with the script (since my data hasn't changed that much)?
  19. I am indeed talking about plugins (sorry, I hadn't checked whether the category also contained core utilities). I guess I just don't completely get why there is a User Utilities section at all, or is that originally reserved for plugin settings? I guess my misconception is that most plugins that register themselves to this category seem to be tools rather than settings...that might be just me though...
  20. Hi! One thing about UNRAID has confused me for a couple of years now and what's the harm in just asking: How come the User Utilities are located in Settings rather than Tools? I find myself clicking the wrong tab even after many years of using UNRAID. Is there any reason for this (in my case not one of the items in that category is a setting)? Is anyone else experiencing the same or am I the only one? Thanks!
  21. Thanks everyone for clearing this up. If I understand correctly, when adding a drive to the array unRAID will:
     - not clear the drive if it is a parity drive or replacing an existing drive
     - clear the drive if it is a new (additional) data drive
     You mean that it is only necessary in this case because unRAID would clear the drive anyway, making it possible to skip that step by doing it beforehand? Since I am replacing a (faulty) drive, unRAID would not clear the drive when I add the new one. The only upside a pre-clear would bring is stress-testing the new drive to avoid early failure?
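     If stress-testing is the only remaining benefit, a generic destructive surface test does the same job as a preclear write pass; a hedged sketch (this wipes the drive, so triple-check the device name before running it):

       badblocks -wsv /dev/sdX    # DESTRUCTIVE: writes four test patterns across the whole disk and verifies each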
  22. Yeah, I thought about that. If I only preclear using one pass (I have never done more than one), is there really any advantage for me in preclearing? Or should I just replace my faulty drive and let unRAID rebuild it without a prior preclear?
  23. Is there any way to downgrade this (or other plugins) if it was installed via Community Applications?
  24. Hm, I think I will have to give it a try. Really need these disks to be cleared, since one of them has to replace a disk that is currently disabled. Hope for the best...