
itimpi

Moderators
  • Posts: 20,778
  • Joined
  • Last visited
  • Days Won: 57

Everything posted by itimpi

  1. Sorry - a copy/paste error - it should now be correct.
  2. The process is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  3. I agree - anything other than 0 on a new drive is a cause for concern. Possibly the drive got damaged in transit. That should not cause reallocated sectors, as they happen completely internally to the drive. I guess it might be possible for a power cabling issue to cause this, but I would be very surprised if that was actually the cause.
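     If you want to double-check the raw figure yourself, something like the following from a console session should show it (the /dev/sdX device name is just a placeholder for the drive in question):
       smartctl -a /dev/sdX | grep -i reallocated    # look at the RAW_VALUE column
     The same SMART attributes can also be viewed by clicking the drive on the Main tab.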
  4. I was interested in seeing the docker settings so I could see what volume mappings you had, in case any of them looked relevant. I could not spot anything obvious in the diagnostics, except that if the free space on the cache dropped below 90 GB the Minimum Free Space setting on several of the shares would kick in and files would start bypassing the cache. However, at the moment the free space on the ‘cache’ pool is more than this, so it should not currently be relevant. Are there any specific shares that you think are exhibiting this behaviour?
     BTW: it is also a good idea to set a Minimum Free Space value on the cache pool itself, as btrfs systems tend to start misbehaving if they get too full.
     You also have several .cfg files in the config/shares folder for shares that look like they no longer exist. It is a good idea to delete any like that so the diagnostics are a little clearer about which shares you are actually using.
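     As a rough sketch of how to spot the stale ones (paths assume the standard Unraid locations), compare the .cfg files on the flash drive with the shares that actually exist:
       ls /boot/config/shares/    # the share settings files stored on the flash drive
       ls /mnt/user/              # the shares that currently exist
     Any .cfg file whose name does not match an existing share can safely be deleted.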
  5. I do not use NextCloud myself, so I wonder if it is a setting within NextCloud itself that is stopping you rather than something at the Unraid level. It might be worth asking NextCloud support if this could be the case.
  6. I cannot spot anything obviously wrong. Just out of interest, how are you currently trying to delete the files when not using Krusader, in case that is relevant in some way? For instance, is it across the network or using something like Dynamix File Manager?
  7. You keep getting sequences like:
       May 15 06:05:37 Unraid kernel: CIFS: VFS: \\192.168.1.194\STUDIO Close unmatched open for MID:828429582
       May 15 06:06:08 Unraid kernel: CIFS: VFS: \\192.168.1.194\STUDIO Close unmatched open for MID:828549924
       May 15 06:06:39 Unraid kernel: CIFS: VFS: \\192.168.1.194\STUDIO Close unmatched open for MID:828670230
       May 15 06:07:10 Unraid kernel: CIFS: VFS: \\192.168.1.194\STUDIO Close unmatched open for MID:828790526
       May 15 06:07:42 Unraid kernel: CIFS: VFS: \\192.168.1.194\STUDIO Close unmatched open for MID:828910946
       May 15 06:08:13 Unraid kernel: CIFS: VFS: \\192.168.1.194\STUDIO Close interrupted close
       May 15 06:08:13 Unraid kernel: CIFS: VFS: \\192.168.1.194\STUDIO Close cancelled mid failed rc:-9
     which is not normal. What is the UD device mounted as 'STUDIO' being used for?
  8. Those look OK. What about the permissions on the 000 folder?
  9. I am a little confused by your description, as the cache IS a pool - do you really mean it is writing directly to the array? It might help if you uploaded your system's diagnostics zip file to your next post in this thread so we can see something about how your system is set up. It might also help if you posted a screenshot of your FileZilla configuration.
  10. You are correct in that in the main Unraid array all the available space on non-parity drives is available. As to how parity works regardless of the number of drives in the array, you might find that this part of the online documentation clarifies things. It also might make it clearer why, as the number of drives goes up, you may be more inclined to have a second parity drive to protect against 2 simultaneous drive failures.
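      As a simple worked example of the single-parity case: parity is the XOR of the corresponding bits on every data drive, so three data drives holding bits 1, 0 and 1 give a parity bit of 1 XOR 0 XOR 1 = 0. If the drive holding the 0 fails, its bit is recovered by XOR-ing parity with the surviving drives: 0 XOR 1 XOR 1 = 0. Since that calculation can only recover one unknown at a time, a single parity drive covers one failed drive, which is where the second parity drive (calculated differently) comes in.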
  11. It would probably help if you posted the output of a ls -l path-to-problem-folder command so that we can see exactly what permissions are on a problem folder. Probably also worth doing it one level up as well, in case it is something on the parent folder.
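      For example (the paths shown are just placeholders for wherever the problem folder actually lives):
        ls -l /mnt/user/someshare/problem-folder    # the contents of the problem folder
        ls -l /mnt/user/someshare                   # one level up, which also shows the problem folder's own permissions
      The permission bits, owner and group are shown at the start of each line.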
  12. You said you did it on the first drive. I was not sure you realised you would need to do it on the other drives you treat the same way. Note that if you copy to the User Share ‘data’ instead of directly to a drive then Unraid will create the required folder automatically on other drives that are part of the share.
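      For illustration (drive and share names here are just examples):
        cp -r /path/to/source/. /mnt/user/data/     # user share: Unraid picks the drive per the allocation settings and creates the folder where needed
        cp -r /path/to/source/. /mnt/disk2/data/    # disk share: goes only to disk2, and the 'data' folder must already exist there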
  13. If you want to move data to another drive in the same share then repeat that process there.
  14. You could create the target folder on the disks first and then copy the data into them, thus avoiding the move.
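      A minimal sketch of that approach (the disk numbers and share name are just examples):
        mkdir -p /mnt/disk2/data /mnt/disk4/data    # create the share's top-level folder on the target disks
      and then copy the files into those folders; because the folder name matches the share name, the contents show up under the 'data' user share automatically.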
  15. The rename would work if you had done it immediately. You will not be able to rename at the moment as the ‘data’ folder already exists.
  16. Glad to hear that appears to have resolved the current issue. It is always possible for an issue like this to come back; SATA connections in particular seem to be prone to slowly working loose over time (I guess due to vibration or perhaps thermal cycles).
  17. It looks like Dynamix is using a copy/delete strategy rather than a move. This is always the safest strategy, although I thought the latest version of the Dynamix File Manager recognised when it could use a move instead - but maybe I am wrong. You are right that if using a move it would be virtually instantaneous.
  18. You do not indicate where on the disk you copied the data. If you copied the files to a folder on the disk that had the name of the share you wanted them to be part of, then that should have happened automatically. If not, you can simply move the data into a folder with the name of the share and it will show up as part of that share.
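      As an example of that last step (paths are illustrative), a move within the same disk is almost instant because nothing is physically copied:
        mkdir -p /mnt/disk2/data && mv /mnt/disk2/Movies /mnt/disk2/data/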
  19. If the folder exists on multiple disks then it is automatically a share of that name covering those disks. You have not shown anything that shows the folder for the ‘data’ share currently exists on any drive except disk3. If you were using Krusader to move the files then you also need to check that permissions are correct (if not, use Tools->New Permissions), as files with the wrong permissions may not be visible over the network.
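      A quick way to check which drives currently hold a top-level 'data' folder (and therefore take part in the share) is something like:
        ls -ld /mnt/disk*/data
      Any disk missing from that output is not contributing to the share.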
  20. You can set drive specific warning levels by clicking on the drive on the Main tab and providing values for that drive that are different to the global setting.
  21. You might want to consider mapping /transcode to somewhere on an SSD pool if you want to avoid this issue in the future. Slightly lower performance, but safer.
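      As a sketch of what that might look like in the container's template (the host path is just an example location on a pool named 'cache'; adjust to suit your own pool and folder layout):
        Container Path: /transcode
        Host Path:      /mnt/cache/transcode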
  22. The only thing that should slow down the parity check is if a disk is having problems. Typically these will be connection issues relating to the SATA or power cabling that are causing retries to happen, but posting diagnostics taken when this problem has occurred will allow us to give a more informed view.
  23. You are getting errors like the following in your syslog:
       May 15 19:02:48 Tower kernel: ata7.00: status: { DRDY }
       May 15 19:02:48 Tower kernel: ata7: hard resetting link
       May 15 19:02:49 Tower kernel: ata5: SATA link down (SStatus 0 SControl 300)
       May 15 19:02:54 Tower kernel: ata7: found unknown device (class 0)
       May 15 19:02:54 Tower kernel: ata5: SATA link down (SStatus 0 SControl 300)
       May 15 19:02:58 Tower kernel: ata7: softreset failed (1st FIS failed)
       May 15 19:02:58 Tower kernel: ata7: hard resetting link
       May 15 19:02:59 Tower kernel: ata7: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
       May 15 19:02:59 Tower kernel: ata7.00: configured for UDMA/133
       May 15 19:02:59 Tower kernel: ata7: EH complete
     This suggests connection issues, which are typically cabling (SATA or power) related.
  24. You should also enable the syslog server (probably with the mirror to flash option enabled) so you can get a log that survives the reboot. The standard diagnostics only include logs that are captured after the reboot.
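      For reference, this is enabled under Settings -> Syslog Server; with the 'Mirror syslog to flash' option turned on, the mirrored copy is written to the logs folder on the flash drive (normally /boot/logs/syslog), so it can still be read after an unclean shutdown or reboot.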