Everything posted by trurl

  1. Anything using the array disks during a parity check, parity sync, or data disk rebuild (basically all the large parity operations) will slow down the parity operation and also slow down whatever is using the disks, since they are competing for the same disks and have to take turns seeking to different sectors. It won't break anything though. I sometimes use the array briefly during parity checks, but not for large transfers.
  2. I don't know about that software, but Windows does not natively support any of the filesystems used by Unraid. The software often recommended is UFS Explorer. You might also try repairing the disk's filesystem as an Unassigned Device in Unraid (a command-line sketch is below). https://wiki.unraid.net/Check_Disk_Filesystems#Drives_formatted_with_XFS Be sure to note this part in the Additional Comments of that wiki:
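      As a rough command-line sketch of the repair itself, assuming the XFS disk shows up as /dev/sdX with its data on partition 1 (substitute the actual device; the Unassigned Devices GUI can do the same thing):

      # dry run first: report problems without changing anything
      xfs_repair -n /dev/sdX1
      # the partition must not be mounted while repairing
      umount /dev/sdX1 2>/dev/null
      # actual repair; only add -L if it refuses to run because of a dirty log
      xfs_repair /dev/sdX1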
  3. If you were trying to demonstrate how many slots you allow for cache, that screenshot doesn't do it.
  4. At this point my recommendation is to disable the docker service (Settings - Docker) and quit using the server until you successfully rebuild parity (your array is currently unprotected). Then, before enabling docker again, we can work on this. To rebuild to the same disk (whether parity or data):
      Stop array
      Unassign disabled disk
      Start array with disabled disk unassigned
      Stop array
      Reassign disabled disk
      Start array to begin rebuild
  5. Seems like a Flash problem. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  6. Don't hesitate to ask for help. What really makes me sick is when someone asks for help too late, after they have already screwed it up on their own.
  7. I really wish you had asked for help very early on. Since disk6 was disabled, it was being emulated from the parity calculation using all the other disks. But some of those disks were having connection issues, so it is possible disk6 itself wasn't really corrupt, and instead the emulated disk6 was corrupted by the bad connections on the other disks. In any case, instead of formatting the disk, the correct thing to do would have been to repair its filesystem. Here is another recent thread that you may find educational:
  8. Did you ever format the disks in the array? You have to let Unraid format a disk before it will have a filesystem.
  9. Also, have you tried reducing the number of disks allowed in the pool to only one? (stop the array to see what I mean)
  10. What is the filesystem of cache on that other server?
  11. You apparently misread or misunderstood. This statement came after several others in that post and in this thread discussing Unraid User Shares. User Shares are what you share on the network. I don't recommend sharing the actual disks that Unraid uses for those User Shares.
  12. A laptop isn't really a good platform for Unraid anyway. What do you plan to do with Unraid?
  13. Accessing Unraid shares over the network isn't really any different than accessing shares on any other computer on the network. Have you ever worked with networked computers before? What exactly about the normal way you access Unraid shares over the network do you want an alternative to?
  14. Like the rest of the Unraid OS, the syslog is in RAM, so it starts over when you reboot. But there was enough after the reboot for me to see some things and answer some questions.

      ata1 is the connection to parity, ata2 is the connection to disk2:

      Aug 29 13:38:24 Tower kernel: ata1.00: ATA-9: WDC WD100EMAZ-00WJTA0, JEGL12UN, 83.H0A83, max UDMA/133
      ...
      Aug 29 13:38:24 Tower kernel: ata2.00: ATA-9: WDC WD80EMAZ-00WJTA0, 7HKGB64F, 83.H0A83, max UDMA/133
      ...
      Aug 29 13:38:33 Tower kernel: md: import disk0: (sdb) WDC_WD100EMAZ-00WJTA0_JEGL12UN size: 9766436812
      ...
      Aug 29 13:38:33 Tower kernel: md: import disk2: (sdc) WDC_WD80EMAZ-00WJTA0_7HKGB64F size: 7814026532 (emulated)

      disk6 was unmountable:

      Aug 29 13:38:43 Tower emhttpd: shcmd (60): mkdir -p /mnt/disk6
      Aug 29 13:38:43 Tower emhttpd: shcmd (61): mount -t xfs -o noatime,nodiratime /dev/md6 /mnt/disk6
      Aug 29 13:38:43 Tower kernel: XFS (md6): Mounting V5 Filesystem
      Aug 29 13:38:43 Tower kernel: XFS (md6): Corruption warning: Metadata has LSN (1:7162) ahead of current LSN (1:843). Please unmount and run xfs_repair (>= v4.3) to resolve.
      ...
      Aug 29 13:38:44 Tower emhttpd: shcmd (62): umount /mnt/disk6
      Aug 29 13:38:44 Tower root: umount: /mnt/disk6: not mounted.
      Aug 29 13:38:44 Tower emhttpd: shcmd (62): exit status: 32
      Aug 29 13:38:44 Tower emhttpd: shcmd (63): rmdir /mnt/disk6

      rebuild of disk6 started, but parity and disk2 were disconnected (SMART for disk2 also OK):

      Aug 29 13:39:18 Tower kernel: ata1.00: exception Emask 0x10 SAct 0x100000 SErr 0x4890000 action 0xe frozen
      ...
      Aug 29 13:39:18 Tower kernel: ata1: hard resetting link
      ...
      Aug 29 13:39:18 Tower kernel: ata2: hard resetting link

      rebuild aborted and you formatted disk6:

      Aug 29 13:39:45 Tower kernel: md: recovery thread: exit status: -4
      Aug 29 13:39:46 Tower emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog
      Aug 29 13:39:46 Tower emhttpd: shcmd (107): /sbin/wipefs -a /dev/md6

      You may recall I said:

          When you format a disk in the parity array, Unraid treats this exactly like it does any other write operation. It updates parity. After formatting a disk in the parity array, parity agrees that the disk has an empty filesystem. So rebuilding a disk that has been formatted will result in an empty filesystem.

      Then the answer to one of your earlier questions is NO. Do you have backups?

      I can tell you how to rebuild parity, but the connection issues you have been having will probably make this a problem.
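      For reference, the kind of filtering used to pull those lines out of the syslog is roughly this (a minimal sketch; the second path is just an example of where an extracted diagnostics ZIP might live):

      # show ATA link messages, md driver messages, and XFS mount/corruption messages
      grep -E 'ata[0-9]+(\.[0-9]+)?:|md: |XFS \(md' /var/log/syslog
      # or against the syslog copy inside an extracted diagnostics ZIP
      grep -E 'ata[0-9]+(\.[0-9]+)?:|md: |XFS \(md' ~/diagnostics/logs/syslog.txt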
  15. Looks like disk6 is newly formatted. Is this the disk you rebuilt? Did it ever tell you it was unmountable? And parity is disabled as mentioned. SMART for disk6 and parity looks OK. Not related to your problems, but your system share has files on the array, and it is set to be moved to the array. You want this share to be all on cache and set to stay on cache (a quick way to see which disks hold its files is sketched below). Since dockers use this share and always have open files in it, your docker performance will be impacted by the slower parity writes, and your dockers will keep array disks spinning. Similarly for VMs and the domains share, but you don't currently have VMs enabled.
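      A minimal sketch for checking where the system share actually lives, assuming the default /mnt/disk* and /mnt/cache mount points (if the first command prints nothing, the share is entirely on cache):

      # any output here means part of the system share is on array disks
      ls -d /mnt/disk*/system 2>/dev/null
      # and this shows what is on the cache copy
      ls /mnt/cache/system 2>/dev/null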
  16. Writes directly to the parity array, assuming HDDs, cannot be this fast.
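      If you want to see what the disks can really sustain, here is a minimal sketch using dd; the paths and sizes are just examples, and conv=fdatasync forces the data to actually reach the disk instead of sitting in RAM:

      # write 2 GiB to an array disk: this goes through parity, so expect it to be slow
      dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=2048 conv=fdatasync
      # same test against the cache drive for comparison
      dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=2048 conv=fdatasync
      # clean up afterwards
      rm /mnt/disk1/ddtest.bin /mnt/cache/ddtest.bin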
  17. You have a single cache drive, as shown in the Balance Status section of that screenshot. I seem to remember seeing some thread like that; do you have a link?
  18. I don't have any personal experience with those. The main thing is that they have separate ports for each disk. Some people try to use enclosures with only one port for multiple disks, sometimes even a USB port (USB is not reliable enough for a permanent connection). I have some other things to do right now. I will study your diagnostics and get back to you.
  19. What usually happens is you have 2 shares with the same name except for upper/lower case, possibly one of them created accidentally when specifying a path on the server in docker mappings or something. Any top level folder on cache or an array disk is automatically a user share named for that folder. No reason to expect these shares to have duplicate files, though you could certainly create duplicates in them. The main point is that SMB will only show one of these shares on the network because it isn't case sensitive. So it can look like something is missing because you aren't seeing the files you expect to see, since they are in the share that SMB isn't showing. Something similar happens when you share disks on the network and accidentally create a user share named for a disk (mistaken path specification again): SMB will only show the share or only the disk, so once again it looks like files are missing. I always recommend not sharing disks on the network. There are other reasons besides the one I just gave, including one that can cause data loss. A quick way to spot share names that differ only by case is sketched below.
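      A minimal sketch, assuming the standard /mnt/disk* and /mnt/cache mount points:

      # list every top level folder (i.e. user share) across array disks and cache,
      # then print any names that collide when case is ignored
      for d in /mnt/disk* /mnt/cache; do ls -1 "$d" 2>/dev/null; done | sort -u | sort -f | uniq -D -i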
  20. What you have is already a single drive btrfs cache pool (see the sketch below for how to confirm that from the command line). You could reformat it as XFS, but that might not improve performance. How are you measuring performance, anyway?
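      A minimal sketch, assuming the pool is mounted at /mnt/cache:

      # shows the devices that belong to the btrfs pool (a single device here)
      btrfs filesystem show /mnt/cache
      # shows the data/metadata profiles and space usage of the pool
      btrfs filesystem df /mnt/cache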
  21. And your diagnostics confirm that Unraid has no ethernet connection and no IP address.
  22. This suggests you are connecting to an ethernet port reserved for this purpose. Do you have another port for Unraid to use?
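      From the Unraid console, a minimal sketch for checking which network ports exist and whether any of them has a link and an IP address (interface names like eth0 are examples):

      # one line per interface: name, state (UP/DOWN), MAC
      ip -br link
      # one line per interface with its assigned addresses, if any
      ip -br addr
      # link speed and "Link detected" for a specific port, if ethtool is available
      ethtool eth0 | grep -E 'Speed|Link detected'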
  23. Do you have a link to this enclosure?
  24. Many people seem to have a very vague idea what format means. Format means "write an empty filesystem (of some specific type) to this disk". That is what it has always meant in every operating system you have ever used. Not that it matters (see below), but what filesystem did you format it with? (A quick way to check is sketched at the end of this post.)

      I am guessing you mean you "replaced" a disk with the new disk. For clarity, I usually like to reserve the word "add" to mean adding a disk to a new slot in the parity array. In either case, formatting a disk before putting it in the array is completely pointless. If replacing a disk for rebuilding, the disk will be completely overwritten by the rebuild, so formatting the replacement disk before doing the replacement accomplishes nothing. If adding a disk to a new slot in the parity array, Unraid will clear it (writing all zeros) so parity is maintained, so formatting before adding accomplishes nothing either.

      Newbie😉 I'm afraid I'm still unclear about the state of your system and its data, and what you want to do now. If all you want to do now is rebuild parity, it is very easy to rebuild it to the same disk, assuming that disk is OK and you don't have any other problems. Possibly your main problems are just due to bad connections. But, what about this?

      I think if nothing else, Diagnostics will help me understand the current situation better than anything you have said so far, and can serve as a basis for further communication between us.
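      A minimal sketch for checking what filesystem is on a disk, assuming it shows up as /dev/sdX with the data on partition 1 (substitute the real device):

      # prints the filesystem type (TYPE=...) of the partition, if any
      blkid /dev/sdX1
      # or show filesystem types for all block devices at once
      lsblk -f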