Everything posted by trurl

  1. Penny wise but ... Just to make sure you understand how this all works: parity by itself cannot recover anything. If a disk gets disabled or goes missing, Unraid must be able to reliably read every bit of all remaining disks in order to reliably rebuild the disabled or missing disk (a toy sketch of that parity arithmetic appears after this list). So it is important that all disks in the array be reliable. And the more disks you have, the more chance you have of one of them needing to be rebuilt, and the more chance you have of one of the remaining but necessary disks also giving trouble, such as the situation you were in, trying uselessly to rebuild disk5 when many other disks were not working as needed. And it doesn't take an actual disk failure for a disk to become disabled. Your disk5 was disabled even though it hasn't really failed. Unraid disables a disk any time a write to it fails for any reason, since the failed write makes it out of sync with parity and it needs to be rebuilt. Those are usually communication problems, but the counts will not reset even if you fix the problem. You can acknowledge these on the Dashboard and they will not give another warning unless the count increases. Other than those, it just looks like a few Runtime Bad Blocks on each of those disks giving warnings. After you get your controller sorted, maybe it will be OK as long as you are careful and diligent. Careful - always double check all connections, power and SATA, including power splitters, any time you are mucking about in the case. Diligent - set up Notifications to alert you immediately by email or other agent when a problem is detected, so you can fix a single problem before it becomes multiple problems and data loss. You have already done a good thing by simply coming to this forum and asking for help. We really hate to see someone try random things before asking here, just making a bad situation worse. And, of course, parity is not a backup. You must always have another copy of anything important and irreplaceable.
  2. You have enabled Docker and VM services, so they created their image vdisks on disk1. Disable those services, delete the images, and recreate them so they will go on cache where they belong. It is a common problem when people enable these before installing cache. And mover can't really help by itself because it can't move open files, so the services have to be disabled. Simpler to just go ahead and recreate them at the same time.
  3. The plan is for this to become built-in instead of a plugin.
  4. Your system share has files on the array. Possibly you enabled Dockers or VMs before installing cache, so the images got put on the array, where their performance will take a hit due to parity and they will keep array disks spinning. Do you have any dockers or VMs yet?
  5. If you have or intend to have Dockers or VMs you don't want all shares on the array either. Post those diagnostics so we can make more detailed recommendations on your setup.
  6. I'll probably split your posts and their replies into their own thread so we don't get things mixed up here. In other words, so Misjel's thread isn't hijacked. Often, even when someone thinks they have the exact same problem, it isn't, or at least it has different causes.
  7. A dump at the end of that one, but you say it is running fine. Have you done a memtest?
  8. I guess it's possible but he doesn't seem to be aware there is a filesystem by that name, so how did his array disks wind up as XFS if they also came from V5?
  9. I was referring to your Plex log, which you attached as a PDF. Unraid does not produce any PDF files, so I don't know what you did to get it in that form. The other attachments were fine and as expected. Memtest is on the boot menu. You have to select it using the keyboard during the bootup process instead of booting Unraid.
  10. Unrelated, but your system share has files on the array. Possibly you enabled dockers and VMs before installing cache, so they went to the array, where they will take a performance hit due to parity and will keep array disks spinning. We can deal with that later, after your more serious problem is addressed.
  11. While we wait to see if you can get additional diagnostics, I will work with what you have here. One thing in your diagnostics is puzzling but probably unrelated to any problem: why is your cache, and only your cache, ReiserFS? The syslog you attached separately is earlier than the one in the diagnostics, and it ends with a dump, so possibly a crash. The syslog in the diagnostics is from after rebooting from that. In the future, please don't attach PDFs when a simple text file will do. The simple text file is easier to work with. I don't notice anything particularly meaningful in your Plex log. There are several things I would like to change about the way you have your user shares configured, but we can save that for later. You have way more disks than I want to wade through SMART reports for. We can look at those later also. Nothing obvious in the syslogs about disk problems. Go ahead and try what Frank1940 suggested about getting more diagnostics. Possibly it won't work, but maybe. If you can't shut down cleanly then you will have to do it the hard way. Then when you reboot, select Memtest from the boot menu and let that run for a while.
  12. Lots of commentary on SSDs in the array, so I won't discuss that, and I don't have any experience with that anyway. As for your original idea, there are a couple of things that come up when trying to actually implement it: User Shares and Mover. Mover doesn't copy, it only moves, so some other solution would need to be implemented to get your cache backed up to the array (a rough sketch of one approach appears after this list). Cache is part of User Shares, so having identical files and folders on both cache and array would mean duplicated files and folders in the User Shares. I'm not sure which would win out when accessing the User Shares.
  13. Likely you set some shares to cache-prefer instead of cache-yes. Prefer means mover will move those files from the array to the cache so they stay on cache; Yes means mover moves them from cache to the array. Go to Tools - Diagnostics and attach the complete Diagnostics zip file to your NEXT post.
  14. You will have to use a different flash drive to install Unraid.
  15. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post.
  16. I don't see anything in those volume mappings that would create those folders. Where exactly did you type /mnt/disk?
  17. That link he posted tells you exactly how to get the docker run command. Here is the relevant portion from that linked post: Click on your Krusader docker and select Edit. Make any change then change it back and click Apply. This will reinstall the docker and should give a result similar to that shown, the docker run command. Copy and paste that into your next post.
  18. This sounds like a disaster waiting to happen. Is there still a filesystem on the original that Unraid might mount?
  19. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post. You must have some docker mapped to that path so it keeps getting recreated.
  20. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post.
  21. Dec 11 01:46:32 Ceto sshd[5691]: Failed password for root from 103.79.143.244 port 61984 ssh2 [Vietnam]
      ...
      Dec 11 02:49:39 Ceto in.telnetd[25834]: connect from 80.55.12.110 (80.55.12.110) [Poland]
      ...
      Dec 11 03:05:05 Ceto in.telnetd[5835]: connect from 151.229.216.185 (151.229.216.185) [UK]
      Dec 11 03:16:10 Ceto in.telnetd[14658]: connect from 103.69.245.49 (103.69.245.49) [India]
      Dec 11 03:27:38 Ceto in.telnetd[24221]: connect from 170.238.137.112 (170.238.137.112) [Brazil]
      ...
      Dec 11 03:47:16 Ceto vsftpd[7670]: connect from 184.105.247.195 (184.105.247.195) [US]
      ...
      Dec 11 05:21:52 Ceto in.telnetd[19931]: connect from 118.171.144.186 (118.171.144.186) [Taiwan]
      Dec 11 05:23:21 Ceto in.telnetd[21251]: connect from 120.26.4.30 (120.26.4.30) [China]
      ...
      Dec 11 06:00:30 Ceto in.telnetd[19219]: connect from 73.8.29.225 (73.8.29.225) [US]
      ...
      Dec 11 06:32:00 Ceto in.telnetd[12717]: connect from 187.162.93.201 (187.162.93.201) [Mexico]
      ...
      Dec 11 07:54:37 Ceto in.telnetd[14986]: connect from 180.251.223.147 (180.251.223.147) [Indonesia]
      ...
      Dec 11 08:54:44 Ceto in.telnetd[32571]: connect from 190.186.147.109 (190.186.147.109) [Bolivia]
      Dec 11 10:04:43 Ceto in.telnetd[24991]: connect from 36.106.141.144 (36.106.141.144) [China]
      ...
      Dec 11 11:07:21 Ceto in.telnetd[10999]: connect from 58.8.90.13 (58.8.90.13) [Thailand]
      ...
      Dec 11 11:20:50 Ceto in.telnetd[22028]: connect from 114.34.125.35 (114.34.125.35) [Taiwan]
      ...
      Dec 11 12:09:55 Ceto in.telnetd[29444]: connect from 175.100.101.142 (175.100.101.142) [Cambodia]
      ...
      Dec 11 14:28:47 Ceto in.telnetd[12243]: connect from 177.157.146.107 (177.157.146.107) [Brazil]
      ...
      (A sketch for pulling lines like these out of a syslog follows after this list.)
  22. I merged your threads, looking at diagnostics now, and just as I expected, you are getting login attempts from all over. Take your server off the internet!!! NOW!!!
  23. How were you connecting to it from WAN anyway? You don't give any details about VPN or anything. You shouldn't put your server directly on the internet. Your diagnostics link doesn't work for me and it's preferred if you don't link to external sites for that anyway. Attach it to your NEXT post.
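To make the point in post 1 concrete: Unraid's single parity is a bitwise XOR across the data disks, so rebuilding one missing disk means XORing the parity with every surviving data disk, and a bad read anywhere in that set corrupts the result. The following is a toy Python sketch of that arithmetic, not Unraid's actual code, and the disk contents are made up.

    from functools import reduce

    def xor_blocks(blocks):
        """Bytewise XOR of equal-length byte strings."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    # Toy "array": three data disks of equal size plus a single parity disk.
    data_disks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\x0a\x0b\x0c"]
    parity = xor_blocks(data_disks)

    # Simulate disk 1 going missing. Rebuilding it needs the parity disk AND a
    # good read of every other data disk -- parity alone recovers nothing, and a
    # bad read on any survivor would silently corrupt the rebuilt data.
    missing = 1
    survivors = [d for i, d in enumerate(data_disks) if i != missing]
    rebuilt = xor_blocks(survivors + [parity])

    assert rebuilt == data_disks[missing]
    print("rebuilt disk", missing, "->", rebuilt.hex())

This is why the advice in post 1 stresses keeping every array disk healthy: the more disks participate in the XOR, the more disks must read back perfectly for a rebuild to succeed.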
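For the "some other solution" mentioned in post 12, one minimal sketch of a cache-to-array copy job is below. The share name and mount points (/mnt/cache, /mnt/disk1) are assumptions based on typical Unraid setups, not taken from the original thread, and copying into a different top-level folder is just one way to avoid duplicating the same paths in an existing User Share.

    """One-way backup sketch: copy new or newer files from a cache share to an
    array disk. Paths below are assumptions; adjust for a real setup."""
    import os
    import shutil

    SRC = "/mnt/cache/appdata"          # assumed cache share to protect
    DST = "/mnt/disk1/backups/appdata"  # different top-level folder, so the copy
                                        # appears as its own user share rather than
                                        # duplicating the original one

    for root, dirs, files in os.walk(SRC):
        rel = os.path.relpath(root, SRC)
        target_dir = os.path.join(DST, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            src_file = os.path.join(root, name)
            dst_file = os.path.join(target_dir, name)
            # Copy only missing or newer files; never delete anything on the array side.
            if (not os.path.exists(dst_file)
                    or os.path.getmtime(src_file) > os.path.getmtime(dst_file)):
                shutil.copy2(src_file, dst_file)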
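As a follow-up to the log excerpt in post 21, here is a rough Python sketch of how lines like those (failed SSH logins, inbound telnet/ftp connections) can be pulled out of a syslog and tallied by source address. The log path and regex patterns are assumptions based only on the lines shown above.

    import re

    # Patterns matching the kinds of lines shown in post 21.
    PATTERNS = [
        re.compile(r"sshd\[\d+\]: Failed password for .+ from (\S+)"),
        re.compile(r"in\.telnetd\[\d+\]: connect from (\S+)"),
        re.compile(r"vsftpd\[\d+\]: connect from (\S+)"),
    ]

    hits = {}
    with open("/var/log/syslog", errors="replace") as log:  # assumed log location
        for line in log:
            for pattern in PATTERNS:
                match = pattern.search(line)
                if match:
                    ip = match.group(1)
                    hits[ip] = hits.get(ip, 0) + 1

    # Connections from addresses all over the world, as in post 21, mean the
    # server is reachable from the internet and should be taken off it.
    for ip, count in sorted(hits.items(), key=lambda item: -item[1]):
        print(f"{count:5d}  {ip}")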