
Squid

Community Developer
  • Posts

    28,770
  • Joined

  • Last visited

  • Days Won

    314

Everything posted by Squid

  1. It's not /mnt/disk that's the problem, it's these:

     Dec 11 19:06:10 MrERocker root: Fix Common Problems: Error: Invalid folder .dbus contained within /mnt
     Dec 11 19:06:10 MrERocker root: Fix Common Problems: Error: Invalid folder log contained within /mnt
     Dec 11 19:06:10 MrERocker root: Fix Common Problems: Error: Invalid folder xdg contained within /mnt

     Post the docker run command for Krusader: https://forums.unraid.net/topic/57181-docker-faq/#comment-564345
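Those errors mean stray top-level folders exist directly under /mnt, where only Unraid's own mount points belong. A rough sketch of that kind of check — the list of "expected" names below is an assumption for illustration, not the plugin's actual code:

```shell
#!/bin/sh
# Hedged sketch: flag any directory sitting directly under the mount root
# that isn't one of Unraid's expected entries. The "expected" case list
# below is an assumption, not Fix Common Problems' real list.
check_mnt() {
  root="$1"
  find "$root" -mindepth 1 -maxdepth 1 -type d | while read -r d; do
    name=$(basename "$d")
    case "$name" in
      disk[0-9]*|disks|remotes|rootshare|user|user0|cache) ;;  # expected
      *) echo "Error: Invalid folder $name contained within $root" ;;
    esac
  done
}
```

A stray `.dbus` or `log` — typically created by a container that maps `/mnt` read-write and writes where it shouldn't — would be flagged, while `disk1` or `user` would pass.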
  2. How do you know that it's this plugin causing it?
  3. Sight unseen (without diagnostics): mover will not move anything that is currently open. That could mean a file being watched, seeding, etc.
  4. You sure it's not Plex indexing the files or another app / vm running?
  5. You should post your diagnostics, so that people in the know will have all the relevant info.
  6. Squid

    UNRAID & Plex

    This. But you actually mean Plex as a docker app
  7. Did you just change the name for Tower from Tower2? The syslog in the diagnostics for "Tower" shows that its name was Tower2. Having 2 servers with the same name would cause nothing but problems, and you may need to clear out the DNS cache or something on Windows in order for it to reflect the change you made. (I've never been a fan of unRaid having a static default name for the servers, for this very reason.)
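Clearing the Windows client's name caches for the above is done from an elevated command prompt on the Windows machine, not on the server (these are the standard Windows commands; whether the NetBIOS cache matters depends on how the share was browsed):

```
rem flush the Windows DNS resolver cache
ipconfig /flushdns
rem purge and reload the NetBIOS remote name cache
nbtstat -R
```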
  8. Bug 2 is also sorta Windows-related, as the flash drive is FAT32 (originally an MS filesystem), and a file cannot be renamed directly from TV to tv, since FAT32 treats them as the same filename.
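The usual workaround on a case-insensitive filesystem is to rename through an intermediate name so the old and new names never collide. A minimal sketch (the `.rename_tmp` suffix is an arbitrary assumption; a robust script would verify the intermediate name is free first):

```shell
#!/bin/sh
# Two-step rename for filesystems (FAT32, etc.) where TV and tv collide.
# The .rename_tmp suffix is an assumption, not a convention; check that
# it is unused before relying on it.
rename_case() {
  old="$1"; new="$2"
  tmp="${old}.rename_tmp"
  mv -- "$old" "$tmp" && mv -- "$tmp" "$new"
}
```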
  9. It's in the diagnostics, although not explicitly in "English"
  10. One thing to be aware of: if I understand correctly, the version of the ZFS plugin has to match the kernel version used by the OS. RC7, RC8, and RC9 all use different kernel versions, and the plugin has to match that.
  11. I think you mean Generic Error. Post the diagnostics.
  12. Reload the page. The Hot plug page isn't dynamic, and only picks up the running VMs when you enter the VM page
  13. The "mover is running" message has never updated itself to reflect when mover is finished. A reload of the page is required to see its current state.
  14. OK, after some testing, I'll allow dashes in searches, and anything non-alphanumeric will get switched to a space.
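In shell terms, the rule described above amounts to something like this (a sketch of the behavior, not the plugin's actual code):

```shell
#!/bin/sh
# Keep letters, digits, and dashes; replace every other character with a
# space. Sketch of the search-term rule described above.
sanitize() {
  printf '%s' "$1" | sed 's/[^A-Za-z0-9-]/ /g'
}
```

So a search for `plex_media-server` would be treated as `plex media-server`: the underscore becomes a space, the dash survives.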
  15. There was a reason for that. Of course since that code was done a few years ago, I can't remember what the reason actually was
  16. The exact same error (ie: Network Failure)? The link in @Primal51's post is 100% correct. If clicking the link from Chrome doesn't bring up the raw .plg, then you've got other issues with your router / modem / ISP / country that are outside of what I am in control of. Thus far, though, of the 3 separate users saying "same error" (which I read as "Network Failure"), no one has responded to requests for diagnostics, or said whether their Gateway / DNS addresses were typo'd.
  17. And if it's not on a ZFS pool, then you're attempting to put it onto a UD drive that isn't mounted:

      Dec 9 07:28:16 OBI-WAN unassigned.devices: Disk with serial 'INTEL_SSDSC2KB960G8_PHYF911201UY960CGN', mountpoint 'INTEL1TB' is not set to auto mount and will not be mounted...

      (This line is 2 seconds before the mount of the image, and since the mountpoint doesn't exist, the docker image winds up going to RAM.)
  18. Dec 9 07:28:18 OBI-WAN emhttpd: shcmd (117): /usr/local/sbin/mount_image '/mnt/INTEL1TB/docker/docker.img' /var/lib/docker 50

      Is this mounted on a ZFS pool? If so, you'd have to recreate the issue onto the cache pool.
  19. Assuming that after you reassign everything correctly the disk still comes up as unmountable, then instead of hitting Format, try running the file system checks (https://wiki.unraid.net/Check_Disk_Filesystems) against it. *IF* it even runs, there will definitely be at least some data loss, but maybe it won't be much.
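For an XFS array disk, the check on that wiki page boils down to a read-only pass first (the device name here is an example — match it to the disk number — and the array must be started in Maintenance mode so the check runs against the md device and parity stays valid):

```shell
xfs_repair -n /dev/md1   # -n = no modify: report problems only
# only after reviewing the report, rerun without -n to actually repair
```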
  20. Sure. That won't make a difference either way.
  21. (or, take a shot and assign the disks as per the original diagnostics and see what happens - another New Config will be required)
  22. These are your disk assignments from the original diagnostics you posted:

      Dec 8 11:08:21 WenteServer kernel: md: import disk0: (sde) MB2000GCWDA_Z1X2VG38 size: 1953514552
      Dec 8 11:08:21 WenteServer kernel: mdcmd (2): import 1 sdf 64 1953514552 0 MB2000GCWDA_S1X0F98D
      Dec 8 11:08:21 WenteServer kernel: md: import disk1: (sdf) MB2000GCWDA_S1X0F98D size: 1953514552
      Dec 8 11:08:21 WenteServer kernel: mdcmd (3): import 2 sdd 64 1953514552 0 ST2000VM003-1CT164_W1H1ABSL
      Dec 8 11:08:21 WenteServer kernel: md: import disk2: (sdd) ST2000VM003-1CT164_W1H1ABSL size: 1953514552

      These are your disk assignments in the last set of diagnostics:

      Dec 8 13:17:15 WenteServer kernel: mdcmd (1): import 0 sdd 64 1953514552 0 ST2000VM003-1CT164_W1H1ABSL
      Dec 8 13:17:15 WenteServer kernel: md: import disk0: (sdd) ST2000VM003-1CT164_W1H1ABSL size: 1953514552
      Dec 8 13:17:15 WenteServer kernel: mdcmd (2): import 1 sde 64 1953514552 0 MB2000GCWDA_Z1X2VG38
      Dec 8 13:17:15 WenteServer kernel: md: import disk1: (sde) MB2000GCWDA_Z1X2VG38 size: 1953514552
      Dec 8 13:17:15 WenteServer kernel: mdcmd (3): import 2 sdf 64 1953514552 0 MB2000GCWDA_S1X0F98D
      Dec 8 13:17:15 WenteServer kernel: md: import disk2: (sdf) MB2000GCWDA_S1X0F98D size: 1953514552

      The important thing to take away here is that you assigned the drives incorrectly, and swapped your original parity drive around with the data drives. This resulted in the "new" disk1 being unmountable, and the parity rebuild (which your screenshot shows at the bottom as being in progress) trashing the data on what should have been disk1 (now the parity). If the data that is now trashed was irreplaceable, then maybe something like UFS Explorer would be able to recover it (if you immediately stop any parity rebuild from happening, to prevent further corruption). If it's replaceable, then just go ahead, format the drive, and make a big note somewhere about the drive assignments so that this doesn't happen again.
  23. https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?tab=comments#comment-480421 Back up whatever you need though, as I wouldn't be surprised if all data in the pool gets lost in the process.