trurl

Everything posted by trurl

  1. Make sure you have Notifications set up to alert you immediately by email or other agent if a problem is detected.
  2. Seems like a problem with Flash. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your next post.
  3. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your next post. I may split your part of this discussion into its own thread.
  4. This is the first time in this discussion with you that the word FORMAT has been used. Format has a very specific meaning, and misunderstanding that meaning has caused some people to lose data.

     Unraid must format any disk it will use in the parity array or cache pool. You must let Unraid do the format after adding the disk. And if you add a disk to an array that already has valid parity, it must be clear before it can be formatted. A clear disk and a formatted disk are NOT the same thing. A clear disk is all zeros. A formatted disk has an empty filesystem written to it (this is what FORMAT has always meant in every operating system you have ever used). That empty filesystem is "metadata", data about the data. The metadata of a newly formatted disk represents an empty top-level folder, ready to accept new files and subfolders.

     A mistake people often make is telling Unraid to format a disk that is already in the array and already has data on it. They think they can recover that data from parity. But format is a write operation (it writes an empty filesystem), and Unraid treats that write exactly as it does any other, by updating parity. So after a format of a disk in the array, parity agrees that the disk has an empty filesystem.

     All this may be more than you wanted to know, but maybe it will make you reconsider before you make a mistake in the future.
  5. Unraid clears a disk when you add it to a new data slot in an array that already has valid parity. This is the only scenario where Unraid requires a clear disk. It does this so parity will remain valid: a clear disk is all zeros, and those zeros have no effect on parity. Older versions (a few years ago now) would take the array offline to clear a disk, so preclear was created to clear a disk before adding it, allowing the array to stay online. Current versions of Unraid clear the disk without taking the array offline, so that original purpose of preclear is no longer needed. Many people still use preclear to test new disks. You can also use other methods to test a new disk, such as the diagnostics provided by disk manufacturers as free downloads.
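The parity arithmetic behind the two replies above can be sketched in a few lines. This is a hypothetical byte-level illustration of single (XOR) parity, with made-up data bytes, not Unraid's actual on-disk implementation:

```python
# Hypothetical byte-level illustration of single (XOR) parity --
# a sketch of the idea, not Unraid's actual implementation.
from functools import reduce

def xor_parity(disks):
    """Byte-wise XOR across equally sized data disks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*disks))

disk1 = bytes([0xAA, 0x00, 0xFF, 0x12])  # made-up data bytes
disk2 = bytes([0x55, 0x10, 0x0F, 0x34])
p = xor_parity([disk1, disk2])

# A clear (all-zero) disk XORs to no change, so adding it to the
# array leaves the existing parity valid:
clear = bytes(4)
assert xor_parity([disk1, disk2, clear]) == p

# A format is a WRITE of empty-filesystem metadata, and parity is
# updated along with it -- so afterward, parity agrees the disk
# holds an empty filesystem, not the old data:
metadata = bytes([0xEB, 0x3C, 0x90, 0x00])  # made-up metadata bytes
assert xor_parity([disk1, disk2, metadata]) != p
```

This is why a cleared disk can be added to a parity-protected array without a parity rebuild, while formatting a disk in the array makes its old contents unrecoverable from parity.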
  6. These did lose compatibility, but some patches were made at some point. It might be a little disorganized, and a bit of trouble for some trying to figure out how to get it all working again.
  7. Have you successfully used those scripts with the latest Unraid?
  8. Any idea what these are all about?

     Aug 1 00:23:22 LASTIC rpc.mountd[16044]: refused mount request from 192.168.1.52 for /mnt/rawphotos (/): not exported
     Aug 1 00:23:22 LASTIC rpc.mountd[16044]: refused mount request from 192.168.1.52 for /mnt (/): not exported
     Aug 1 00:23:22 LASTIC rpc.mountd[16044]: refused mount request from 192.168.1.52 for /mnt/rawphotos (/): not exported
     Aug 1 00:23:33 LASTIC rpc.mountd[16044]: can't get hostname of 192.168.1.52
     Aug 1 00:24:18 LASTIC emhttpd: req (22): shareNameOrig=JPEG+SPORT+PROCESSED&shareName=JPEG_SPORT_PROCESSED&shareComment=Processed+Pictures+SPORT&shareAllocator=highwater&shareFloor=25000&shareSplitLevel=&shareInclude=&shareExclude=&shareUseCache=yes&cmdEditShare=Apply&csrf_token=****************
     Aug 1 00:24:18 LASTIC emhttpd: shcmd (3150): mv '/mnt/user/JPEG SPORT PROCESSED' '/mnt/user/JPEG_SPORT_PROCESSED'
     Aug 1 00:24:18 LASTIC emhttpd: shcmd (3151): mv '/boot/config/shares/JPEG SPORT PROCESSED.cfg' '/boot/config/shares/JPEG_SPORT_PROCESSED.cfg'
     ...
     Aug 1 00:24:54 LASTIC rpc.mountd[16044]: refused mount request from 192.168.1.52 for /mnt/JPG_SPORT_PROCESSED (/): not exported
     Aug 1 00:24:54 LASTIC rpc.mountd[16044]: refused mount request from 192.168.1.52 for /mnt (/): not exported
     Aug 1 00:24:54 LASTIC rpc.mountd[16044]: refused mount request from 192.168.1.52 for /mnt/JPG_SPORT_PROCESSED (/): not exported
     Aug 1 00:25:05 LASTIC rpc.mountd[16044]: can't get hostname of 192.168.1.52
     Aug 1 00:25:11 LASTIC rpcbind[22113]: connect from 192.168.1.52 to getport/addr(mountd)
     Aug 1 00:25:11 LASTIC rpcbind[22114]: connect from 192.168.1.52 to getport/addr(nfs)
     Aug 1 00:25:11 LASTIC rpcbind[22115]: connect from 192.168.1.52 to getport/addr(nlockmgr)
     Aug 1 00:25:11 LASTIC rpc.mountd[16044]: refused mount request from 192.168.1.52 for /JPG_SPORT_PROCESSED (/): not exported
     ### [PREVIOUS LINE REPEATED 1 TIMES] ###
     Aug 1 00:25:22 LASTIC rpc.mountd[16044]: can't get hostname of 192.168.1.52
     Aug 1 00:25:53 LASTIC rpcbind[22221]: connect from 192.168.1.52 to getport/addr(mountd)
     Aug 1 00:25:53 LASTIC rpcbind[22222]: connect from 192.168.1.52 to getport/addr(nfs)
     Aug 1 00:25:53 LASTIC rpcbind[22223]: connect from 192.168.1.52 to getport/addr(nlockmgr)
     Aug 1 00:25:53 LASTIC rpc.mountd[16044]: refused mount request from 192.168.1.52 for /JPEG_SPORT_PROCESSED (/): not exported
     ### [PREVIOUS LINE REPEATED 1 TIMES] ###
     Aug 1 00:26:03 LASTIC rpc.mountd[16044]: can't get hostname of 192.168.1.52
     ... and so on

     Something seems to be trying to access things that don't actually exist on Unraid.
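When a syslog is full of repetitive rpc.mountd refusals like those above, it can help to tally them by client and path to see what is actually being requested. Here is a minimal, unofficial sketch (plain Python, nothing Unraid-specific); the regex assumes rpc.mountd's usual "refused mount request from CLIENT for PATH" message format, and the embedded log text is abbreviated from the snippet above:

```python
# Sketch: tally refused NFS mount requests from a saved syslog excerpt.
# Assumes rpc.mountd's usual message wording; abbreviated sample log.
import re
from collections import Counter

LOG = """\
Aug 1 00:23:22 LASTIC rpc.mountd[16044]: refused mount request from 192.168.1.52 for /mnt/rawphotos (/): not exported
Aug 1 00:24:54 LASTIC rpc.mountd[16044]: refused mount request from 192.168.1.52 for /mnt/JPG_SPORT_PROCESSED (/): not exported
Aug 1 00:25:53 LASTIC rpc.mountd[16044]: refused mount request from 192.168.1.52 for /JPEG_SPORT_PROCESSED (/): not exported
"""

PATTERN = re.compile(r"refused mount request from (\S+) for (\S+) ")

# Count (client, path) pairs so repeated refusals collapse to one line.
refused = Counter(PATTERN.findall(LOG))
for (client, path), count in sorted(refused.items()):
    print(f"{client} asked for {path}: refused {count} time(s)")
```

On a real server you would read `/var/log/syslog` (or the diagnostics zip) instead of an embedded string; the point is just to see at a glance which paths the client keeps requesting that are not exported.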
  9. Post your Unraid diagnostics when you encounter the problem.
  10. Many of us are using Unraid with Windows and we don't encounter this problem. Have you tried any of the Frank1940 suggestions? Are you mapping drives in Windows? I haven't seen any need to do this in many, many years. Most applications can browse the network and work with files on other computers just fine without assigning a drive letter to network shares.
  11. Nobody else in this thread has mentioned USB enclosures, so perhaps saying you have the same issue is jumping to conclusions. In any case, USB, though allowed, is not recommended for drives in the array or cache pool, because some implementations are unreliable.
  12. Maybe different from the problems others are having in this thread. Some of your disks are pretty full, and they are still using ReiserFS. That filesystem is known to perform poorly when disks get full. Also, your appdata and system shares have some files on disk1, which is one of those nearly full ReiserFS disks. Those shares should be completely on cache to avoid performance issues like you are seeing.
  13. Of course. At the bottom of the Update Container page, click the '+'.
  14. A new builtin feature is planned for this.
  15. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your next post.
  16. If you read the rest of the thread you have posted to, you may realize that all these people here are running Windows in a Virtual Machine. Our forum is about the Unraid OS, which in addition to being a NAS, allows Virtual Machines. It's possible that someone will make some suggestion regarding your laptop problem, but various issues with Windows on a laptop are not really the purpose of our forum. Take a look around and maybe you will get a better idea of what Unraid is about. Many people find it very useful.
  17. What does your post have to do with our forum?
  18. So the XFS filesystem on the emulated disk2 likely has no contents. @driekus77 Was there supposed to be any data on that disk? Possibly all written data had gone to disk1 up to that point due to allocation method. Do you know if disk2 should have had any data on it? Just in case you don't know, formatting a disk in the array will not allow you to recover any of its data from parity. As already mentioned, your array is not protected currently. Probably there is nothing to be gained by trying to rebuild the missing disk at this point, and instead you just need to rebuild parity without it. I won't elaborate on exactly how to do that until we hear more from you.
  19. What "state" exactly are you referring to?
  20. Not so fast. You are definitely not finished with this situation. Probably we (or at least, I) shouldn't have even cooperated with their desire to format a disk that was in the array without digging in a little deeper to find out why they even wanted to do that. Possibly they have already lost data. Looks like Unraid cleared the physical disk before adding it to the array as disk4, as it should and as indicated by the very large number of writes to the disk. In the original screenshot, disk2 was unmountable, but in this latest screenshot, the emulated disk appears to have an XFS filesystem, though possibly empty or nearly so. In any case, disk2 does need to be rebuilt, or a New Config with parity rebuild needs to be done.
  21. It's not. Answered many times in the letsencrypt thread. You can go directly to the correct support thread for any of your dockers by clicking on its icon and selecting Support.
  22. Just to make sure this isn't user error. Did you actually check the box to enable the format button? Not according to that screenshot.
  23. Might be worth noting that you have posted your QNAP question on the Unraid forum. Nobody in this thread has mentioned QNAP.