Everything posted by FreeMan

  1. I used -f for a "fast" preclear, but I don't recall ever having used any other command-line options, and I don't recall ever having had this issue in the past. As a matter of fact, I just precleared & installed a new drive a few weeks ago and didn't run into this issue. I know preclear isn't necessary anymore, since the base OS will clear a new drive without taking down the whole array for hours while it happens, but I like having it as a handy disk-check utility for new drives. I know there are various theories on this; it's my preference.
  2. I just ran a preclear on a new drive (using binhex's preclear docker). After 40 hours it finished with no reported issues. I stopped my array, added the new drive, and started the array. Now it says that a clear is in progress. Why would it start clearing the drive again? Did the preclear somehow fail to properly write the correct signature to the disk?
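For what it's worth, the "cleared" signature lives in the drive's first sector (the MBR), so dumping that sector at least shows whether anything was written there. A hedged, read-only sketch, demonstrated on a scratch file instead of a real disk (I'm not claiming to know the exact signature bytes Unraid expects; /dev/sdX below is a placeholder):

```shell
# Demo on a scratch file rather than a real disk. On a live server you would
# read from the drive itself, e.g.:  dd if=/dev/sdX bs=512 count=1 | hexdump -C
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=512 count=1 2>/dev/null
# Canonical hex dump of the first sector; a written signature or partition
# table would show up as non-zero bytes here.
hexdump -C "$IMG"
rm -f "$IMG"
```

On a sector of all zeros, hexdump collapses the repeated lines to a single `*`, ending at offset 00000200 (512 bytes).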
  3. When doing a manual add, I did specify HTTP, not HTTPS.
  4. I can reach it via IP in a browser from my phone (though I get warnings about it being HTTP instead of HTTPS). If there are any rules blocking it in the phone, I'm certainly not aware of it, plus, the "Discover" method of adding it can find the server (by IP, I presume?). I honestly don't have a clue what might be blocking access to the server from the phone when the phone can clearly see that the server's there.
  5. hmmm... bizarre. The address bar pic is from my desktop machine. My phone cannot resolve nas.local in a browser window, saying "site cannot be reached". When I try adding the server manually by IP address, I get: It's attempting to convert the IP to a host address, it seems, then is failing to resolve the host address back to an IP.
  6. I deleted the server. Upon adding it back in (using auto) I got this message: However, nas.local resolves just fine in my browser: Gets me to the ControlR config page w/o issue
  7. I presume deleting the server is by pressing the circled red x in the corner of the server list. There is no response to that.
  8. For the last couple of months, the ControlR app on my phone (Android) has shown me my server, but I can't tap on the server to get any additional info about it, and it shows a red x in a circle next to it. I've not done any troubleshooting on this in particular, but the server's IP address hasn't changed in ages. The plugin is still running on the server. I have CA auto-update running, so the plugin should be the latest available (v2021.11.25), and I presume that my app is the latest (5.1.1), as I've got auto-update enabled on the phone too. I just tried clicking on the "Spin Up" option on the Servers list. It popped up a little box with a spinner for a while, but nothing happened on the server itself. Any recommendations on what to check?
  9. I may have done that. However, leaving a file on cache instead of on a diskX seems odd. This does seem to be a reasonable explanation, I suppose, though. I've learned to do a copy/delete instead of a move when I'm manually working with files in Krusader. I tend to avoid using Windoze for file management (somewhat) because it's a lot slower. I think I avoid most of those other situations, but certainly couldn't guarantee it. I guess that's why I'm finding this a bit perplexing. I'll just do a manual clean-up (I've got several other files in this situation, too, I think). Thanks for the insight.
  10. Nope, not a clue. That's why I asked. If I have a DVD rip of a movie, then get a Blu-Ray rip by the same name, the mover won't overwrite the older file with the newer one?
  11. Looking at my TV share info, I see this: Looking at it from a terminal session, I see:

      root@NAS:/mnt/cache/TV/Frankie Drake Mysteries/Season 04# ls -la
      total 4
      drwxrwxrwx 1 nobody users  20 Mar  2  2021 ./
      drwxrwxrwx 1 nobody users  18 Jan 26 19:26 ../
      -rw-rw-rw- 1 nobody users 331 Jan 27  2021 season.nfo
      root@NAS:/mnt/cache/TV/Frankie Drake Mysteries/Season 04# ls -la /mnt/disk5/TV/Frankie\ Drake\ Mysteries/Season\ 04/
      total 4
      drwxrwxrwx 2 nobody users  32 Jan 22 19:04 ./
      drwxrwxrwx 3 nobody users 124 Jan 26 19:31 ../
      -rw-rw-rw- 1 nobody users 331 Jan 27 07:16 season.nfo

      How is it that the second listing (on /mnt/disk5) didn't/doesn't get overwritten by the file residing on the cache drive when the mover runs? Disk5 is a reasonably full 8TB drive, but it's still got almost 700GB of free space - more than plenty to store a 331-byte file, and even then, it shouldn't matter, because the file in Cache should simply overwrite the file on the array. I can, and probably will, simply delete the file from the cache dir, but why does it seem that the mover isn't doing its job here?
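As I understand it, the mover skips any file that already exists on an array disk rather than overwriting it, which would explain the stranded cache copy. A hedged sketch of the manual cleanup, demonstrated on throwaway temp files rather than the actual /mnt/cache and /mnt/disk5 paths:

```shell
# Stand-in files for the demo; on the server these would be the real
# /mnt/cache/.../season.nfo and /mnt/disk5/.../season.nfo copies.
TMP=$(mktemp -d)
echo "nfo contents" > "$TMP/cache_copy"
echo "nfo contents" > "$TMP/array_copy"

# Only delete the cache copy when it is byte-identical to the array copy.
if cmp -s "$TMP/cache_copy" "$TMP/array_copy"; then
    rm "$TMP/cache_copy"
    echo "removed duplicate cache copy"
else
    echo "files differ - inspect before deleting anything"
fi
rm -rf "$TMP"
```

The `cmp -s` guard is the point: it keeps a careless delete from throwing away a cache copy that is actually newer or different from what's on the array.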
  12. It's been 7 years, is this still in your server? If so, how have temps been? How has it held up to drive changes? Would you buy it again? (Do they still make it?)
  13. I just picked up an IcyDock Fat Cage and installed it in my Zalman MS800 case. Slid right in with no problems (I long ago bent down the drive mount tabs to fit my old 5x3 cages in). I was able to use one of the case's quick locks to hold the dock in place instead of using the provided screws. This is a big full-size tower case with 10 front-accessible 5.25" bays, so there's plenty of room for the dock. It even allowed me to gain access to an additional 15-pin power connector and run it to the very top bay to plug in the last SSD that I'd installed but hadn't yet been able to power up. (Lack of 4-pin Molex connectors to adapt to 15-pin SATA, and I hadn't yet purchased a 15-pin extender.) I had a heck of a time getting the one SATA cable plugged in that goes on the MoBo side of the case, so when I have to remove the dock, I'll leave that plugged into the dock, and pull it from the MoBo, instead. I say "when" I have to remove the dock because I've just ordered a Noctua NF-B9 fan, because now I'm sitting next to a vacuum cleaner. The stock fan in this is loud! Also, I may have to return the whole thing since one of the drive trays was bent. The bottom of the tray curved into the drive. I had to flex the tray a bit to get the screw holes to line up with the drive, and it was still difficult to get the tray to slide into the dock. Because of this, the server wouldn't recognize the drive, no matter which slot in the dock it was plugged into. I put the drive into another tray and the server was most happy. I've contacted the seller to see if I can get a swap on just the tray or if I'm going to have to send the whole thing back. I haven't done a parity check yet, but in normal use (less than 24 hours since install), drive temps for the 3 drives that are in the dock are on par with the other drives, so I'm going to guess that they'll stay that way.
I've got loads of little bits of packing foam, so I may try cutting some filler blocks to put into the unused trays to see if that helps improve air flow.
  14. I've got two cache pools: Name: Apps - consists of a single SSD. Name: Cache - consists of a pool of 3 SSDs. The astute among you will see the issue here. I've already set my Apps pool cache setting to "Yes" (from "Prefer") so I can migrate data onto the array. (Involves stopping all dockers & the docker service. VM service isn't running.) Once I've got the data off the Apps and Cache pools, what's the best way to swap the names so I can have all my dockers live on the actual pool for some drive-failure resistance? I'm thinking:
      Rename "Apps" to "temp"
      Rename "Cache" to "Apps"
      Rename "temp" to "Cache"
      Set Apps cache setting back to Prefer
      Run mover
      Restart docker service & dockers.
      Does this make sense? Is there an easier way? Have I missed something?
  15. OK. I'll wait patiently. I'm pretty sure it was more than "seconds", more like "at least a minute". I know I'm an impatient fella, but it really was slower than "seconds". As a matter of fact, I'd waited a bit, then I typed up this question, and it still hadn't shown up. Could just be that my machine isn't the fastest thing out there. I'll be sure to be patient in the future. Thanks as always!
  16. Forest, meet trees. Sheesh. I did actually look at that, but it just didn't register. `If set to 'Yes' then if the device configuration is correct upon server start-up, the array will be automatically Started and shares exported.` I am set to "Yes". However, since the config wasn't correct, it didn't auto start. ------------------------------- OK, that small drama is resolved. However, I'm still curious what caused the server to recognize that the drive was there. Was it: 1) Passage of time? i.e. the server polls every couple of minutes looking for a drive to "magically" appear (expecting that there's a hot-swap cage and it might.) 2) I looked at the right setting somewhere that caused it to rescan drives and notice that the disk was now there? If it's the first, I now know to be patient and wait. If it's the second, I'd like to know what I looked at so I can trigger it intentionally the next time.
  17. I'm going to guess that no, I do not. However, I don't recall where that setting is, and a quick browse hasn't turned it up, so I can't confirm.
  18. huh... After poking through a variety of settings, the Main screen is now showing the drive is there. I would like to know if I did something (I looked at settings but didn't change anything) or if it's just a matter of time before it'll notice that the drive is now available.
  19. When the server booted, the array didn't start and it shows: So, if you would, please have a chat with my server and let it know what it's supposed to be doing. TBH, it does make sense that it should have started the array with a missing disk. However, it didn't, and here we are... As it turns out, the caddy I put that particular disk in was slightly bent fresh out of the box. Again - an issue to take up with the vendor. Is there a way to now get UNRAID to recognize that I've plugged the disk back in, other than rebooting the server?
  20. Sorry, I wasn't clear enough: The array never started because one drive was unrecognized on boot. I moved that drive to a different slot in the cage and it powered up. Is there a way, short of a reboot, of getting UNRAID to notice that the disk is now there?
  21. I just installed a new Icy Dock Fat Cage hot-swap 3x5. I put the same disks in it as were in a non-hot-swap cage. When I power it up, it seems that one of the slots is not receiving power (an issue to take up with the vendor). Because of this, the array didn't start because a drive was missing. I pulled the drive from the dead slot and put it into an empty slot (I'm only using 3 of the 5 slots right now), and the power light came on indicating that the cage recognized there is a drive there. How do I get UNRAID to recognize the drive is there without rebooting the server? That is, after all, the whole point of hot-swap cages (well, at least one of the points).
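For what it's worth, generic Linux (I can't promise the Unraid GUI picks it up) lets you ask each SCSI host adapter to rescan its bus, which is the usual reboot-free way to surface a hot-swapped drive. A hedged sketch, written as a dry run so nothing is actually poked; the commented-out line is the real rescan and needs root:

```shell
# Dry run: print the per-host scan files we would write to. On a live
# server, uncommenting the echo into "$host/scan" triggers the bus rescan.
for host in /sys/class/scsi_host/host*; do
    [ -e "$host" ] || continue               # glob didn't match: no SCSI hosts here
    echo "would rescan: $host/scan"
    # echo '- - -' > "$host/scan"            # the actual rescan (root required)
done
echo "rescan sweep complete"
```

The `- - -` wildcard means "any channel, any target, any LUN", so the host probes everything attached to it.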
  22. On a slightly more serious note... I just noticed that the log for this disk (from Unassigned Devices) has a multitude of these entries:

      Mar 3 10:51:16 NAS emhttpd: spinning down /dev/sdn
      Mar 3 11:21:17 NAS emhttpd: spinning down /dev/sdn
      Mar 3 11:51:18 NAS emhttpd: spinning down /dev/sdn
      Mar 3 12:21:19 NAS emhttpd: spinning down /dev/sdn
      Mar 3 12:51:20 NAS emhttpd: spinning down /dev/sdn
      Mar 3 13:21:21 NAS emhttpd: spinning down /dev/sdn
      Mar 3 13:51:22 NAS emhttpd: spinning down /dev/sdn
      Mar 3 14:21:23 NAS emhttpd: spinning down /dev/sdn
      Mar 3 14:51:24 NAS emhttpd: spinning down /dev/sdn

      Why would the OS be trying to spin down a disk every 30 minutes during a preclear run? (Yes, sdn is the device that I'm preclearing.)
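My assumption is that preclear's raw reads don't reset emhttpd's spin-down delay timer the way normal array I/O does, so the timer keeps firing on schedule. To gauge how often it's happening, you can just count the entries; shown here against a small sample log rather than the live /var/log/syslog:

```shell
# Sample log stands in for /var/log/syslog on the live server.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Mar 3 10:51:16 NAS emhttpd: spinning down /dev/sdn
Mar 3 11:21:17 NAS emhttpd: spinning down /dev/sdn
Mar 3 11:51:18 NAS emhttpd: spinning down /dev/sdn
EOF
# Count spin-down attempts for the device being precleared.
grep -c 'spinning down /dev/sdn' "$LOG"    # prints 3
rm -f "$LOG"
```

On the server itself the same one-liner against /var/log/syslog tells you whether the attempts line up exactly with the configured spin-down delay.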
  23. Duuude... This disk is awesome!!!! 🤣
  24. These do seem to be the proper steps. I'd suggest searching the main support forum, as I know this has been asked dozens of times at least. You may want to raise a separate question about that one. Also, make sure nothing else is writing to the disk, as that may affect DSP's ability to change permissions.
  25. Mine are standard SATA SSDs - maybe that's the difference. AIUI, there are some issues with NVMe devices here and there, but I don't know all the details. Maybe do some searches throughout the forum to see what you can turn up. Make sure you know which version of UNRAID they refer to, because I know support has been improving.