darckhart

Members
  • Content Count

    20
  • Joined

  • Last visited

Community Reputation

0 Neutral

About darckhart

  • Rank
    Member

Converted

  • Gender
    Undisclosed

  1. OK, so I was reading the help portion again, and, being a newb, I think it's a little confusing the way it is written. When creating a new share and presented with the 'use cache disk?' option, here's how I interpret the help messages: 1. No - makes sense. Don't bother using the cache disk at all, so don't touch it; all new files go straight to the array. 2. Yes - makes sense. Use the cache disk as intended (copy new files onto the cache temporarily; they will then be moved from cache to array at a scheduled time, emptying out the cache). 3. Only - doesn't make sense on first read, but makes sense after realizing that it uses the cache disk unlike a cache (files reside here, and only here, permanently), i.e., use the cache disk instead of the array. 4. Prefer - doesn't make sense on first read, but makes sense after considering option 3. Here it means prefer the cache disk over the array as long as space is available on the cache disk. (For a quick way to see which setting a share ended up with, see the config-check sketch after these posts.)
  2. OK, great, thanks. I re-read the help info and the 'Yes' option makes sense now. Regarding the settings, I checked and I do have logging enabled for mover, so I suppose it will show the actions when mover kicks in. Thanks for your quick help!
  3. I tried looking up the 'mover' command in the wiki but did not find what I was looking for: where is the log file, and what verbosity is it set to? Problem: I want to know whether mover worked, because it seems like it did not (it did not move files from my cache pool into my array). Maybe I have incorrect settings? Three days ago I created a new user share and set the 'use cache disk' setting to 'prefer', then copied 500 GB of data off my USB drive to the share and watched the cache pool fill up. I read that mover executes at 3:40 AM, so I figured I'd check on it in a couple of days. Today I browse to the "Shares" tab, check the location of this share, and it still says "cache". Next, I check the system log and use Ctrl-F to search for the word 'mover'; I do see the lines where it executes, which it has done faithfully every night, but basically it says 'mover: started' and the next line is 'mover: finished' one second later. Seems like it hasn't done anything in 3 days even though it's being triggered? Just for kicks, I tried to manually execute mover... same result: starts and finishes in 1 sec. Have I configured something incorrectly? I am on Version 6.7.0. (The syslog checks and manual run are sketched after these posts.)
  4. yea I figured it would be a box in a box in a box issue.
  5. Hi there, thanks for the hard work! Just installed today. Not sure if it's JF's problem or not, but trying to disable "remote connections" will bork the whole thing, i.e., I cannot access JF even on the LAN at that point. (dashboard, settings, advanced, uncheck "allow remote connections")
  6. Mini-heatwave (105 °F+) this week, so waiting until it cools down before testing.
  7. Thanks all! Great comments! This is an old dell rack server recommissioned for unraid duty. Reading through the posts, I feel my HDD temps are high-ish but fine, CPU temps are fine (parity check seems to be pretty low cpu use), ran memtest overnight and that came up with zero issues too. I'll also use the FCP plugin so I can see any logs it saves to the flash drive, though I ran a tail from a telnet and it never got any warning messages before the connection dropped when the server shut itself off. But PSU hmm that might be a good one to check. Mine has the redundant PSUs so could either one trigger a shutdown if it got hot? I suppose I'll have to purchase a spare and see.
  8. Yeah, I've got it all opened up and all fans are running. I even brought in a desk fan to point directly at things to make sure they're as cool as can be. Tried again this morning and it seemed to shut down at about the same place as last night: about 5 hrs in, roughly 3.33 TB checked of 6 TB total, so it looks like it gets stuck at about 50-55%. Sigh. HDDs were about 45-48 °C, which I guess makes sense after running parity for 5 hrs.
  9. Ah, well, nope. It didn't make it through; it failed sometime after 5 hrs. However, on the reboot, I saw that it got stuck at the "Checking NVRAM" stage.
  10. I thought it might be heat related as well, maybe some thermal shutoff, but it's chugging along fine now and nothing's changed since the last restart. The system had been powered down for a few weeks, since the 17th when I went on vacation, so maybe it just needs time to "get in the groove" again... stupid computers... I'll keep an eye on it, assuming it finishes the check tonight, and if nothing else happens, I guess we'll call it hardware gremlins.
  11. 36 min elapsed and still going this time, so not sure what the deal is...
  12. OK shoot it just shut down the whole system again!! 26 min elapsed, 335 GB in.
  13. Hi all, I need some help figuring out how to troubleshoot this. Twice now, I started a parity check on the array and, after about an hour or so (I wasn't really paying attention), I come back and find the entire system powered off. Upon reboot, it says it did not complete the parity check (duh), but I can't discern any reason why it would shut itself off. My first thought was to check the logs, but apparently logs aren't preserved through a power-off. I've started the check again and am "monitoring" progress through the web GUI. Hard drive temps look fine (mid-30s C), throughput looks fine (~215 MB/s), and the estimated time is in the ballpark of what it said when it worked a few weeks ago (8 hr). Any ideas what would cause it to shut down the whole system during a parity check? (A sketch for preserving logs across a shutdown follows after these posts.)
  14. Oh OK. I was editing "unraid share path" field instead of "host path 2" field on the template. Thanks.
  15. Hello support, I tried "Adding" this today and got the following error, so I can't even install it. Is something off in the default configuration so that some of these options won't go through? Thanks for any help. (A corrected-command sketch follows after these posts.)
      root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="resilio-sync" --net="bridge" -e TZ="America/Los_Angeles" -e HOST_OS="unRAID" -e "PUID"="99" -e "PGID"="100" -p 8888:8888/tcp -p 55555:55555/tcp -v "":"/sync":rw -v "/mnt/user/":"/unraid":rw -v "/mnt/cache/auto/appdata/resilio-sync":"/config":rw linuxserver/resilio-sync
      docker: Error response from daemon: Invalid volume spec ":/sync:rw": volumeinvalid: Invalid volume specification: ':/sync:rw'. See '/usr/bin/docker run --help'. The command failed.
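
For post 1: a minimal sketch of how to list which of the four cache modes each share ended up with. It assumes Unraid keeps per-share settings in /boot/config/shares/<share>.cfg with a shareUseCache key holding no/yes/only/prefer; that layout is an assumption, so verify the path and key name on your own system.

    # List each user share and its cache setting (assumed key: shareUseCache).
    for cfg in /boot/config/shares/*.cfg; do
        share=$(basename "$cfg" .cfg)
        mode=$(grep -o 'shareUseCache="[^"]*"' "$cfg" | cut -d'"' -f2)
        echo "$share: use cache = ${mode:-unset}"
    done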
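For post 3: a sketch of the same syslog checks and manual run from a shell. The /usr/local/sbin/mover path is an assumption based on a typical Unraid install; adjust it if your version keeps the script elsewhere.

    # Show what mover logged on its recent runs (the syslog lives in RAM).
    grep -i mover /var/log/syslog | tail -n 20

    # Trigger a manual run and watch its log lines as it works.
    /usr/local/sbin/mover &
    tail -f /var/log/syslog | grep -i mover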
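For post 13: since the in-RAM syslog is lost when the box powers off, here is a hedged sketch of a loop that snapshots the log and drive temperatures to the flash drive during a parity check, so there is something to inspect after an unexpected shutdown. The /boot/logs directory and the /dev/sdb-/dev/sdd device names are placeholders; substitute the array's actual devices.

    #!/bin/bash
    # Copy the syslog and record drive temps to flash once a minute
    # so the evidence survives a sudden power-off.
    mkdir -p /boot/logs
    while true; do
        cp /var/log/syslog /boot/logs/syslog-snapshot.txt
        for d in /dev/sdb /dev/sdc /dev/sdd; do   # placeholder devices
            temp=$(smartctl -A "$d" | awk '/Temperature_Celsius/ {print $10}')
            echo "$(date '+%F %T') $d ${temp:-n/a}C" >> /boot/logs/temps.txt
        done
        sleep 60
    done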
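For post 15: the Invalid volume spec ":/sync:rw" error comes from the empty host side of the -v "":"/sync" mapping, which lines up with the template-field mix-up mentioned in post 14. Below is a sketch of the same run with that host path filled in; /mnt/user/sync is only a placeholder for whichever share should back /sync.

    docker run -d --name="resilio-sync" --net="bridge" \
      -e TZ="America/Los_Angeles" -e HOST_OS="unRAID" \
      -e "PUID"="99" -e "PGID"="100" \
      -p 8888:8888/tcp -p 55555:55555/tcp \
      -v "/mnt/user/sync":"/sync":rw \
      -v "/mnt/user/":"/unraid":rw \
      -v "/mnt/cache/auto/appdata/resilio-sync":"/config":rw \
      linuxserver/resilio-sync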