Tweak3D

Members
  • Posts: 21
  • Joined

  • Last visited

Converted

  • Gender: Undisclosed


Tweak3D's Achievements

Newbie (1/14)

Reputation: 1

  1. I've only ever seen it on forced shutdowns previously, and my config has been mostly unchanged for the last several revisions of Unraid. I only stopped looking into it because others reported the same issue and most said it was likely a bug. I see now that, for those of us impacted, it was also called out that the logs showed the system wasn't waiting for the disks to shut down properly, and that toggling the disk time-out field seemed to correct it (post for reference: Parity check running when starting array every time - General Support - Unraid). That has corrected the issue for me as well: I changed the value to 100, saved, changed it back to 90 (what it was previously), and it looks like I'm good to go now (rough sketch for checking the value below). Hopefully this helps if others are having the same issue.
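     If anyone wants to confirm what their box is actually set to before and after toggling the field, here is a minimal sketch. It assumes Unraid keeps array settings in /boot/config/disk.cfg as KEY="value" pairs and that the time-out lives under a key like shutdownTimeout; both the path and the key name are my assumptions, so verify them on your own flash drive.

     # Minimal sketch (my assumptions): Unraid stores array settings in
     # /boot/config/disk.cfg as KEY="value" pairs, with the shutdown
     # time-out under a key named shutdownTimeout. Verify both yourself.
     from pathlib import Path

     CFG = Path("/boot/config/disk.cfg")  # assumed config location

     def read_setting(key: str):
         """Return the quoted value for `key` from disk.cfg, or None."""
         for line in CFG.read_text().splitlines():
             if line.startswith(f'{key}="'):
                 return line.split('"')[1]
         return None

     print("shutdownTimeout =", read_setting("shutdownTimeout"))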
  2. I've been having this issue since 6.9.1, where a reboot or a shutdown causes Unraid to force a parity check when it comes back online. I saw that lots of others (but not everyone) were having this issue. Just reporting that it does not appear to be fixed in 6.9.2 :(. If there are any details I can provide to help get it fixed, I'd be more than happy to!
  3. Spoke too soon; I found the issue. Cache Directories was causing it, oddly enough even in safe mode. I noted an error referencing the old drive's share, removed it, and now the disk formatted as expected.
  4. No luck, same result :(. I may pull the drive, wipe it on a separate machine, then reinstall it and let Unraid do its thing again to see if that makes any difference, unless you have anything else I can check.
  5. I moved a drive that I was no longer using from Unassigned Devices to my array, as I needed some additional space urgently. The drive precleared without issue, and I've now tried twice to format it in the array: it claims the format starts, but the Unraid GUI gets stuck in a funny state (I can't look at logs, it claims the array isn't started when it is, and no stats update) and it never formats the drive. I can reboot and everything goes back to normal, but the drive still isn't formatted. Not sure how else to proceed. I've attached my diagnostics, but I don't see anything in the logs that I'm familiar with that indicates what the issue is. The disk I'm adding is disk 13. Also, I am currently running 6.8.2. Thanks for any assistance/guidance anyone can provide. hoofs-diagnostics-20200314-0746.zip
  6. Good morning, just wanting to confirm something before I move forward and cause myself pain. I need to add a new drive and would also like to upgrade an existing drive. Can I do this in one motion (shut down, add the new drive, replace the old drive, restart Unraid with the array stopped, update the drive assignments, and let it do its thing)? I am running dual parity, but would prefer not to be in an unsafe state during this upgrade, so that's why I'm checking here first. Since both new drives need to be cleared first, I imagine that if I don't preclear on my own, both drives will do the mandatory one-pass clear, and only the replaced disk will require using parity while that and the drive rebuild occur. Is that correct? If I am incorrect here, no issue; I can add the disk first and then upgrade the other disk separately. I'm just trying to save some time, as we all know this process can take a while. Thanks!
  7. I got my instance up and running behind my proxy and it works great. Also confirming that adding that variable to my template manually works perfectly! Just be aware that the value "false" is case sensitive, so keep it all lower case, name the variable with the name from the documentation, and it works as expected (toy sketch of the idea below).
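     To illustrate what "case sensitive" means here, a toy Python sketch of the kind of strict boolean parsing involved; EXAMPLE_FLAG is a made-up name, so use the actual variable name from the documentation.

     # Toy illustration only: strict, case-sensitive parsing of a boolean
     # environment variable. EXAMPLE_FLAG is a hypothetical name.
     import os

     raw = os.environ.get("EXAMPLE_FLAG", "false")
     if raw not in ("true", "false"):
         # "False", "FALSE", etc. are rejected rather than coerced
         raise ValueError(f"EXAMPLE_FLAG must be 'true' or 'false', got {raw!r}")
     flag = (raw == "true")
     print("flag =", flag)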
  8. I've been fighting the same issue for weeks but didn't realize Chrome was to blame. I tried IE11 and, sure enough, it works without issue. I can also connect via the mobile app without issue, so something is definitely up. Also, interestingly enough, the Bitwarden add-in for Chrome works fine, just not the web interface.
  9. So just to follow up: Bitwarden is now loading and I can get to the login and create-account screens, but hitting the submit button doesn't work when the form is filled out. I can cancel without issue, and if I leave a required field blank it prompts me to fill in the corresponding empty fields. I've deleted and recreated the container and the Docker image, and deleted my appdata/bitwarden directory. I don't see any errors in the Docker container log at all anymore, but I don't know where else I can look for logging, as I am unfamiliar with this Docker. Any assistance would be much appreciated. Thanks!
  10. OK, I deleted what I had and disabled Direct I/O, and it is now installing without issue, but I cannot register an account: I get to the page, fill it out, click submit, and nothing happens. The browser doesn't even indicate that anything is occurring. No errors or anything, so that's kind of odd. Any thoughts on that?
  11. It's OK, I appreciate the help. I ended up copying the db.sqlite3 file from the Bitwarden directory on my UD to the appdata folder and overwrote what it had tried to make (rough copy sketch below). I then remapped the docker to point back at my cache drives, and now it works fine. I'll do some more testing, but it's a super odd issue.
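      For anyone repeating this, a minimal sketch of the copy step. Both paths are examples based on my assumptions about where UD and appdata are mounted, so adjust them to your own setup, and stop the container first so the database isn't written to mid-copy.

      # Minimal sketch of the copy described above. Both paths are assumed
      # examples; stop the Bitwarden container before copying the database.
      import shutil
      from pathlib import Path

      src = Path("/mnt/disks/UD_DRIVE/bitwarden/db.sqlite3")  # assumed UD mount
      dst = Path("/mnt/user/appdata/bitwarden/db.sqlite3")    # assumed appdata path

      dst.parent.mkdir(parents=True, exist_ok=True)
      shutil.copy2(src, dst)  # copy2 preserves timestamps and permissions
      print(f"copied {src} -> {dst}")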
  12. A new directory on the cache pool does not work; a new directory on UD works fine as XFS and/or BTRFS. I checked the perms on the cache disk, and that doesn't make a difference. I manually ran mover and moved off several hundred gigs of data to free up some blocks in a different part of the SSD: same issue. An FS check in maintenance mode came up clean, and a BTRFS scrub came up clean. Both short and long SMART tests passed without issue. There are no other errors in any other dockers, VMs, or Unraid that I can see, so it is something with my cache pool, but I'm not exactly sure what. Any other suggestions I can try before I start updating firmware?
  13. It could, but I'm not seeing any other issues with anything else: nothing in the logs, SMART tests, etc. I have dual enterprise Intel P3600 1.6TB SSDs as the cache pool where this is running. I guess I can run a check on the FS on those two disks, but I'd think that an error of this nature would come up in various places and not with one very specific Docker image. I'm fairly new to troubleshooting Docker issues though, so it could very well be. I'll take a look.
  14. I cannot get Bitwarden to load at all. I've deleted and reinstalled multiple times and I'm only using default settings, so I'm not sure what exactly is going on. Any assistance would be much appreciated. The Docker never starts, and this is all I see in the logs (a quick way to test the WAL error outside the container is sketched below):

      First run:
      JWT keys don't exist, checking if OpenSSL is available...
      OpenSSL detected, creating keys...
      Keys created correctly.
      thread 'main' panicked at 'Can't run migrations: QueryError(DatabaseError(__Unknown, "disk I/O error"))', libcore/result.rs:945:5
      note: Run with `RUST_BACKTRACE=1` for a backtrace.

      Subsequent runs (the same panic repeats three times):
      thread 'main' panicked at 'Failed to turn on WAL: DatabaseError(__Unknown, "disk I/O error")', libcore/result.rs:945:5
      note: Run with `RUST_BACKTRACE=1` for a backtrace.
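      For what it's worth, the "Failed to turn on WAL" panic can be checked outside the container. Here is a minimal sketch that tries the same PRAGMA against a scratch database on the suspect storage; DB_PATH is an example based on my assumption about the appdata location, so point it at yours.

      # Minimal sketch: try to enable SQLite's WAL journal mode on a scratch
      # file living on the suspect storage, mimicking what the container
      # does at startup. DB_PATH is an assumed example location; adjust it.
      import sqlite3

      DB_PATH = "/mnt/user/appdata/bitwarden/waltest.sqlite3"

      conn = sqlite3.connect(DB_PATH)
      try:
          mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
          print("journal_mode is now:", mode)  # expect "wal" on healthy storage
      except sqlite3.OperationalError as exc:
          # A "disk I/O error" here points at the filesystem/mount, not the image
          print("failed to enable WAL:", exc)
      finally:
          conn.close()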
  15. Anyone? Performance is back to normal now that mover has stopped. The Dockers themselves don't seem to be impacted, but they are also very slow when accessing the array. It seems to be worst when the files being moved by mover are very small, as in my attached logs, where you can see it moving the Plex appdata back to my new cache drive from the array. It seems very odd, but I could really use some assistance in troubleshooting, as I'm not seeing much info out there.