
About Tweak3D


  1. Spoke too soon, found the issue. Cache Directories was causing the problem, oddly enough even in safe mode. I noticed an error referencing the old drive's share, removed it, and now the disk formatted as expected.
  2. No luck, same result :(. I may pull the drive, wipe it on a separate machine, then reinstall it and let Unraid do its thing again to see if that makes any difference, unless you have anything else I can check.
  3. I moved a drive that I was no longer using from Unassigned Devices to my array, as I urgently needed some additional space. The drive precleared without issue, and I've now tried twice to format it on the array. It claims it starts, but the Unraid GUI gets stuck in a funny state (I can't look at logs, it claims the array isn't started even though it is, and no stats update) and the drive never formats. I can reboot and everything goes back to normal, but the drive still isn't formatted. Not sure how else to proceed. I've attached my diagnostics, but I don't see anything familiar in the logs that indicates what the issue is. The disk I'm adding is disk 13, and I'm currently running 6.8.2. Thanks for any assistance/guidance anyone can provide. hoofs-diagnostics-20200314-0746.zip
  4. Good morning. Just wanting to confirm something before I move forward and cause myself pain. I need to add a new drive and would also like to upgrade an existing drive. Can I do this in one motion (shut down, add the new drive, replace the old drive, restart Unraid with the array stopped, update the drive assignments, and let it do its thing)? I am running dual parity but would prefer not to be in an unsafe state during this upgrade, so that's why I'm checking here first. Since both new drives need to preclear first, I imagine that if I don't preclear on my own, both drives will do the mandatory one-pass preclear, and only the replacement of the existing disk will rely on parity while the rebuild occurs. Is that correct? If I'm incorrect here, no issue; I can add the disk first and then upgrade the other disk separately. Just trying to save some time, as we all know this process can take a while. Thanks!
  5. I got my instance up and running behind my proxy and it works great. Also confirming that adding that variable to my template manually works perfectly! Just be aware that the value "false" is case sensitive, so keep it all lowercase, name the variable exactly as in the documentation, and it works as expected.
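For anyone else setting this up: disabling signups in bitwarden_rs comes down to one environment variable. A rough sketch of the equivalent command line is below (the container name, port, and paths are assumptions from a typical Unraid template, not from my setup); this is a config fragment to adapt, not something to run verbatim.

```shell
# Sketch only -- adapt names/paths to your own template.
# The value must be the lowercase string "false"; "False" or "FALSE" is not parsed.
docker run -d \
  --name=bitwarden \
  -e SIGNUPS_ALLOWED=false \
  -v /mnt/user/appdata/bitwarden:/data \
  -p 8086:80 \
  bitwardenrs/server:latest
```

In the Unraid GUI the same thing is done by adding a Variable to the container template with the key `SIGNUPS_ALLOWED` and the value `false`.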
  6. I've been fighting the same issue for weeks but didn't realize Chrome was to blame. Tried IE11 and, sure enough, it works without issue. I can also connect via the mobile app without issue, so something is definitely up. Also, interestingly enough, the Bitwarden extension for Chrome works fine, just not the web interface.
  7. So just to follow up: Bitwarden is now loading and I can get to the login and create-account screens, but hitting the submit button does nothing when the form is filled out. I can cancel without issue, and if I leave a required field blank it prompts me to fill in the corresponding empty fields. I've deleted and recreated the container, the Docker image, and my appdata/bitwarden directory. I don't see any errors in the Docker container log at all anymore, but I don't know where else to look for logging, as I'm unfamiliar with this Docker. Any assistance would be much appreciated. Thanks!
  8. OK, I deleted what I had, disabled direct I/O, and it is now installing without issue, but I cannot register an account. I get to the page, fill it out, click submit, and nothing happens. The browser doesn't even indicate that anything is occurring. No errors or anything, so that's kind of odd. Any thoughts on that?
  9. It's OK, I appreciate the help. I ended up copying the db.sqlite3 file from the Bitwarden directory on my UD to the appdata folder and overwrote what it had tried to make. I then remapped the Docker to point back at my cache drives, and now it works fine. I'll do some more testing, but it's a super odd issue.
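For reference, the recovery amounted to copying the good SQLite database over the one the container had partially created, then repointing the container's /data mapping at appdata. A minimal sketch is below; the real paths (a UD mount and the appdata share) are specific to my box, so they are simulated here with temp directories so the snippet is safe to run anywhere.

```shell
# Stand-ins for the real locations:
#   SRC ~ the Bitwarden directory on the unassigned device (working copy)
#   DST ~ /mnt/user/appdata/bitwarden (mapped to /data in the container)
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "sqlite data" > "$SRC/db.sqlite3"

# Overwrite whatever partial database the container created in appdata.
cp -f "$SRC/db.sqlite3" "$DST/db.sqlite3"
ls -l "$DST/db.sqlite3"
```

With the file in place, the container's /data path mapping was switched back to the appdata folder on the cache pool and the container restarted.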
  10. A new directory on the cache pool does not work; a new directory on a UD works fine as XFS and/or BTRFS. I checked the perms on the cache disk and that doesn't make a difference. I manually ran mover and moved off several hundred gigs of data to free up some blocks in a different part of the SSD; same issue. A filesystem check in maintenance mode came up clean, a BTRFS scrub came up clean, and both short and long SMART tests passed without issue. There are no other errors in any other Dockers, VMs, or Unraid that I can see, so it is something with my cache pool, but I'm not exactly sure what. Any other suggestions I can try before I start updating firmware?
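For anyone retracing these checks, the scrub and SMART tests above roughly correspond to the console commands below. These are command fragments that need root and real hardware, so don't run them blindly; the mount point and device name are examples, not my actual devices.

```shell
# Examples only -- substitute your own cache mount point and device.
btrfs scrub start /mnt/cache      # scrub the BTRFS cache pool
btrfs scrub status /mnt/cache     # check scrub progress/results

smartctl -t short /dev/nvme0n1    # short SMART self-test
smartctl -t long /dev/nvme0n1     # long SMART self-test (takes hours)
smartctl -a /dev/nvme0n1          # review attributes and self-test log
```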
  11. It could, but I'm not seeing any other issues with anything else: nothing in the logs, SMART tests, etc. I have dual enterprise Intel P3600 1.6TB SSDs as the cache pool where this is running. I guess I can run a filesystem check on those two disks, but I'd think an error of this nature would come up in various places, not with one very specific Docker image. I'm fairly new to troubleshooting Docker issues, though, so it could very well be. I'll take a look.
  12. I cannot get Bitwarden to load at all. I've deleted and reinstalled multiple times and I'm only using the default settings, so I'm not sure what exactly is going on. Any assistance would be much appreciated. The Docker never starts, and this is all I see in the logs.
     First run:
     JWT keys don't exist, checking if OpenSSL is available...
     OpenSSL detected, creating keys...
     Keys created correctly.
     thread 'main' panicked at 'Can't run migrations: QueryError(DatabaseError(__Unknown, "disk I/O error"))', libcore/result.rs:945:5
     note: Run with `RUST_BACKTRACE=1` for a backtrace.
     Subsequent runs (the same panic repeats each time):
     thread 'main' panicked at 'Failed to turn on WAL: DatabaseError(__Unknown, "disk I/O error")', libcore/result.rs:945:5
     note: Run with `RUST_BACKTRACE=1` for a backtrace.
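As the log itself suggests, a backtrace can help pin down where the panic originates. On Unraid that means adding `RUST_BACKTRACE=1` as an extra environment variable in the container template; the docker equivalent is sketched below as a config fragment (container name, paths, and image tag are assumptions, not taken from my template).

```shell
# Sketch only -- re-create the container with backtraces enabled,
# then read the log to see the full panic trace.
docker run -d \
  --name=bitwarden \
  -e RUST_BACKTRACE=1 \
  -v /mnt/user/appdata/bitwarden:/data \
  bitwardenrs/server:latest
docker logs bitwarden
```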
  13. Anyone? Performance is back to normal after mover stopped. The Dockers themselves don't seem to be impacted, but they are also very slow when accessing the array. It seems to be worst when the files being moved by mover are very small, as in my attached logs, where you can see it moving the Plex appdata back to my new cache drive from the array. Seems very odd, but I could really use some assistance in troubleshooting, as I'm not seeing much info out there.
  14. Never mind, I updated the original post; you were right, I had misclicked the file. The new one is attached. hoofs-diagnostics-20180621-1608 (1).zip
  15. When mover is running, my system slows to a crawl. For instance, loading the dashboard takes about 15 seconds, and changing to any other tab takes about the same. Accessing any share, regardless of location, is also very, very slow. My system is no slouch, so I'm not quite sure why this is occurring, as I'm not remotely close to maxing out any of my system's specs:
     2x Intel E5-2420v2
     96GB DDR3 1600 ECC triple channel
     1.6TB Intel P3600 NVMe PCIe x4 cache drive
     3x 8TB WD Red parity and data drives
     6x 3TB WD Red data drives
     3TB WD Red unassigned device
     Everything is perfect when mover isn't running, but then it slows to a crawl. This occurs with no Dockers or VMs running. What/where can I check to see what's going on with this? I've attached the diagnostics from while mover had been running for a while and the UI was very unresponsive. hoofs-diagnostics-20180621-1608 (1).zip