dotsonic

  1. A previously stable container recently started reporting as 'unhealthy' during startup. The internal docker log does not report any errors, and I'm able to access the container's active URL as expected. The version hasn't changed recently from what I can tell. Running a docker inspect command, I get the following output (truncated to show health status):

```json
[
    {
        "Id": "5c374170d7afed5811cc3251f27b12de173beee2c7899a102e2e47fe9f90c426",
        "Created": "2024-02-04T03:17:16.701460867Z",
        "Path": "./entrypoint.sh",
        "Args": [
            "resources/app/main.mjs",
            "--port=30000",
            "--headless",
            "--noupdate",
            "--dataPath=/data"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 4395,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2024-02-04T19:55:57.66182608Z",
            "FinishedAt": "2024-02-04T19:49:05.964040495Z",
            "Health": {
                "Status": "unhealthy",
                "FailingStreak": 16,
                "Log": [
                    { "Start": "2024-02-04T15:04:28.485001389-05:00", "End": "2024-02-04T15:04:28.52147859-05:00", "ExitCode": 1, "Output": "" },
                    { "Start": "2024-02-04T15:04:58.531513334-05:00", "End": "2024-02-04T15:04:58.581211192-05:00", "ExitCode": 1, "Output": "" },
                    { "Start": "2024-02-04T15:05:28.590011407-05:00", "End": "2024-02-04T15:05:28.634376391-05:00", "ExitCode": 1, "Output": "" },
                    { "Start": "2024-02-04T15:05:58.642992165-05:00", "End": "2024-02-04T15:05:58.678409309-05:00", "ExitCode": 1, "Output": "" },
                    { "Start": "2024-02-04T15:06:28.689170661-05:00", "End": "2024-02-04T15:06:28.725521568-05:00", "ExitCode": 1, "Output": "" }
                ]
            }
        },
```

What's the best way to troubleshoot, given that the container appears to be running without issue?
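Since every probe in that health log exits with code 1 and an empty "Output", one approach is to find out exactly which command the HEALTHCHECK runs and execute it by hand inside the container. A sketch using standard docker CLI options (the container ID is taken from the inspect output above; the curl probe on the last line is only an assumed example — substitute whatever command the first step actually reports):

```shell
# Show the healthcheck command the container is configured with
docker inspect --format '{{json .Config.Healthcheck}}' 5c374170d7af

# Show just the health section, including each probe's exit code and output
docker inspect --format '{{json .State.Health}}' 5c374170d7af

# Re-run the check by hand inside the container to see the real error.
# Example only: if the healthcheck were a curl against the app port,
# this would print the underlying failure the empty "Output" fields hide.
docker exec -it 5c374170d7af sh -c 'curl -fsS http://localhost:30000/ || echo "check failed with exit $?"'
```

A common cause when nothing else has changed is the image updating its healthcheck (or the check depending on a tool that was dropped from the image), so comparing the configured check against what actually runs inside the container is usually the fastest way to narrow it down.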
  2. Thanks for pointing this out - I wasn't aware the parity check didn't automatically correct errors. I now see that there's a checkbox next to the 'Check' button to 'write corrections to parity'. The scheduled task is set to 'Write corrections to parity disk'. At this point, do you recommend that I start a manual parity check with the 'write corrections to parity' option enabled? I'm in the process of creating a diagnostics file and will post it once complete. Appreciate the help!
  3. I built a new Unraid server earlier this year and dropped in 3x10TB and 2x6TB drives, with one of the 10TB drives acting as parity. The disks all formatted and worked properly. About a month later I experienced a power outage and re-ran parity after the server came back online. The parity check reported 2017 errors but reported all corrected. My first scheduled/automated parity check ran earlier this week (8/1) and completed, again reporting 2017 errors. Is there a step that needs to be performed to clear the errors? Or do I have another issue (e.g. a bad drive)? I've run the fast SMART tests against all drives and they all came back clean.
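For reference, the "fast" SMART tests correspond roughly to smartctl's short self-test; the long test reads the whole surface and is more likely to surface marginal sectors. A sketch assuming smartmontools is installed and /dev/sdb is one of the array disks (substitute your own device names):

```shell
# Overall health verdict plus the attribute table; pay particular attention
# to Reallocated_Sector_Ct, Current_Pending_Sector, and UDMA_CRC_Error_Count
smartctl -H -A /dev/sdb

# Kick off a self-test in the background: short takes minutes, long takes hours
smartctl -t short /dev/sdb
smartctl -t long /dev/sdb

# Review the self-test log once the test completes
smartctl -l selftest /dev/sdb
```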
  4. I'm in the process of evaluating UnRaid as a replacement for my aging Synology, where I currently host a few FoundryVTT campaigns. For the most part the initial setup is working, with a few minor issues that I feel are more likely UnRaid related:

     I can't seem to save configuration changes. For example, I would like to set a default campaign, but it doesn't stick. In my other Docker setup I simply set the CONTAINER_PRESERVE_CONFIG=true variable, which made admin.txt and the configuration persistent between restarts. Is that not possible with this implementation? I attempted to add the variable, but it doesn't seem to change the behavior.

     With my current setup, I have an SMB share that points to my /data directory so I can directly edit world files if necessary, and I was able to set the folder permissions to allow Read/Write for my account. Is that possible using UnRaid? It appears that the permissions on the /data folder (which I have pathed to /mnt/appstorage/foundryvtt) are reset whenever the server restarts.

     Is there a reason to use the APP over the standard docker in UnRaid, outside of the helpful pre-mapping of docker variables? I see above that the container is running as user 421. Is that why the /appdata/foundryvtt folder is listed as owned by UNKNOWN?

     Update 5/9/22: I decided to download the docker image directly from Docker Hub (https://hub.docker.com/r/felddy/foundryvtt) and configure it manually. That worked better for my system. Settings:

     /data: /mnt/user/appdata/foundryvtt/
     FOUNDRY_USERNAME: [my account name for foundryvtt.com]
     FOUNDRY_PASSWORD: [my password for foundryvtt.com]
     CONTAINER_PRESERVE_CONFIG: true
     FOUNDRY_ADMIN_KEY: [initial admin password for my FoundryVTT docker]
     FOUNDRY_UID: 99
     FOUNDRY_GID: 100
     Port: 30000 -> 30000

     The biggest win was configuring the container to run as the nobody account (UID: 99, GID: 100), which set the correct ownership for the /data path. Now settings are saved after cycling the container.

     I was also able to remove my foundryvtt.com username and password after pulling the initial install (it pulled the latest version automatically). Hope that helps others.
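For anyone recreating this setup outside the Unraid template, the settings above map to a plain docker run invocation along these lines (a sketch: the container name and image tag are my assumptions, the host path matches my setup, and the bracketed credentials are placeholders you must fill in):

```shell
# Manual equivalent of the Unraid template settings listed above.
# Replace the <...> placeholders with your own foundryvtt.com credentials.
docker run -d \
  --name foundryvtt \
  -p 30000:30000 \
  -v /mnt/user/appdata/foundryvtt/:/data \
  -e FOUNDRY_USERNAME='<foundryvtt.com account name>' \
  -e FOUNDRY_PASSWORD='<foundryvtt.com password>' \
  -e FOUNDRY_ADMIN_KEY='<initial admin password>' \
  -e CONTAINER_PRESERVE_CONFIG=true \
  -e FOUNDRY_UID=99 \
  -e FOUNDRY_GID=100 \
  felddy/foundryvtt:release
```

Running as UID 99 / GID 100 is what maps the container to Unraid's nobody:users account, which is why ownership on /data comes out right after restarts.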