Everything posted by tidusjar

  1. This sounds like the issue people hit when they run into SQLite database locking. But there should be some logs in the logs folder that mention Ombi cannot access the database.
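      A quick way to confirm is to search the log folder for SQLite's lock error. A minimal sketch, assuming a bare-metal install that writes logs to `/opt/Ombi/Logs` (that path is a placeholder; a docker install usually keeps them under `/config/Logs`, and the exact message text can vary by version):

      ```sh
      # Placeholder log path - point this at wherever your Ombi instance writes logs
      grep -ri "database is locked" /opt/Ombi/Logs/
      ```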
  2. All the users running a MySQL setup that I've spoken to run all three databases under MySQL, not a split. The migration guide will only migrate you from all 3 SQLite databases to MySQL, not just two. It might be worth reaching out on the Ombi Discord.
  3. As I mentioned previously, I wouldn't recommend running Ombi with different databases, but that's your choice. Regarding Radarr: check your log file, ensure you have enabled it in the settings, and ensure you have set a default availability.
  4. Are you sure Ombi is looking at the correct MySQL database? Maybe jump on the Ombi support Discord; it might be easier to help there.
  5. And it's happened again this morning: docker containers are failing to start (and have stopped). New drive, I guess? server-diagnostics-20220615-0849.zip
  6. I only replaced the SATA cable before. I've now switched SATA ports and swapped to a different power cable. It now seems to be working; I'll check over the next few days.
  7. Just replaced the cable and started back up; things are running for now. But like I mentioned, it keeps happening. Update: actually things are behaving quite strangely, like nothing can write to the cache now. server-diagnostics-20220614-1452.zip
  8. Any sort of tests I can do to see what's wrong?
  9. After restarting I now have the following; server logs are attached for this instance: server-diagnostics-20220614-1401.zip
  10. So this issue has been happening on and off for the past few months. What seems to happen is that all of a sudden my docker containers become unresponsive and stop working. Stopping or restarting the containers shows an error message in the UI saying something along the lines of 'Service failed to start', with no other information. Sometimes a reboot of the server fixes this; other times I'd need to disable docker, delete the `docker.img` and reinstall the containers (see the rebuild sketch below). Today this has happened again and it seems to be becoming more frequent, so I'm hoping someone is able to point me in the right direction of what I can do to resolve this. Server diagnostics are attached; it's currently at the `Docker Service failed to start.` stage, and at this point I'm probably going to have to delete the docker image and start again. If there is any other information I can provide please let me know. server-diagnostics-20220614-1334.zip
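      For reference, the "delete and reinstall" recovery described above looks roughly like this on a stock Unraid box; the `docker.img` path below is the default and may differ on your system (check Settings -> Docker):

      ```sh
      # 1. Stop the Docker service: Settings -> Docker -> Enable Docker: No -> Apply
      # 2. Delete the image (default location - verify yours first):
      rm /mnt/user/system/docker/docker.img
      # 3. Re-enable Docker (a fresh docker.img is created), then reinstall the
      #    containers from their saved templates via Apps -> Previous Apps
      ```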
  11. I think it's just that they have enforced the use of App Passwords for this sort of thing.
  12. There is, but I don't always see the messages on here. I suggest you jump on the Ombi Discord and we can try to assist you with your proxy issue.
  13. Hey, you have run into a limitation of SQLite. Ombi is a very DB-heavy application, and SQLite is not really meant for websites/apps. What I'd recommend is migrating Ombi over to MySQL (you can easily spin up a MariaDB docker container; see the sketch below). There is a guide on the docs website, also see here: https://docs.ombi.app/info/alternate-databases/#why-mysql
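      A minimal sketch of spinning one up; the container name, credentials, port and appdata path are placeholders to replace with your own values:

      ```sh
      docker run -d \
        --name mariadb-ombi \
        -e MYSQL_ROOT_PASSWORD=changeme \
        -e MYSQL_DATABASE=ombi \
        -p 3306:3306 \
        -v /mnt/user/appdata/mariadb:/var/lib/mysql \
        mariadb:latest
      ```

      Once it's up, point Ombi's `database.json` at it as described in the linked guide.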
  14. Oh wow, I unassigned the drive, re-assigned it and now it's mountable and everything seems normal?
  15. Done. Yeah, it was XFS; I had to use the `-L` option as it was complaining:

      ```
      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
      ALERT: The filesystem has valuable metadata changes in a log which is being
      destroyed because the -L option was used.
              - scan filesystem freespace and inode maps...
              - found root inode chunk
      Phase 3 - for each AG...
              - scan and clear agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 1
              - agno = 2
              - agno = 0
              - agno = 3
      Phase 5 - rebuild AG headers and trees...
              - reset superblock...
      Phase 6 - check inode connectivity...
              - resetting contents of realtime bitmap and summary inodes
              - traversing filesystem ...
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify and correct link counts...
      Maximum metadata LSN (213:549485) is ahead of log (1:2).
      Format log to cycle 216.
      done
      ```
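      For anyone finding this later: the run above corresponds to something like the command below from the console. `/dev/md4` is an assumption for this disk slot (the device name depends on the slot and Unraid version), and `-L` zeroes the metadata log, so only use it when `xfs_repair` refuses to run because it cannot replay the log, as in the output above:

      ```sh
      xfs_repair -L /dev/md4
      ```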
  16. Sync finished with 0 errors. For some reason there's no File System Check available for disk1 now; the option is completely missing, though it appears for all of the other disks. Here's a screenshot of Main.
  17. Ok, will do. As far as I'm aware disk4 is now in a correct state and no longer emulated, too.
  18. Filesystem check output:

      ```
      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
      ALERT: The filesystem has valuable metadata changes in a log which is being
      ignored because the -n option was used.  Expect spurious inconsistencies
      which may be resolved by first mounting the filesystem to replay the log.
              - scan filesystem freespace and inode maps...
              - found root inode chunk
      Phase 3 - for each AG...
              - scan (but don't clear) agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
      No modify flag set, skipping phase 5
      Phase 6 - check inode connectivity...
              - traversing filesystem ...
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify link counts...
      No modify flag set, skipping filesystem flush and exiting.
      ```

      And the Parity sync is now running.
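      That check output is from a read-only pass, i.e. something like the command below; again `/dev/md4` is an assumed device name for this slot (the Unraid GUI check passes `-n` for you by default):

      ```sh
      # -n = no-modify: report problems but write no changes
      xfs_repair -n /dev/md4
      ```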
  19. Ok, so replace disk1 then? How do I do a New Config for disk4? Is there a link on the wiki anywhere?
  20. I do not currently have any spares. Do we think disk1 is dead?