3doubled

Members
  • Content Count
    55

Community Reputation

4 Neutral

About 3doubled

  • Rank
    Newbie

  1. Try: sudo -u abc php /config/www/nextcloud/occ maintenance:mode --off (EDIT: the path was incorrect before, fixed it.) You can also change the maintenance value in config.php, located at /appdata/nextcloud/www/nextcloud/config/, and restart the docker.
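
     For reference, a minimal sketch of both approaches from the post above, assuming the linuxserver.io Nextcloud container is named "nextcloud" and the commands are run from the Unraid host shell:

        # Option 1: run the occ command from the host via docker exec
        docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ maintenance:mode --off

        # Option 2: edit config.php on the host and restart the container
        #   set  'maintenance' => false,  in the config.php under the path given above
        docker restart nextcloud
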
  2. The spikes during upload transfer seem to be related to MySQL performance. This person did some testing and was able to improve transfer speeds, especially for small files. The issue is that the biggest improvement comes from a change that is inherently less safe, so this is not a great option. Here is a 2.5 GB file being uploaded (screenshots): the default with innodb_flush_log_at_trx_commit = 1, with innodb_flush_log_at_trx_commit = 2, and with innodb_flush_log_at_trx_commit = 5. No other innodb setting change suggested in forums ha
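
     For context, innodb_flush_log_at_trx_commit accepts 0, 1, or 2: 1 (the default) flushes and syncs the log on every commit, while 2 only syncs about once per second, which is faster but can lose up to a second of transactions on a crash. A rough sketch of checking and changing it, assuming a linuxserver.io MariaDB container named "mariadb":

        # check the current value
        docker exec -it mariadb mysql -uroot -p -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"

        # change it at runtime (reverts when the container restarts)
        docker exec -it mariadb mysql -uroot -p -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"

        # to persist it, add the line below under [mysqld] in the container's custom.cnf, then restart:
        #   innodb_flush_log_at_trx_commit = 2
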
  3. I know this is an old thread, but I just wanted to add that excluding the \appdata folder in Dynamix Cache Directories and setting the max level depth to 8 greatly reduced the CPU spiking behavior for me.
  4. I know this is an old thread, but I just wanted to add that excluding the \appdata folder in Dynamix Cache Directories and setting the max level depth to 8 greatly reduced the CPU spiking behavior for me.
  5. Bingo, that was it. Thanks johnnie.black! Now that I know what is causing it I see there are other threads about this here: In this thread in particular, people recommend either disabling cache dirs or at least excluding shares with complex folder structures, such as \appdata. I'm going to give the latter a try first, but disabling it might not be a bad idea; I never spin my disks down, so cache dirs is probably not doing much anyway. [EDIT] Excluding \appdata seems to have done the trick. I also set the max folder dep
  6. Hi. While trying to diagnose performance issues related to my NextCloud docker, I came across a weird CPU utilization pattern. The image below was taken with all dockers stopped except Netdata: every couple of seconds, one core is utilized 100%. How can I identify exactly what process is doing this? Unfortunately, Netdata just labels this process as "other". Thanks. PS. Here is my post regarding my NextCloud performance issue, although it doesn't seem to be related to this weird CPU behavior. tower-syslog-2019
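
     One way to catch a short-lived spike like this, sketched on the assumption that the sysstat package is available on the host:

        # sample per-process CPU usage once per second for 60 seconds
        pidstat 1 60

        # alternatively, log the top CPU consumer once per second using only ps
        for i in $(seq 60); do ps -eo pcpu,pid,comm --sort=-pcpu | sed -n 2p; sleep 1; done
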
  7. Thanks for the suggestion ufo56. I had actually tried implementing all of your suggestions a few days ago, but the Redis changes crashed NextCloud and caused an "Internal Error" message in the docker UI (I realized my mistake and noted it below). This time I went back and added your changes individually. All testing was done on a gigabit LAN, either uploading to my nextcloud share, which writes to an SSD cache, or downloading to an SSD. The upload baseline is the following: I see peaks up to 200 - 260 Mbps (~30 MB/s) during upload, but transfer is not very s
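
     For anyone retrying the Redis piece, a hedged sketch of setting it through occ instead of hand-editing config.php (the container name "nextcloud" and a reachable Redis host named "redis" are assumptions; a typo in any of these values can reproduce the "Internal Error" described above):

        docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ config:system:set memcache.local --value '\OC\Memcache\Redis'
        docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ config:system:set memcache.locking --value '\OC\Memcache\Redis'
        docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ config:system:set redis host --value 'redis'
        docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ config:system:set redis port --value 6379 --type integer
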
  8. Hi all. I'm a new NextCloud user and I followed Spaceinvader One's great video tutorial (thanks Spaceinvader One!). I managed to set up NextCloud and the reverse proxy OK (it's functional), but I'm seeing speeds between KB/s and low MB/s when either uploading or downloading files to/from NextCloud on a gigabit local network. If I copy the same files using File Explorer in Windows, I hit 80 - 90 MB/s when uploading to the Unraid server shares directly (no cache). When I Google around, I see people reporting NextCloud speeds in the 80 - 90 MB/s range, so it seems something is w
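
     A quick way to rule the sync client in or out is to time a raw WebDAV transfer with curl; the URL, user, and test file below are placeholders:

        # upload, then download, reporting average throughput in bytes/s
        curl -u user:pass -T ./testfile.bin -s -o /dev/null -w 'upload: %{speed_upload} bytes/s\n' "https://cloud.example.com/remote.php/dav/files/user/testfile.bin"
        curl -u user:pass -s -o /dev/null -w 'download: %{speed_download} bytes/s\n' "https://cloud.example.com/remote.php/dav/files/user/testfile.bin"
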
  9. Thanks everyone for the help, especially aptalca. I found the backups and was able to restore to a previous date, and the library is now updating. Thankfully Plex was automatically creating those backups. I'll check out trakt.tv too. Cheers.
  10. Thanks. Is there no way to just fix the corruption? I don't have a backup stored. If not, then how could I get Plex to rebuild the database? Should I just delete the appdata and start again like I did last time? That's an OK fix, but kind of annoying since I lose all of the watched/unwatched info, which is what I'm hoping to avoid. Thanks.
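
     If anyone wants to try repairing in place before wiping appdata, a cautious sketch (the path assumes the linuxserver.io Plex container; newer Plex releases bundle their own "Plex SQLite" binary, so a stock sqlite3 may complain about custom collations; always work on a copy):

        DB="/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"
        docker stop plex                              # stop Plex before touching the database
        cp "$DB" "$DB.bak"                            # keep an untouched copy
        sqlite3 "$DB" "PRAGMA integrity_check;"       # report corruption, if any
        sqlite3 "$DB" ".dump" | sqlite3 rebuilt.db    # dump-and-reload sometimes yields a usable copy
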
  11. I hope everyone is having a great weekend. I'm having trouble with my Plex server docker where it refuses to update the library. I've added several files to the watched folders, but they are all ignored. This occurred around the time that my cache disk filled up, and I think it might be connected, as this happened a few months back and the only way I could solve the problem was to rebuild the Plex server. Thus far, I've tried the scan, optimize, clean, and empty trash options in the Plex server GUI. I've tried refreshing metadata (no luck so far). I've deleted the docker ima
  12. A quick update. I later booted the server and found 4 (!) disks missing. It turned out that all 4 missing disks were connected to my motherboard via SATA cables, while the drives connected via SAS-to-SATA cables from my Supermicro card were seemingly not affected. I figured it was either a SATA controller or a SATA cable issue. I tried replacing all of the SATA cables with older ones, and all of the drives appeared again, no issues! It wasn't a single cable, it was ALL of my cables. I replaced all of my SATA cables about 6 months ago because I had one bad cable and thought I could probably
  13. It is a Corsair TX650W, which has been powering my server without trouble for years.
  14. So after a long SMART test of my parity drive, no new errors were found. Otherwise, little new to report, other than Disk11 throwing the same bunch of link reset errors when being accessed. I've attached the latest diagnostic report. So how can I recover my array?
     - I don't feel like I can successfully rebuild the parity drive. If I attempt this, Disk11 will probably continue to throw link reset errors until Unraid boots the drive out of the array and/or the parity drive becomes invalid again.
     - I can RMA Disk11, but without a valid parity, I can't rebuild Disk11. I se
  15. So the reiserfsck --check test of disk11 returned fine (see attached). I ran a short SMART test of the parity drive and disk11. Disk11 is clean. For the parity drive, there was a "command timeout" row highlighted in yellow with a raw count of 1, but that doesn't strike me as worrisome (I've attached the SMART reports, although I guess they are included in the diagnostic I added below). After this, I decided to run a long SMART test on both drives, just to stress them a bit and see if any other errors might be produced. A couple minutes in I could hear the same disk click/retry sound. I
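
     For completeness, the command-line equivalents of the tests described in the last two posts, with placeholder device names (on Unraid, reiserfsck should be pointed at the md device with the array started in maintenance mode):

        # start a long (extended) SMART self-test, then read the results back later
        smartctl -t long /dev/sdX
        smartctl -a /dev/sdX

        # read-only ReiserFS check of disk 11 (the md11 numbering is an assumption)
        reiserfsck --check /dev/md11
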