Niklas

Members
  • Content Count

    204
  • Joined

  • Last visited

  • Days Won

    1

Niklas last won the day on November 11 2018

Niklas had the most liked content!

Community Reputation

31 Good

About Niklas

  • Rank
    Advanced Member

  • Gender
    Male
  • Location
    Stockholm/Sweden

  1. Just want to say that I used this script and it worked as expected. Thanks!
  2. I added some trackers using "Automatically add these trackers to new downloads", but that setting and the tracker list reset when the container restarts. In qBittorrent.conf before the restart:
     Bittorrent\AddTrackers=true
     Bittorrent\TrackersList=udp://tracker.example.com:669/announce\nudp://tracker2.example.com:699/announce
     After the container restart:
     Bittorrent\AddTrackers=false
     Bittorrent\TrackersList=
     Edit: This could of course be a problem with qBittorrent itself. Hmm.
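     A quick way to confirm the values really are being wiped by the restart is to read them straight out of the config before and after. A minimal sketch, assuming a linuxserver-style container named "qbittorrent" with the config at /config/qBittorrent/qBittorrent.conf (adjust name and path to your setup):
     # Read the tracker settings, restart, then read them again
     docker exec qbittorrent grep -E 'AddTrackers|TrackersList' /config/qBittorrent/qBittorrent.conf
     docker restart qbittorrent
     docker exec qbittorrent grep -E 'AddTrackers|TrackersList' /config/qBittorrent/qBittorrent.conf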
  3. I would guess this container is missing the required dependencies to make the built-in version work. I'm also guessing it has something to do with Alpine.
  4. I see lots of data written, and not only to the docker.img file and its loopback device. It has something to do with the cache in some way. In my later answers in that thread, I talk about mariadb causing 12-20GB of written data every hour. That's extreme for my light use of Nextcloud and Home Assistant (with mariadb as the db). My databases are like 100MB in total. The 12-20GB/h just disappears into some black hole. It's a silent SSD killer.
  5. Yes. I tried three different locations: /mnt/cache/appdata, /mnt/user/appdata (set to cache only) and /mnt/user/arraydata (array only). The first two locations generate that crazy 12-20GB/h. On the array, it was 10x+ less writing. The loop device also does some writing I find strange, yes. I wrote about this before noticing the high mariadb usage. I will read your bug report and answers. Edit: this is just from keeping an eye on mariadb specifically. Other containers writing to the appdata dir will probably also generate lots of wasted data, including to the loopback docker.img.
  6. loop2 is the mounted docker.img, right? I have it pointed to /mnt/cache/system/docker/docker.img. What did you change in your rc.docker?
  7. Running the MariaDB docker pointed to /mnt/cache/appdata/mariadb or /mnt/user/appdata/mariadb (Use cache: Only) generates LOTS of writes to the cache drive(s), between 15-20GB/h. iotop shows almost all of that writing being done by mariadb. When the databases are moved to the array, the writes go down a lot (measured with "iotop -ao"). I use MariaDB for light use of Nextcloud and Home Assistant, nothing else.
     This is iotop for an hour with the mariadb databases on the cache drive, /mnt/cache/appdata or /mnt/user/appdata with cache only. When /mnt/cache/appdata is used, the shfs processes show up as mysql(d?). Missing screenshot.
     This is iotop for about an hour with the databases on the array, /mnt/user/arraydata: still a bit much (given my light usage) but nowhere near the writing when on cache.
     I don't know if this is a bug in Unraid, btrfs or something else, but I will keep my databases on the array to save some SSD life. I will lose speed, but as I said, this is with very light use of mariadb. I checked three different ways to enter the path to the database location (/config) and let each sit for an hour, with a freshly started iotop between the different paths. To calculate the data written, I checked and compared the SMART value "233 Lifetime wts to flsh GB" for the SSD(s). Running mirrored drives. I guess other stuff writing to the cache drive, or to a share with cache set to Only, will also have unnecessarily high writes.
     Sorry for my rambling. I get like that when I'm interested in a specific area. I'm not a native English speaker, so please just ask if anything is unclear.
     Edit: My measurements:
     On /mnt/cache/appdata/mariadb (direct cache drive), 2020-02-08 22:02-23:04: 15 (!) GB written.
     On /mnt/user/arraydata/mariadb (user share on array only), 2020-02-08 23:04-00:02: 2 GB written.
     On /mnt/user/appdata/mariadb (Use cache: Only), 2020-02-09 00:02-01:02: 22 GB (!) written. Just ran this again to really see the difference, and look at it.
     On /mnt/user/arraydata/mariadb (array only, spinning rust), 2020-02-09 01:02-02:02: 4 GB written.
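     The measurement method above is easy to reproduce. A minimal sketch, assuming the cache SSD shows up as /dev/sdb (device name and the exact SMART attribute label vary by drive):
     # Accumulated I/O per process, only listing processes that actually do I/O
     iotop -ao
     # Note the lifetime-writes attribute before and after the test window
     smartctl -A /dev/sdb | grep -i 233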
  8. Change both PUID and PGID to 0 in the settings for the container.
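     In plain docker run terms (the Unraid template exposes these as the PUID and PGID variables), a sketch with placeholder container and image names:
     # Run the container as root by setting the linuxserver-style user/group IDs to 0
     docker run -d --name=some-container -e PUID=0 -e PGID=0 some/image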
  9. "php7" missing in the command. sudo -u abc php7 /config/www/nextcloud/occ db:convert-filecache-bigint
  10. For ONLYOFFICE: with Hub (18) you install the ONLYOFFICE connector, and after that you search for "Community Document Server" within Nextcloud - Apps and install that. https://apps.nextcloud.com/apps/documentserver_community This does not work for me. It only gives me "Community document server is not supported for this instance, please setup and configure an external document server". I tried the chmod for the files mentioned here https://github.com/nextcloud/documentserver_community/issues/10 with no luck. Guess I have to wait and see how this develops.
  11. And you are only changing the host port? Not the container port?
  12. If you use the link to open the UI from Unraid, you may need to change the port in the template (turn on advanced view). Edit: you only changed the host port, not the container port, right?
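      For context, only the host side of the mapping should change. A sketch of the equivalent docker run mapping, with example port numbers only:
      # Host port 8081 is remapped; the container port (8080 here, as an example)
      # must stay at whatever the application listens on inside the container.
      docker run -d -p 8081:8080 some/image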
  13. Add "--hostname=xxxxxx" in "Extra Parameters" for the container ("ADVANCED VIEW" on)
  14. The deconz plugin needs the requests library to work. Could it be added to the container please?
      2019-12-19 21:04:57.206 Error: (Deconz) Your pyton version miss requests library
      2019-12-19 21:04:57.206 Error: (Deconz) To install it, type : sudo -H pip3 install requests | sudo -H pip install requests
      Edit: running "pip3 install requests" in the container makes the plugin work. I guess that will hold until the container is re-installed?
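      The quick workaround as a one-liner, assuming the container is named "domoticz" (it will not survive a rebuild or update of the container):
      # Install the missing Python module inside the running container
      docker exec -it domoticz pip3 install requests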