Posts posted by Vaseer

  1. I would like to set up Nextcloud with an SSD cache (for small, frequently accessed/changed files) and an HDD in the array for large files.

    I have searched this forum and used my Googling skills, but didn't find anything for my planned configuration.

     

    My current Nextcloud configuration uses 2x 6TB HDDs, mounted as unassigned devices in BTRFS RAID1.

    After ~2 years, only 10% of the available space is used, mostly by family pictures and other large, rarely accessed files.

     

    My plan is to use 2x 1TB (or 2TB) SSDs in a RAID1 cache pool and 1x 6TB HDD in the array.

    • SSD for small, frequently accessed/changed files.
    • HDD for large, rarely accessed files.

     

    Is this doable with native unRAID/Nextcloud options?
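    To make the idea concrete, the kind of mapping split I have in mind would be something like this (just a sketch, the share names are made up; whether unRAID's cache settings can actually route files this way is exactly my question):

    /data <--> /mnt/user/nextcloud (share using the SSD cache pool, for small/frequent files)
    /data/archive <--> /mnt/user/nextcloud-archive (array-only share, for large/rarely accessed files)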

  2. I would like to double-check my calculations for a planned UPS.

     

    My unRAID server:

    • PSU: Seasonic X-Series X-650 KM3 650W ATX 2.3 (SS-650KM3)
    • CPU: AMD Ryzen 7 2700
    • GPU: none
    • Drives:
      • M.2 SSD: 1
      • SATA SSD: 2
      • HDD: currently 10 (max 15)
    • HBA: LSI 9211-8i

     

    Use case: NAS storage for Kodi (no transcoding done on unRAID) and personal files. Docker and VM platform.

    VMs: currently none; planned are OPNsense and optionally 1 or 2 more.

    Docker: Nextcloud, UniFi controller, YT downloader, PiHole... nothing too power hungry.

     

    My unRAID server is located in a rack with other network gear, some of which I would also like to connect to the UPS.

    Peak power consumption of all devices in the rack, measured over the past couple of months (a period that included several parity checks), is 255W, and the power meter reports a power factor of 0.8.

     

    I am planning to add up to 5 more HDDs and 2 or 3 VMs, so power draw will increase, but I don't think peak power will exceed 300W, since not all devices will be connected to the UPS.

     

    I was looking for a UPS rated for 500W (or more) and, if my calculation is correct, 625VA. If I understand correctly, this is the maximum power draw the UPS can handle across both the battery-protected and unprotected outputs?
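    For reference, the VA figure above comes from dividing the wattage target by the measured power factor; a quick sketch of the arithmetic (using the numbers from this post):

    awk 'BEGIN { printf "minimum apparent power: %.0f VA\n", 500 / 0.8 }'   # 500 W / 0.8 PF = 625 VA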

     

    I found the APC BX1600MI-GR and it looks interesting, especially with the higher W/VA rating and longer run time.

    My plan is to run unRAID on the UPS for no longer than 3-5 minutes. If power isn't back within that period, it's either a major problem or a planned power disconnect.

     

    I'd appreciate any feedback, so I know whether I am looking in the right direction.

    Please let me know if I missed anything.

  3. For a couple of days I have had a problem with Duplicati backing up my data to Office 365 OneDrive. I am getting this error in the Duplicati UI:

    Failed to connect: Forbidden: Forbidden error from request https://graph.microsoft.com/v1.0/me/drive/root:/Duplicati System.Net.HttpWebResponse { "error": { "code": "accessDenied", "message": "Database Is Read Only", "innerError": { "code": "serviceReadOnly", "date": "...", "request-id": "...", "client-request-id": "..." } } }

     

    and this via the email notification:

    Failed: Forbidden: Forbidden error from request https://graph.microsoft.com/v1.0/me/drive/root:/Duplicati:/children
    System.Net.HttpWebResponse
    {
      "error": {
        "code": "accessDenied",
        "message": "Database Is Read Only",
        "innerError": {
          "code": "serviceReadOnly",
          "date": "...",
          "request-id": "...",
          "client-request-id": "..."
        }
      }
    }
    Details: Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException: Forbidden: Forbidden error from request https://graph.microsoft.com/v1.0/me/drive/root:/Duplicati:/children
    System.Net.HttpWebResponse
    {
      "error": {
        "code": "accessDenied",
        "message": "Database Is Read Only",
        "innerError": {
          "code": "serviceReadOnly",
          "date": "...",
          "request-id": "...",
          "client-request-id": "..."
        }
      }
    }
      at Duplicati.Library.Main.BackendManager.List () [0x00049] in <e60bc008dd1b454d861cfacbdd3760b9>:0
      at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun (Duplicati.Library.Main.Database.LocalDatabase dbparent, System.Boolean updating, Duplicati.Library.Utility.IFilter filter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+NumberedFilterFilelistDelegate filelistfilter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+BlockVolumePostProcessor blockprocessor) [0x00084] in <e60bc008dd1b454d861cfacbdd3760b9>:0
      at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.Run (System.String path, Duplicati.Library.Utility.IFilter filter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+NumberedFilterFilelistDelegate filelistfilter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+BlockVolumePostProcessor blockprocessor) [0x00037] in <e60bc008dd1b454d861cfacbdd3760b9>:0
      at Duplicati.Library.Main.Operation.RepairHandler.RunRepairLocal (Duplicati.Library.Utility.IFilter filter) [0x000ba] in <e60bc008dd1b454d861cfacbdd3760b9>:0
      at Duplicati.Library.Main.Operation.RepairHandler.Run (Duplicati.Library.Utility.IFilter filter) [0x00158] in <e60bc008dd1b454d861cfacbdd3760b9>:0
      at Duplicati.Library.Main.Controller+<>c__DisplayClass18_0.<Repair>b__0 (Duplicati.Library.Main.RepairResults result) [0x0001c] in <e60bc008dd1b454d861cfacbdd3760b9>:0
      at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0011c] in <e60bc008dd1b454d861cfacbdd3760b9>:0

     

    Duplicati is on the latest version (updated over the weekend; I don't have the exact version number or access to the server right now).

    The problem occurs only with Office 365 OneDrive. Other destinations (e.g. Mega) are working fine.

     

    From the error text I think the problem is on the OneDrive side, but I am posting here in case anyone has had a problem like this before.

    Regarding OneDrive:

    - I can log in to OneDrive and upload data manually.

    - I have regenerated the login key/token for Duplicati and the problem persists.
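    One extra check I can still try (a sketch; the access token placeholder is hypothetical) is to call the same Graph endpoint from the error directly, to see whether the accessDenied/serviceReadOnly response also comes back outside of Duplicati:

    curl -H "Authorization: Bearer <access-token>" "https://graph.microsoft.com/v1.0/me/drive/root:/Duplicati:/children"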

     

    Any help and advice is appreciated!

    Thank you!

  4. On 1/17/2021 at 3:17 PM, tillkrueger said:

    I am also now getting the same error message as you, discojon, when starting the WebGUI of my Nextcloud Docker:

    This version of Nextcloud is not compatible with > PHP 7.3.
    You are currently running 7.4.14


    would you be able to let me know how exactly I can get it to work again?

    Where exactly is this config folder where you edited versioncheck.php?

    I had the same problem. My solution was a manual upgrade to version 18.0.13 (from 17.0.x).

     

    I did as described in the first post of this thread, option 3: manual upgrade using occ.

    All commands are the same, except

    docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest.tar.bz2 -P /config

    must be replaced with

    docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest-18.tar.bz2 -P /config

     

    The reason for this is that you can't skip major versions (i.e. if your Nextcloud instance is on version 17.x, you must first upgrade to 18.x before going any further).
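    After extracting the new release and copying config.php back, the upgrade itself is run with occ from inside the container, roughly like this (the abc user and the /config/www/nextcloud path are assumptions about the linuxserver container layout, so adjust to your setup):

    docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ upgrade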

     

    Hope it helps.

  5. On 11/17/2019 at 10:38 PM, Vaseer said:

    Today I was uploading some files to my Nextcloud and one of them was 6 GB in size.
    While uploading this file, I got an email notice from the unRAID server: "Docker image disk utilization of 72%". When the upload finished, I got a new email notice: "Docker image disk utilization returned to normal level".

     

    This made me curious about how file transfers, or rather file writes, to Nextcloud storage actually work.
    I always thought that files are written directly to the Nextcloud HDDs. But it seems that files are initially stored in the Nextcloud Docker instance, which is on the cache SSD, and only then written to the Nextcloud storage HDDs.

     

    Is this the proper way for file transfers to Nextcloud, or did I do something wrong with my configuration?

    I didn't find any information (I searched the unRAID forum and Googled it) to answer this question, so I am asking again and adding some new test results.

    Uploading a ~5GB test file to Nextcloud over the local network:

    Before upload:

    [Screenshots: BeforeUpload 1-3]

     

    A couple of seconds after the upload finished:

    [Screenshots: SecondsAfterFinishedUpload 1-3]

     

    Is this the normal/correct way of uploading files to Nextcloud?

    NC container mappings:

    /data <--> /mnt/disks/nextcloud/nextcloud-data/

    /config <--> /mnt/cache/docker/appdata/nextcloud

     

    unRAID version: 6.6.6

     

    Edit: Found it!
    This only happens when I upload files from my Fedora PC via a webdav/dav connection (davs://[email protected]/remote.php/webdav).
    If I upload a file via the browser, the docker.img size doesn't change.

    Is this a bug or expected behavior?
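    In case it helps anyone hitting the same thing: my working theory (an assumption, not something I have confirmed) is that the plain WebDAV upload gets buffered in a temp directory inside the container, which lives in docker.img, while the browser upload takes a different path. If that is the cause, mapping the container's /tmp out to a host path should avoid the docker.img growth, e.g. an extra mapping like:

    /tmp <--> /mnt/cache/docker/appdata/nextcloud/tmp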

  6. I have the mappings for all containers configured, and this happens when I upload a ~5 GB file to Nextcloud:

    Before upload:

    [Screenshots: BeforeUpload 1-2]

     

    Seconds after the upload finished:

    [Screenshots: SecondsAfterFinishedUpload 1-2]

     

    After 10-20 seconds the values return to the same state as before the upload.

     

    Is this container-specific (if so, I will ask in the NC thread), or could something be wrong with my Docker configuration?
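    To narrow it down, one check that can be run from the unRAID console (a sketch; it lists each container's writable-layer size, which is what lives inside docker.img) is:

    docker ps -s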

    My unRAID version is 6.6.6

  7. To clarify the question: I am using the Transmission container, which uses the mapped volume /downloads <--> /mnt/disks/downloads for transferred data.
    docker.img is on the SSD cache drive; /mnt/disks/downloads is an unassigned HDD.
    In Transmission I see a cumulative DL/UL data size of ~10 TB which, if my calculations are correct, corresponds to the cache SSD S.M.A.R.T. attribute "246 Total host sector write", currently 21877021800. SSD sector sizes: 512 bytes logical, 4096 bytes physical.
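    For reference, the conversion I did (a quick sketch using the 512-byte logical sector size reported by SMART):

    awk 'BEGIN { printf "%.1f TB\n", 21877021800 * 512 / 1e12 }'   # ~11.2 TB written, in the same ballpark as Transmission's ~10 TB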

     

    In addition to Transmission's data, I also saw the docker.img size increase when I uploaded some large files (a couple of GB) to Nextcloud.

  8. 39 minutes ago, Abigel said:

    Hi,

    is there an actual path to upgrade Nextcloud to 17 via the CLI?

    I have only found old manuals that didn't work

    This still works.

    Instead of

    docker exec -it nextcloud bash

    you can use the command

    docker-shell

    and you will get a list of all Docker containers. Press the number shown next to the Nextcloud Docker and you will get a shell in the Nextcloud container.

    All other commands are still the same for this version of NC.

  9. Today I was uploading some files to my Nextcloud and one of them was 6 GB in size.
    While uploading this file, I got an email notice from the unRAID server: "Docker image disk utilization of 72%". When the upload finished, I got a new email notice: "Docker image disk utilization returned to normal level".

     

    This made me curious about how file transfers, or rather file writes, to Nextcloud storage actually work.
    I always thought that files are written directly to the Nextcloud HDDs. But it seems that files are initially stored in the Nextcloud Docker instance, which is on the cache SSD, and only then written to the Nextcloud storage HDDs.

     

    Is this the proper way for file transfers to Nextcloud, or did I do something wrong with my configuration?

  10. I have set the transcoding temporary path to /ramtranscode (mapped to /tmp in the Docker config) and restarted the Emby Docker and the Kodi client, but the problem still persists.
    I have changed one user's Vero 4K configuration to use Emby in native mode (not the add-on), and all video files play fine.

     

    I was reading around and learned that transcoding is only done when the Emby add-on on the client (Vero, Kodi) tells the server that it cannot play the original file.
    Most of the "problematic" files are MKV HEVC. The most interesting part is that in native mode (with or without using Emby on the client) all video files play fine. Where/how does the Emby add-on get the information that the client can't play the original file?

     

    I can't say for certain, but something must have changed in Emby (server or client add-on), because I noticed the same problem with video files that were working fine 1 or 2 months ago.

     

    Setting "Enable hardware acceleration when available" to No doesn't resolve the problem.
    For all Emby users (on server) I have same Media Playback configuration:
    YES - Allow media playback
    NO - Allow audio playback that requires transcoding
    NO - Allow video playback that requires transcoding
    YES - Allow video playback that requires conversion without re-encoding

     

    If my information is correct, there is no way to completely disable transcoding in Emby (to always stream the original file)?

  11. I am having problems with playback via Emby. At random intervals (every 5 to 10 minutes) the video stutters, then video and sound jump back about 30 seconds and keep playing. Subtitles keep displaying as if nothing had happened and no longer match the video.
    This does not happen with all videos, just some of them (seemingly at random), mostly movies (TV shows are OK, for now). If I restart the video from the beginning, the problem repeats, but always at a different point.
    Playback is done via Kodi on different devices (Vero, Chromebox, Windows PC, Ubuntu PC). The devices were on different Kodi versions (17.x and 18.x) and the problem appeared on all of them. I have updated Kodi on all devices to the latest version, but the problem persists.
    On Kodi I am using the Emby add-on with "playback via add-on" (not native mode).
    If I play the same videos (a copy of the same file) via Kodi in native mode, everything works fine.
    The problematic videos are on different disks in the array; parity check and SMART show no errors. If I play a problematic video via VLC on a PC (the same file that is used via Emby), it works fine.
    The EmbyServer Docker is on the latest version. I have restarted the Docker and the unRAID server, but nothing helps.
    I got information from other users that this problem started to appear about 2 weeks ago.

    Any suggestions on what I should check/try? Do you need any logs?

  12. Transmission, Extra Parameters not applied to container.

    I am trying to set the Extra Parameters --dns=208.67.222.222 and when I hit Save or Apply I get this response:

    Command:
    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='transmission' --net='br0' --ip='10.0.0.32' -e TZ="Europe/Budapest" -e HOST_OS="Unraid" -e 'TCP_PORT_9091'='9091' -e 'TCP_PORT_51413'='51413' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/disks/downloads/':'/downloads':'rw,slave' -v '/mnt/disks/downloads/transmission-watch/':'/watch':'rw,slave' -v '/mnt/cache/docker/appdata/Transmission/':'/config':'rw' --dns=208.67.222.222 'linuxserver/transmission'
    The command finished successfully!

    But when I check the container config via the WebUI, the Extra Parameters field is empty.

    I have this problem with the Transmission container only.
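    As a sanity check (a sketch), whether the DNS option actually made it into the running container can be verified with:

    docker inspect --format '{{ .HostConfig.Dns }}' transmission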

     

    unRAID 6.6.5

    Transmission docker container: latest (updated minutes ago)

  13. 6 minutes ago, bobbintb said:

     

    Ombi has been completely rewritten with v3 and a new setup is needed. Not sure how you missed that one. We've been talking about it for the last few pages.

    Well... the most obvious things are the easiest to miss for me... :D

    Thanks for the info.

  14. Hi. I have a problem with Ombi. After the last update of the Ombi Docker container on March 26th, it is asking me for the initial Ombi setup. I had Ombi configured and working for a couple of months and everything was fine. After the last update it seems that Ombi has reset itself to factory defaults, with all configuration and data gone...

    I am using the Backup/Restore Appdata and Auto Update Applications plugins for weekly backups and updates of appdata and all Docker containers. I have tried to restore the Ombi configuration (copying the ombi folder from the backup .tar.gz file to appdata, with the right permissions on folders and files) from several backups, but it is not working. Ombi keeps asking me for the initial setup.
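    To give a concrete picture, the restore attempt looked roughly like this (a sketch, not the exact commands; the backup file name is a placeholder, it assumes the ombi folder sits at the top level of the archive, and nobody:users matches the PUID 99 / PGID 100 the containers run with):

    cd /mnt/cache/docker/appdata
    tar -xzf /path/to/backup.tar.gz ombi
    chown -R nobody:users ombi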

     

    Any idea why the restore is not working? Where does Ombi save its data (Ombi.db and/or Ombi.sqlite?) and config?

  15. 3 minutes ago, Squid said:

    This. The custom stop script is executed prior to stopping the containers. Similarly, the custom start script is executed after restarting the containers.

    I was missing the information regarding stopping/starting the containers. Thanks.

     

    I know this is off topic, but I am confident you can help me :)

    I made a script that creates a tar.gz file. When I look at the created tar.gz file via the terminal I see the "normal" file name, as configured in the script: "tree_2017-12-30_22:53:19.tar.gz", but when I look at it via a file manager (Dolphin, Krusader, Nautilus...) I see a file named "TS76AZ~8.GZ".

     

    The part of the script that creates the tar.gz file:

    tar -czvf $DESTINATION/tree"$(date "+_%Y-%m-%d_%H:%M:%S").tar.gz" tree

    I can access and extract the tar.gz via the terminal with no problem, but via a file manager it is not working.

     

    Any advice on what I am doing wrong?
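    One thing I plan to try (just a guess on my part: colons are not valid in file names for Windows/SMB clients, which would explain the mangled 8.3-style name the file managers show) is using dashes instead of colons in the timestamp:

    tar -czvf "$DESTINATION/tree$(date "+_%Y-%m-%d_%H-%M-%S").tar.gz" tree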

  16. Are options "Path To Custom Stop Script" and "Path To Custom Start Script" mixed up or am I understanding them wrong?

    Under "Path To Custom Stop Script" I linked script that I want to execute after Backup/Restore has run and under "Path To Custom Start Script" I linked script that I want to execute before Backup/Restore will run. When I switch them all scripts/actions are performed as expected.

  17. I suspect the EmbyServer Docker. Since my media shares are spread across all 3 data disks, it probably does a media scan/update when it is started again and spins up all data disks.

     

    Request: could you add an option to choose the number of backups that are kept in the backup location before they are deleted? The option "Delete backups if they are this many days old" is a little "dangerous", because it can delete all backups if the server has been off for a long time. Not long ago my server was off for almost 7 months; luckily I remembered to turn off the option for deleting old backups, or all of them would have been deleted.
