norbertt's Achievements


  1. Does anyone use the Home Assistant docker from the linuxserver repo? Is it good?
  2. I have WireGuard, and if I am not at home the WireGuard app starts up automatically via Tasker.
  3. How should I manage that from Unraid? Thank you
  4. Hello all, is there a way to use Bitwarden locally only? I don't want to use it with a reverse proxy; I have WireGuard and that works for me. At the moment I can use Bitwarden from the Android app, but I cannot log in with Chrome because of the HTTPS requirement. Any help or tips? Thank you
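For context on the HTTPS problem above: browsers refuse to unlock a Bitwarden vault over plain HTTP because the Web Crypto API requires a secure context. One local-only workaround is a self-signed certificate served to clients over the WireGuard tunnel. A minimal sketch (the file names and the tunnel address 10.6.0.1 are assumptions, not from the post):

```shell
# Hypothetical sketch: self-signed cert for LAN/WireGuard-only HTTPS.
# 10.6.0.1 stands in for the server's tunnel address.
openssl req -x509 -nodes -newkey rsa:4096 \
  -keyout bitwarden.key -out bitwarden.crt \
  -days 3650 -subj "/CN=10.6.0.1"
# Inspect what was generated:
openssl x509 -in bitwarden.crt -noout -subject
```

The cert and key can then be handed to the container and the one-time Chrome warning accepted; whether your particular Bitwarden image reads them from environment variables or a mounted volume depends on the image, so check its documentation.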
  5. Hi, is there a way to set up a network speed limit for dockers? I have a Usenet and a torrent docker, and I would like to give them, for example, 100 kB/s of download speed in total.
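On the bandwidth question above: Docker itself has no per-container bandwidth flag, so the usual options are the in-app limits in the torrent/Usenet clients themselves, or a traffic-control rule on the Docker bridge. A hedged sketch using tc's token-bucket filter (the interface name and rate are assumptions; the command is only printed here so it can be reviewed before running it as root):

```shell
# Hypothetical sketch: cap traffic on the default Docker bridge with tc.
# Note this shapes the whole bridge, not a single container.
IFACE="docker0"
RATE="800kbit"   # roughly 100 kB/s
CMD="tc qdisc replace dev $IFACE root tbf rate $RATE burst 32kbit latency 400ms"
echo "$CMD"      # review, then run as root to apply
```

For per-container limits, the built-in speed settings in Deluge and NZBGet are simpler and survive container restarts.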
  6. Hi guys, I have a few questions, not Docker-specific, but maybe someone can help me. For zmNinja notifications, do I really need to set up a port forward? What if I want to use it over VPN? I installed Hook, YOLOv4, YOLOv4-tiny, and the face model, but I don't see any settings in the web UI. Where and how can I set up the camera to trigger recording on a 4K CCTV? Thanks
  7. Thank you. What about motion detection? So the 10600K would be good for me, if Unraid supports this CPU?
  8. I had the same issue. Please delete the deemix docker and its config too.
  9. 🤔 Did I ask the wrong question, or is this CPU just too new?
  10. Hi guys, I am thinking of upgrading my Unraid server. Current config: ASRock H270M-ITX/ac, Intel® Core™ i5-7500T CPU @ 2.70GHz. Current usage: Plex (local use), Sonarr, Radarr, Deluge and NZB, hassio, and some other dockers. I will need a Windows 10 VM for Blue Iris with 8 IP cameras for motion detection and AI (face recognition, object recognition); I will add a used GPU for extra power. Would the Intel 10600(K) be a good choice? Is this CPU enough for my usage? Can it handle more? Thank you
  11. @SpaceInvaderOne, I think this docker is a bit out of date. Would you mind checking, please? It's really good software and a handy Unraid temperature tool.
  12. Do you know a better solution for an unmount? Sometimes I need to shut down the array; for example, this weekend I will upgrade my hardware setup. So sometimes it would be nice to have a script for a clean unmount.
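One common answer to the clean-unmount question above is a small user script that first stops the containers holding the mount open and then lazily unmounts it. A sketch under assumed names from this thread (the commands are only printed, not executed, so it can be reviewed before use):

```shell
# Hypothetical clean-unmount sketch for an Unraid user script.
# Container list and mount path are placeholders taken from this thread.
CONTAINERS="nzbget plex sonarr radarr deluge"
MOUNT="/mnt/user/mount_mergerfs/gdrive_media_vfs"
for c in $CONTAINERS; do
    echo docker stop "$c"        # drop 'echo' to actually stop them
done
echo fusermount -uz "$MOUNT"     # lazy-unmount the FUSE mount last
```

Stopping the containers first matters: `fusermount -u` fails with "device is busy" while any process still has files open inside the mount, and the `-z` (lazy) flag only hides the mount point rather than closing those handles.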
  13. So do I need a MariaDB docker to use the Shinobi Pro docker? I believe it's included in the Shinobi Pro docker.
  14. @DZMM, sorry to bother you, but I am stuck and it's getting painful. I need your help :) Here is my config:

      [gdrive]
      type = drive
      client_id = -...
      client_secret = -
      scope = drive
      token =
      team_drive =

      [gdrive_media_vfs]
      type = crypt
      remote = gdrive:crypt
      filename_encryption = standard
      directory_name_encryption = true
      password = --
      password2 =

      Here is my mount script:

      RcloneRemoteName="gdrive_media_vfs"
      RcloneMountShare="/mnt/user/mount_rclone"
      MergerfsMountShare="/mnt/user/mount_mergerfs"
      DockerStart="nzbget plex sonarr radarr deluge"
      LocalFilesShare="ignore"
      MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount
      # Add extra paths to mergerfs mount in addition to LocalFilesShare
      LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder. Enter 'ignore' to disable
      LocalFilesShare3="ignore"
      LocalFilesShare4="ignore"
      # Add extra commands or filters
      Command1="--rc"
      Command2=""
      Command3=""
      Command4=""
      Command5=""
      Command6=""
      Command7=""
      Command8=""
      CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
      RCloneMountIP="" # My unraid IP is so I create another similar IP address
      NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
      VirtualIPNumber="2" # creates eth0:x e.g. eth0:1. I create a unique virtual IP address for each mount & upload so I can monitor and traffic-shape each of them

      Here is my upload script:

      # REQUIRED SETTINGS
      RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
      RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'.
      RcloneUploadRemoteName="gdrive_media_vfs" # If you have a second remote created for uploads put it here. Otherwise use the same remote as RcloneRemoteName.
      LocalFilesShare="ignore" # location of the local files you want rclone to use, without trailing slash
      RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash e.g. /mnt/user/mount_rclone
      MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
      ModSort="ascending" # "ascending" oldest files first, "descending" newest files first
      # Note: Again - remember to NOT use ':' in your remote name above
      # Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited. The script uses --drive-stop-on-upload-limit, which stops the script if the 750GB/day limit is reached, so you no longer have to slowly 'trickle' your files all day if you don't want to, e.g. you could just do an unlimited job overnight.
      BWLimit1Time="01:00"
      BWLimit1="off"
      BWLimit2Time="08:00"
      BWLimit2="off"
      BWLimit3Time="16:00"
      BWLimit3="off"
      # OPTIONAL SETTINGS
      # Add name to upload job
      JobName="_daily_upload" # Adds custom string to end of checker file. Useful if you're running multiple jobs against the same remote.
      # Add extra commands or filters
      Command1="--exclude downloads/**"
      Command2=""
      Command3=""
      Command4=""
      Command5=""
      Command6=""
      Command7=""
      Command8=""
      # Bind the mount to an IP address
      CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
      RCloneMountIP="" # Choose IP to bind upload to.
      NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
      VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.
      # Use Service Accounts. Instructions:
      UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
      ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Accounts' .json files.
      ServiceAccountFile="sa_gdrive_upload" # Enter the characters before the counter in your json files, e.g. for sa_gdrive_upload1.json --> sa_gdrive_upload100.json, enter "sa_gdrive_upload".
      CountServiceAccounts="15" # Integer number of service accounts to use.
      # Is this a backup job
      BackupJob="N" # Y/N. Syncs or copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
      BackupRemoteLocation="backup" # choose location on mount for deleted sync files
      BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
      BackupRetention="90d" # How long to keep deleted sync files, suffix ms|s|m|h|d|w|M|y

      I am not sure, so I ask: do I need the upload script if I want to use MergerfsMountShare="/mnt/user/mount_mergerfs" for anything? Anyhow, I mapped my dockers to /user -> /mnt/user. I have a problem with downloads. NZBGet's main folder is /user/mount_mergerfs/gdrive_media_vfs/downloads. If I map the app to this folder, the whole process is very slow and I receive this error all the time:

      Could not create file /user/mount_mergerfs/gdrive_media_vfs/downloads/intermediate/The.Christmas.Bunny.2010.1080p.AMZN.WEBRip.AAC2.0.x264-FGT.#1/61.out.tmp: (null)

      I receive a similar permission problem with Deluge. What am I missing? I have read the GitHub page looking for a clean guide, and I think my setup is correct. Thank you
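On the slow downloads and "(null)" errors in the post above: writing NZBGet's intermediate files straight into the mergerfs/cloud path forces every temporary file through the rclone VFS layer. A frequently suggested layout, sketched here with placeholder paths (this is not DZMM's official answer), keeps the working directories on local disk and exposes the merged path separately; the docker command is only echoed for review:

```shell
# Hypothetical sketch: map NZBGet's working dirs to local disk and the
# mergerfs mount to a separate path. All names and paths are placeholders.
LOCAL="/mnt/user/downloads"
MERGED="/mnt/user/mount_mergerfs/gdrive_media_vfs"
CMD="docker run -d --name nzbget -v $LOCAL:/downloads -v $MERGED:/cloud linuxserver/nzbget"
echo "$CMD"   # review, then recreate the container with these mappings
```

With a layout like this, completed files are moved into the merged path once (a single sequential write rclone handles well), while unpack/repair churn stays on fast local storage.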