Leaderboard

Popular Content

Showing content with the highest reputation on 06/22/22 in all areas

  1. I had been experiencing permission issues since upgrading to 6.10 as well, and I think I finally fixed all of them.

RCLONE PERMISSION ISSUES:
Fix 1: Prior to mounting the rclone folder using User Scripts, run 'Docker Safe New Permissions' from Settings for all your folders, then mount the rclone folders using the script. I no longer recommend the information below; Docker Safe New Permissions should resolve most issues.
Fix 2: If that doesn't fix your issues, add the bolded options to the "create rclone mount" section of the mount script, or add them to the extra parameters section. This mounts the rclone folders as user root with a umask of 000. Alternatively, you could mount them as nobody:users with uid 99 and gid 100 (see the sketch below).

DOCKER CONTAINER PERMISSION ISSUES FIX (SONARR/RADARR/PLEX)
Fix 1: Change PUID and PGID to root (0:0) and add an environment variable setting UMASK to 000 (nuclear strike option).
Fix 2: Keep PUID and PGID at 99:100 (nobody:users) and, using the User Scripts plugin, update the permissions of each container's data with the following script. Change the /mnt/ path to reflect your Docker path setup and rerun it for each container's path:

#!/bin/bash
for dir in "/mnt/cache/appdata/Sonarr/"
do
  echo "$dir"
  chmod -R ug+rw,ug+X,o-rwx "$dir"
  chown -R nobody:users "$dir"
done

IMPORTANT PLEX UPDATE: After running Docker Safe New Permissions, if you experience EAC3 or audio transcoder errors where the video never starts to play, it is because your Codecs folder and/or your mapped /transcode path does not have the correct permissions. To fix this, stop your Plex container, navigate to your Plex appdata folder and delete the Codecs folder. Then navigate to your mapped /transcode folder, if you are using one, and delete that as well. Restart your Plex container and Plex will redownload the codecs and recreate the mapped transcode folder with the correct permissions.
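The bolded mount options were lost from this post, so purely as a rough sketch of the idea (the remote name "gdrive" and the mount point path are placeholders, not from the original), a mount as nobody:users with uid 99, gid 100, and umask 000 might look something like:

rclone mount gdrive: /mnt/user/mount_rclone/gdrive \
  --allow-other \
  --uid 99 --gid 100 --umask 000 \
  --daemon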
    3 points
  2. I have cloned and updated the linuxserver.io beta ownCloud docker. I have been using it for over a year and it works very well. I know that NextCloud is the in thing, but I am committed to ownCloud in my business and have over 200GB of media, calendars, contacts, and tasks on ownCloud. I do not want to invest the time and take the risk of trying to move to NextCloud.

The thing I really like about this docker is that MariaDB is installed inside it, so a separate docker for the database is not required. I also found other implementations of ownCloud dockers lacking when it comes to updating ownCloud. ownCloud cannot be downgraded, so one has to be careful to always go forward, never backwards. If a person manually updated ownCloud and then had to reinstall with the original docker, the manual upgrade got written over in other implementations. This docker prevents that situation because the ownCloud version is persistent in the appdata/owncloud folder.

Anyway, the docker is available to install from CA. If you have the linuxserver.io docker installed, this is a drop-in replacement. Be sure to back up your data before installing this docker. To replace the linuxserver.io ownCloud docker, remove that docker and then install this new one from CA. Be sure to reapply any custom settings you made to the original docker template.

Installing ownCloud from scratch will install the latest version (currently 10.5.0, called ownCloud X). If you've already installed the docker, your current ownCloud version will not be changed. To install the docker from scratch:
- Install the docker and then go to the WebUI.
- Enter an administration user and password.
- Change the data folder to /data.
- Because the database is built into the container, the database host is localhost. The database user and the database itself are both 'owncloud'. If you do not change the default DB_PASS variable, the default database password is 'owncloud'.
- Once in the ownCloud WebUI, go to 'Settings->General' and select the 'Cron' method for the cron.php task. A cron to perform this is built into the docker.
- If you use your own certificate keys, name them cert.key and cert.crt and place them in the config/keys folder.

ownCloud can be updated from the WebUI, but that requires a certificate that is not self-signed and some other requirements that will be difficult for a self-hosted server. I will post some manual update instructions so ownCloud can be updated and remain persistent. I am working on updates that can be done by updating the docker, but I have to put some time into how to do that without breaking things.

I recommend you install some security apps for better security:
- OAuth2 - This app is for remote access to the ownCloud server and uses tokens rather than passwords to log into the server. Passwords are not stored locally by any clients or third-party apps.
- Brute-Force Protection - Offers brute force login protection.
- Password Policy - Allows you to set password complexity rules.
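For readers setting this up outside the Unraid template, a rough command-line equivalent of the settings above might look something like the following. The image name, port mapping, and host paths here are illustrative placeholders; only the DB_PASS variable and the /data and /config container paths come from the post:

docker run -d --name=owncloud \
  -p 443:443 \
  -e DB_PASS='choose-a-strong-password' \
  -v /mnt/user/appdata/owncloud:/config \
  -v /mnt/user/owncloud:/data \
  your-owncloud-image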
    1 point
  3. Repo: https://github.com/josecoelho/unraid-templates/tree/main/cups
I based my docker build on this comment from the previous thread. Feedback is more than welcome; this is my first template for Unraid.
    1 point
  4. There's another thread about part of this in General Support...
- It would be great if we could freely arrange the modules on the Dashboard, specifically moving things (like the network graph or RAM usage) horizontally between columns.
- Also, having the layout stay where it's put, rather than reverting randomly when the page is refreshed, would be fantastic.
- Perhaps a [Save Layout] / [Restore Layout] option could be integrated?
    1 point
  5. Any chance this feature could be added in the future, especially now that 802.11ax is reaching near-gigabit transfer speeds?
    1 point
  6. Hello everyone! For all our loyal users: get 30% off Unraid Pro upgrades until the end of July! ⤵️
    1 point
  7. Hi French Unraid community, get 30% off the upgrade to Unraid Pro until July 31! Promotion details ⤵️
    1 point
  8. Tbh I can't remember anymore... Maybe have a look in here; according to my post, the solution should be there. Maybe my config helps too. I have no idea if it's "technically" correct, but it's working.

Docker Syncthing config:
- Network Type: Bridge
- Console shell command: shell
- Privileged: off
- Host Path 2: /mnt/user/AndroidBackup --> AndroidBackup is my created share.
- AppData Config Path: /mnt/user/appdata/syncthing

Syncthing WebUI:
- Folder path: /sync/FullBackup --> "FullBackup" is the folder inside your created share where the actual files are stored.

No real solution to your question, but I hope it helps anyway.
    1 point
  9. Hope is all I've got. I'll try not to respond too quickly ;0
    1 point
  10. Also, people here are helpful, so if you take it slow and let people ask the questions they need without rushing, then if the data is available you will probably get it back. It sounds super scary, but staying calm gives you the best chance of success, and it sounds like there are certainly some things to try before giving up hope.
    1 point
  11. Yeah.... tell that to the other half, who's missing all of her oh-so-important pictures of the sky, and shoes, and whatnot.
    1 point
  12. Alright, I'll give that a shot with the 2 failed drives I have lying about. They shouldn't have had anything happen to them other than what I described above. Maybe I'll get lucky.
    1 point
  13. Better to add a new pool, named as you wish, and assign the 1TB drive to it, instead of adding it to the existing pool named cache. Then you will be set up similarly to what I have, and you can decide which pool to use for each share.
    1 point
  14. Yes, why not? But I don't understand why you would add --collector.systemd, since Unraid doesn't use systemd.
    1 point
  15. Let the clear finish, then rebuild disk1.
    1 point
  16. Hello everyone, I've done the upgrade and set up Unraid fresh. SMB access now works without any delay. Problem solved.
    1 point
  17. Scratch that. Uninstalling completely, rebooting, and then reinstalling completely solved it.
    1 point
  18. Fixed in the next release. If Unraid is set to spin down on a timer, that will apply to UD devices as well. You cannot set spin-down per UD device.
    1 point
  19. If you need more help with this, please use the existing plugin support thread:
    1 point
  20. Unassigned Devices shares never show up under User Share settings. You might want to check the SMB Security setting under Settings->Unassigned Devices; a recent change means they may now default to not being visible on the network.
    1 point
  21. It's written right there behind the exclamation mark: Docker is started after the array has been started. The question I'm asking myself now is, why isn't your array started?
    1 point
  22. Thanks for replying so quickly ^^. I'm starting a syslog server to collect the logs at the next freeze. I'm also trying a BIOS update (it has never been updated, as far as I remember), and the errors you found have always been there... More at the next freeze.

edit: the BIOS update resolved this error:
Jun 22 00:14:45 Navet kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
Jun 22 00:14:45 Navet kernel: ACPI Error: Aborting method \_SB.PR01._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
We'll see if that fixes the problem.

update: I just saw an error come through (thanks to the syslog server with email alerts) which, according to other people who have had the same error, is due to `Docker custom network type` in the Docker settings.
    1 point
  23. Look at my name. I have an offline backup, so my main production server is my test server as well.
    1 point
  24. I didn't mean a PSU limit, but you could have a cable issue.
    1 point
  25. When it starts, VLC scans all the files in the folder to see whether any subtitle files belonging to that movie are hiding there. To do this, the whole directory is scanned, and in addition every file with the extension .SRT or .TXT is opened and at least its beginning is read and analyzed (it could be a subtitle, after all). With 1000 files in the directory, 20-30 seconds can still be considered quick. Of course it is much faster if the movies are kept separately in their own subfolders. Then each folder contains at most a few files, and you don't notice the delay at all.
    1 point
  26. I once tested all the variants without enabling that setting: https://forums.unraid.net/topic/99393-häufig-gestellte-fragen/#comment-1027380
    1 point
  27. That says it has four 12V rails. The power is split between them, and the disks only get one of the four.
    1 point
  28. D'oh. Noobie mistake - my share had export set to 'no' under SMB security settings. Changing it to 'yes' solved my problem.
    1 point
  29. Hi @dcb, let's make the comparison:

DS-920+ with 8GB RAM (no less if you want a VM) costs you at least €650
+ works out of the box
+ you already know your way around it

Your own build, roughly €450
+ the i3 has noticeably more power
+ upgrade and conversion options
+ you can learn something new
+ Unraid!!
+ great community 😉

If you have an "old" PC, a USB stick, and at least one HDD lying around, just use the trial license. That's what convinced me back then. I started with a QNAP and ended up at Unraid via ESXi and Proxmox.
    1 point
  30. Could just be causing a voltage drop on that cable.
    1 point
  31. This dashboard really is the Ultimate! Thank you for putting all the time into it over the years. As I was fiddling around with it to fit my system, I found a rather minor bug (spelling error, actually). And I couldn't find where to 'officially' report a bug: In the Disk Overview section, many of the column headers read "File Sytem" instead of "File System" (missing the second 's'). I think you could do an easy find/replace in the json. Don't forget to also check the Overrides that link to it. Minor, I know... but something.
    1 point
  32. Epic milestone! Happy Father’s Day
    1 point
  33. https://wiki.unraid.net/Manual/Shares#Default_Shares
    1 point
  34. https://wiki.unraid.net/Manual/Shares#Use_Cache_.28and_Mover_Behavior_with_User_Shares.29
    1 point
  35. Each user share has settings that control whether, how, and which pool it uses. Here are mine:
    1 point
  36. Pools are storage outside the parity array but still part of user shares and managed by Unraid, unlike Unassigned Devices. Cache was the original "pool"; now you can have multiple pools. https://wiki.unraid.net/Manual/Storage_Management#Pool_.28cache.29_Operations

appdata is the default share for docker applications to keep their working data on, such as any configuration the application has or a database it maintains. The data the application processes would typically be on other shares.

domains is the default share for VM vdisks.

system is the default share for docker.img (the docker application executables) and libvirt.img (the VM configurations).
    1 point
  37. Just wanted to point out a newly added Time Machine docker, based on a project whose goal is to provide stable Time Machine backups in a constantly changing environment (I'm paraphrasing). I hesitate to recommend it since I only started using it about an hour ago, but it was fairly easy to get installed and working. You might want to check it out.
    1 point
  38. Just came here to report that dead corntab link. Seems I'm not the first one... May I suggest using a more generic link, like Wikipedia, instead of a private one? Those will hopefully stay valid longer.
    1 point
  39. A @SpaceInvaderOne video on this would be awesome
    1 point
  40. On 6.9.2, I'm experiencing the same issue: I need to "force update" the container for the changes to take effect. It is weird, though, that this does not seem to apply to all containers; about two weeks ago I successfully changed the icon of my first and only Docker Hub container. And yes, I tried a different browser, incognito mode, etc., so it's not a browser caching issue. Thanks for all the good work so far, I really like Unraid!
    1 point
  41. I was having this problem earlier; it turns out there was a leading space at the start of the token from when I copied it from DuckDNS. When I removed the leading space, everything worked.
    1 point
  42. That's not really an alternative. It sounds like he misinterpreted the question and responded about WiFi. Tethering might not be supported, but it has nothing to do with ethernet-to-wireless adapters. "Can I use tethering?" the OP asks; "wireless isn't supported" is the response. Tethering isn't wireless.
    1 point
  43. How can I monitor a btrfs or zfs pool for errors?

As some may have noticed, the GUI errors column for the cache pool is just for show, at least for now; the error counter remains at zero even when there are errors. I've already asked and hope LT will use the info from btrfs dev stats / zpool status in the near future, but for now, anyone using a btrfs or zfs cache or unassigned redundant pool should regularly monitor it for errors. It's fairly common for a device to drop offline, usually from a cable/connection issue; since there's redundancy the user keeps working without noticing, and when the device comes back online on the next reboot it will be out of sync. For btrfs a scrub can usually fix it (though note that any NOCOW shares can't be checked or fixed, and worse than that, if you bring an out-of-sync device back online it can easily corrupt the data on the remaining good devices, since btrfs can read from the out-of-sync device without knowing it contains out-of-sync/invalid data), but it's good for the user to know there's a problem as soon as possible so it can be corrected. For zfs the missing device will automatically be resynced when it's back online.

BTRFS

Any btrfs device or pool can be checked for read/write errors with the btrfs dev stats command, e.g.:

btrfs dev stats /mnt/cache

It will output something like this:

[/dev/sdd1].write_io_errs 0
[/dev/sdd1].read_io_errs 0
[/dev/sdd1].flush_io_errs 0
[/dev/sdd1].corruption_errs 0
[/dev/sdd1].generation_errs 0
[/dev/sde1].write_io_errs 0
[/dev/sde1].read_io_errs 0
[/dev/sde1].flush_io_errs 0
[/dev/sde1].corruption_errs 0
[/dev/sde1].generation_errs 0

All values should always be zero, and to avoid surprises they can be monitored with a script using Squid's great User Scripts plugin. Just create a script with the contents below, adjust the path and pool name as needed, and I recommend scheduling it to run hourly. If there are any errors you'll get a system notification on the GUI and/or push/email if so configured.

#!/bin/bash
if mountpoint -q /mnt/cache; then
  btrfs dev stats -c /mnt/cache
  if [[ $? -ne 0 ]]; then
    /usr/local/emhttp/webGui/scripts/notify -i warning -s "ERRORS on cache pool"
  fi
fi

If you get notified, you can then check with the dev stats command which device is having issues and take the appropriate steps to fix them. Most times when there are read/write errors, especially with SSDs, it's a cable issue, so start by replacing the cables. Then, since the stats are for the lifetime of the filesystem, i.e., they don't reset with a reboot, force a reset of the stats with:

btrfs dev stats -z /mnt/cache

Finally run a scrub, make sure there are no uncorrectable errors, and keep working normally; if there are any more issues you'll get a new notification.

P.S. You can also monitor a single btrfs device or a non-redundant pool, but for those any dropped device is usually quickly apparent.

ZFS

For zfs, click on the pool and scroll down to the "Scrub Status" section. All values should always be zero, and to avoid surprises they can be monitored with a script using Squid's great User Scripts plugin. @Renegade605 created a nice script for that; I recommend scheduling it to run hourly. If there are any errors you'll get a system notification on the GUI and/or push/email if so configured.

If you get notified, you can then check in the GUI which device is having issues and take the appropriate steps to fix them. Most times when there are read/write errors, especially with SSDs, it's a cable issue, so start by replacing the cables. zfs stats clear after an array start/stop or reboot, but if that option is available you can also clear them using the GUI by clicking on "ZPOOL CLEAR" below the pool stats. Then run a scrub, make sure there are no more errors, and keep working normally; if there are any more issues you'll get a new notification.

P.S. You can also monitor a single zfs device or a non-redundant pool, but for those any dropped device is usually quickly apparent.

Thanks to @golli53 for a script improvement so errors are not reported if the pool is not mounted.
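This is not the script mentioned above, but as a rough illustration of the same idea for zfs, an hourly User Scripts check could look something like the following sketch (the pool name "cache" is a placeholder; adjust it to your pool, and note the "is healthy" string match relies on the usual zpool status -x output):

#!/bin/bash
# Notify if the zfs pool exists but does not report as healthy
if zpool list cache &>/dev/null; then
  if ! zpool status -x cache | grep -q "is healthy"; then
    /usr/local/emhttp/webGui/scripts/notify -i warning -s "ERRORS on zfs cache pool"
  fi
fi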
    1 point
  44. Try this. From a Windows command prompt (run as administrator):

mklink /d "c:\WhateverFolderYouWantItCalled" "\\unRaidServer\unRaidShareName"

Your share will wind up being mounted within that folder on an existing Windows drive. Should be close enough to what you need.
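Not from the original post, but as a usage note: if you later want to undo this, removing just the directory symlink (not the share contents) can be done with:

rmdir "c:\WhateverFolderYouWantItCalled"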
    1 point