Leaderboard

Popular Content

Showing content with the highest reputation on 01/12/21 in all areas

  1. I talked to @StevenD and dropped back to UnRAID version 6.9.0-beta35 like he's running, and this issue is resolved.
    3 points
  2. Thanks so much for making this. I finally got it all set up and working the way I want it to, and figured out how to compensate for the non-server hardware without the API. A couple of issues I ran into along the way: using telegraf:latest, there were no instructions on adding lm_sensors, and the apk add command doesn't work for it, but running apt-get -y install lm-sensors did. Also, after setting up Varken and the Plex monitoring, I noticed that two of the panel types are not included with Grafana by default, so they showed an error: the pie chart panel and the worldmap panel plugin. They were easy to install from the Grafana console by typing grafana-cli plugins install grafana-piechart-panel and grafana-cli plugins install grafana-worldmap-panel. Once I did that they worked just fine.
    2 points
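The fixes described in the post above, condensed into commands. This is a sketch: "telegraf" and "grafana" are assumed container names, and each command is printed rather than executed (drop the leading 'echo' to apply it for real).

```shell
# Assumed container names: telegraf, grafana -- substitute your own.
# Printed as a dry run; remove the leading 'echo' on each line to execute.
echo docker exec telegraf apt-get -y install lm-sensors
echo docker exec grafana grafana-cli plugins install grafana-piechart-panel
echo docker exec grafana grafana-cli plugins install grafana-worldmap-panel
```

Note that changes made inside a container this way are lost when the image is updated, so they need to be reapplied after each container update.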
  3. @bellyup, please read the red text in the first post.
    2 points
  4. For everyone who needs to mount an APFS volume: open a terminal or SSH into the server and work through these steps:
mkdir -p /tmp/apfs && cd /tmp/apfs
wget https://github.com/ich777/apfs-fuse/releases/download/v20200708/fuse3-3.10.1-x86_64-1.txz
wget https://github.com/ich777/apfs-fuse/releases/download/v20200708/apfsfuse-v20200708-x86_64-1.txz
installpkg fuse3-3.10.1-x86_64-1.txz
installpkg apfsfuse-v20200708-x86_64-1.txz
rm -rf /tmp/apfs
Now all dependencies are available. The following steps show an example with a qcow2 image, which requires an additional step:
qemu-nbd --connect=/dev/nbd0 /path/to/your/image.qcow2
You can now mount your APFS volume with:
losetup -r /dev/loop8 /dev/nbd0p2
mkdir /mnt/disks/apfsvolume
apfs-fuse /dev/loop8 /mnt/disks/apfsvolume
Thanks to @ich777 for the files!
    1 point
  5. Hi All, I'm having some issues recently with my cache drives and BTRFS errors. I think (again, not 100% certain) it's caused by a Windows update (20H2) I was running on my VM. I restarted the VM... which is already a challenge as I have an RX 5700 card. Then at some point into installing the updates, my dockers start failing and everything crashes! These are the errors I see in my log - and they fill it up!:
Jan 9 13:54:49 Unraid kernel: BTRFS warning (device nvme0n1p1): i/o error at logical 630640209920 on dev /dev/nvme1n1p1, physical 154951610368, root 5, inode 497938, offset 79220768768, length 4096, links 1 (path: domains/Windows 10/vdisk1.img)
Jan 9 13:54:52 Unraid kernel: BTRFS warning (device nvme0n1p1): lost page write due to IO error on /dev/nvme1n1p1 (-5)
Jan 9 13:54:52 Unraid kernel: BTRFS error (device nvme0n1p1): error writing primary super block to device 2
Jan 9 13:54:52 Unraid kernel: BTRFS warning (device nvme0n1p1): lost page write due to IO error on /dev/nvme1n1p1 (-5)
Jan 9 13:54:52 Unraid kernel: BTRFS warning (device nvme0n1p1): lost page write due to IO error on /dev/nvme1n1p1 (-5)
Jan 9 13:54:52 Unraid kernel: BTRFS warning (device nvme0n1p1): lost page write due to IO error on /dev/nvme1n1p1 (-5)
Jan 9 13:54:52 Unraid kernel: BTRFS error (device nvme0n1p1): error writing primary super block to device 2
Jan 9 13:54:54 Unraid kernel: btrfs_dev_stat_print_on_error: 8898923 callbacks suppressed
Jan 9 13:54:54 Unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme1n1p1 errs: wr 185867, rd 46640931, flush 6441, corrupt 1321, gen 0
Jan 9 13:54:54 Unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme1n1p1 errs: wr 185867, rd 46640932, flush 6441, corrupt 1321, gen 0
I have performed a scrub on the cache drives and all seems fine?! Anyone got any ideas? I'm not sure Windows updates should be able to cause this level of issues on what should be an image file. Full logs are attached as well.
Thanks, Steve unraid-diagnostics-20210109-1406.zip
    1 point
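Those "bdev ... errs:" lines pack five per-device counters (write, read, flush, corruption, and generation errors) into one line. Here is a quick way to pull them out of a saved syslog; the sample line is taken from the log above. On a live system, `btrfs dev stats /mnt/cache` (assuming Unraid's default cache mount point) reads the same counters directly.

```shell
# Sample syslog line from the post; on a live server you would grep the syslog.
line='Jan 9 13:54:54 Unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme1n1p1 errs: wr 185867, rd 46640932, flush 6441, corrupt 1321, gen 0'
# Strip everything before the counters:
echo "$line" | sed 's/.*errs: //'
# → wr 185867, rd 46640932, flush 6441, corrupt 1321, gen 0
```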
  6. Oh, silly me... I could have figured that out myself. Thank you.
    1 point
  7. You have to set IPv4 DNS server assignment to Static, and then you can enter an IP.
    1 point
  8. I got somewhere with this. I completely forgot I changed out the RAM on this system at about the same time as this started: heaps of memory errors. I should be keeping a log of hardware changes and dates. Worth noting if anybody else sees this: there seems to be a known bug in memtest 5.01 that causes a false positive lockup in some cases if you are testing with multithreading.
    1 point
  9. One week since I put in the new RAM and no issues since (no warnings, no errors, no docker crashes, etc...). You were definitely right.
    1 point
  10. I updated the instructions in multiple areas to include these new Grafana plugin steps. These are in the Varken install guides, but it is also nice to have in-line... Version 1.5 Release Notes: Post Number 1 Dependencies:
    1 point
  11. Search the CA App for Intel-GPU-TOP from me; it will install and activate the Quick Sync capabilities of your CPU, and no other steps are required. As a bonus you get the tool 'intel_gpu_top'. @b3rs3rk has a development version of his GPUStatistics plugin available so that you can see the current usage.
    1 point
  12. If you have problems I suggest you open a thread in the general support area of the board, giving details and including your diagnostics zip file. That will give people something to work with in order to help you. A general moan, like this, serves no useful purpose.
    1 point
  13. I am on beta 30 and the SMB performance is still abysmal for small files or just dealing with large amounts of files. Except for transferring large files, everything takes many times longer with unRAID than with Windows. Backups are a royal pain; it takes half a day to do what used to take an hour. I now go out of my way to not do anything over SMB if I can possibly help it. I log into the server and use Krusader on the server itself to mess with files anytime I possibly can.
    1 point
  14. Thanks, that's exactly what it was. I forgot I had removed a GPU to put this card in, and that GPU was being passed through to my VM. All set now!
    1 point
  15. Haven't tried this yet, but you'll most likely need to set up a second Varken datasource referencing Tautulli-2, and then you'll need to create a Varken datasource variable so it can be dynamic, then assign that variable to the datasource section of each Varken panel, and then finally set the variable using the dropdown menu at the top of the dash. This concept is exactly the same as the Telegraf and Hosts variables that are already set up in the UUD. Have a look at those variables as a reference under the dashboard settings page. You should be able to copy/reverse-engineer that methodology and apply it to this use case. Let us know if you get it working and share some screenshots of your datasources and variables once you do! I'll see about adding this functionality natively to the dashboard in a future update.
    1 point
  16. You are mixing up the rating of your RAM sticks and the supported speed of the CPU. 3600 MHz is over the maximum supported speed for any Ryzen CPU. It is 3200 at best, and even lower depending on the CPU generation and the number and type of RAM sticks. It is all detailed in the link provided by JorgeB.
    1 point
  17. I also had this issue and did not have much time to look into it. Just to clarify, the console is on the Docker page, not in Grafana, and you will have to restart the docker to see the new elements. Thanks @mattekure!
    1 point
  18. Can you attach a log or anything? It is very difficult to troubleshoot with nothing. Also please attach a screenshot of the template page and a screenshot of the error (a full-window screenshot would be best).
EDIT: Oh sorry, or do you want to connect with a VNC client?
EDIT2: If you want to connect over a VNC desktop client, do the following:
1. Go to the Docker page, click on the icon of the container and click on Edit
2. Click on 'Add another Path, Port, Variable, Label or Device'
3. Select 'Port'
4. Enter '5900' at Container Port
5. Enter your preferred port number at Host Port, for example '5910'
6. Click on 'Add'
7. Click on 'Apply'
Then you should be able to connect to the container over the port number that you've entered in step 5.
    1 point
  19. I hope you are joking... not to be polemical, but Unraid is not RAID, and it's one of the most user-friendly OSes for managing virtual machines I have tried: what are you comparing it to? Try to build and manage a QEMU virtual machine in any Linux distro of your choice, and I think you will change your opinion.
    1 point
  20. I've installed the CA Fix Common Problems tool, and I'm currently running the Docker Safe New Permissions process. So time will tell... EDIT: Running Docker Safe New Permissions solved my permissions problem. I now have read/write access in all share directories from Windows.
    1 point
  21. Changed the script to "hdparm -B 255 /dev/sdX" and it seems to work! Since running the script at boot this morning, the LCC numbers have stayed the same for all drives! Wish I had known about this from the start. 2020 has been a bad year for my drives as well haha. Thanks for the help @JorgeB
    1 point
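For anyone applying the same fix across several drives at boot, a loop-script sketch. The device list is a placeholder (match it to your own array drives), and each command is printed rather than executed.

```shell
# Placeholder device list -- substitute your actual drives.
# Remove the leading 'echo' to actually disable APM-driven head parking.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
  echo hdparm -B 255 "$dev"
done
```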
  22. Hi everyone. This thing called the UUD has blown up considerably since the version 1.5 release (and the above post), and by being featured by UNRAID themselves. I've had very little free time since, supporting the influx (no pun intended LOL) of new users, DMs, phone calls, virtual meetings, forum questions, and continued development on future versions. So for those that really need my personal help, I have set up something new for you with some perks to go along. Not sure if you guys are even interested in something like this, but my Wife is getting on my case about the time/value proposition of this project, so I am appeasing her with this ha ha. Otherwise, I'll have to back off and just stick to future version development and let the community take the reins on general support (which I would and do appreciate). So here goes... If you want to support this project with a monthly or discounted annual membership, I am offering the below perks, with possibly some more to come in the future (let me know if you guys have any ideas).
Ultimate UNRAID Dashboard Membership Perks:
· Access to Personal 1-on-1 Support at Developer's Discretion
· Access to Pre-Release Code and Development Versions Before Release!
· 1x Personalized or Custom Panel of Your Choosing!
· Your Name/User Handle Posted on the Official UUD Forum Topic Members Area!
As always, you can also support the continued development of the UUD with a one-time donation if a membership is not your thing. Here are some really cool stats for the UUD so far:
Topic Views: 42,831
Topic Replies: 660
Topic Pages: 27
UUD Downloads to Date: 1,183
You can find my donation page by clicking below or the above pics. Thanks guys!
    1 point
  23. Next thing I would try is to make sure you use the card BIOS as a file and specify it in the VM settings when you have only one graphics card in use.
    1 point
  24. Actually, I had been booting the BIOS in legacy mode this whole time, but this spurred me to try out UEFI boot; not much luck getting ANY graphics to boot though. There is a line in the attached dmesg output that kept repeating while attempting this. So it would appear vendor-reset is operating as expected; I attached some output for that in a file if you have any final thoughts there, otherwise I'll take further action to the vendor-reset GitHub. It looks like the kernel helper implementation works just fine, so I won't clutter this thread with any more on this tangent. Thanks all for the replies and suggestions (+++ to ich777 for this and a number of the other great dockers you have thrown to the community) powercolor_vega_56_vendor-reset_20210111.txt
    1 point
  25. If you boot your Unraid in UEFI mode, then try disabling it: go to your BIOS and select the non-UEFI USB stick to boot from. This fixed ALL VM-related issues I had. It is also suggested by Spaceinvaderone.
    1 point
  26. Virtualizing Unraid isn't supported
    1 point
  27. And now, I am not an Advanced User anymore. I think the status changes after a given number of posts and/or time of registration.
    1 point
  28. Ahhhhh! OK, so easy! I presume that everything else must also be set up in the background so that it will use the new variable. Perfect! Many thanks for your fast help! I will try this!
    1 point
  29. The latest image includes ccextractor v0.88. Let me know if something needs to be changed. jlesage's image includes a line in settings.conf that points to the binary: https://github.com/jlesage/docker-makemkv/blob/master/rootfs/defaults/settings.conf
app_ccextractor = "/usr/bin/ccextractor"
Please try without changing settings.conf first.
    1 point
  30. Btrfs is more susceptible to hardware issues, so having recurring issues with it (and now without it) could mean a hardware problem. Start by running memtest. If that doesn't find anything, another thing you can try is to boot the server in safe mode with all dockers/VMs disabled and let it run as a basic NAS for a few days. If it still crashes, it's likely a hardware problem; if it doesn't, start turning on the other services one by one.
    1 point
  31. You can try this. Another thing you can try is to boot the server in safe mode with all dockers/VMs disabled and let it run as a basic NAS for a few days. If it still crashes, it's likely a hardware problem; if it doesn't, start turning on the other services one by one.
    1 point
  32. v6.9.0-rc2. Configuration of how dates and times appear is spread over multiple Settings > User Preferences. Display Settings has one kind of date configuration, and then Notification Settings has another. And I don't even think that they're the same; Notifications had DD-MM-YYYY. Please consider putting all date/time options in the Settings > System Settings > Date and Time option, and collapse to a single standard. If I select YYYY-MM-DD, I wouldn't expect that I'd see that format in one place, but a different format in another. (Notification popup vs. a log message for example).
    1 point
  33. I wanted to point out some tips that I may have assumed you guys already know, but it definitely helps a lot when you are scaling the dashboard to your monitor. I use CHROME when displaying the UUD. Even at 4K resolution, I STILL HAVE TO set the zoom level on the BROWSER to 75% (Control + Dash/Equals Keys) to fit everything nicely. I also press F11 (in Windows) to make the webpage display as full screen (removing the Chrome GUI elements), then finally set the UUD to "KIOSK" mode when displaying it. There is a little monitor-looking button in the upper right of Grafana that cycles through 3 modes:
1. Dashboard with Variables Visible (Default)
2. Dashboard with Variables Hidden
3. Dashboard in Full Screen Mode with Variables Hidden (Hit Escape to Exit)
When I have my dash all set up perfectly, and when displaying it, I'll throw it into its own virtual desktop space (Windows 10), with a dedicated Chrome full-screen window/tab, with Grafana in Kiosk Full Screen mode. Then I use Control + Windows Key + Left/Right Arrows (in Windows 10) to navigate to that desktop tile when I want to monitor my UNRAID server. So it runs 24/7 in the background with its own virtual screen, and I can quickly switch between my Windows 10 virtual desktop spaces (just like OSX). Furthermore, in Windows 10, you can SNAP any program to a side (horizontally or vertically), or any of the 4 corners, by clicking in the app you want to snap and then pressing the Windows Key + any arrow key. You can also use your mouse and drag a window to a corner to snap it. I usually snap the UNRAID GUI on the left and the UUD on the right so I can see both at once. Or the same thing with file transfers if I want to see how my disks/bandwidth are being impacted, etc... Just thought I would share some usability tips if you are struggling with fitting everything on a single screen and/or best practices.
The pro tip is zooming the browser OUT so that it scales everything all at once and uniformly, rather than you having to touch EVERY SINGLE panel manually. Hope this helps!
    1 point
  34. Sure! But I advise you to be really careful messing around with these config files; Nextcloud seems to be really capricious. BIG warning there once again: back up the data before any modification, and before making the change, be sure to replicate, from within the container, the content of /data-ro/nextcloud/data to /data/nextcloud/data, including hidden files. Otherwise, you'll just lose access to the data, or Nextcloud won't be able to access it (because it'll be missing the .ocrdata file). Be careful to keep the same permissions on the files. Then, in the host this time (not within the container), stop the container and go to your data folder under nextcloud/config. You'll find a config.php file. In this file, you can try changing 'datadirectory' => '/data-ro/nextcloud/data', to 'datadirectory' => '/data/nextcloud/data', Again, this is NOT a workaround; I ended up reinstalling from scratch (at one point, Nextcloud just deleted all my files). I suggest waiting for Dimtar's further investigation.
    1 point
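To make the config.php change above concrete, here is the edit rehearsed on a scratch copy (the paths are the ones from the post; the real file lives under your appdata nextcloud/config folder, so back it up first):

```shell
# Build a scratch copy of config.php to demonstrate the edit safely.
mkdir -p /tmp/nc-demo
cat > /tmp/nc-demo/config.php <<'EOF'
<?php
$CONFIG = array (
  'datadirectory' => '/data-ro/nextcloud/data',
);
EOF
# Swap the read-only mount for the writable one, exactly as described above:
sed -i "s|'/data-ro/nextcloud/data'|'/data/nextcloud/data'|" /tmp/nc-demo/config.php
grep datadirectory /tmp/nc-demo/config.php
```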
  35. By this guide Plex uses your RAM while transcoding, which prevents wearing out your SSD. Edit the Plex container, enable the "Advanced View", add this to "Extra Parameters" and hit "Apply":
--mount type=tmpfs,destination=/tmp,tmpfs-size=4000000000
Side note: if you dislike permanent writes to your SSD, add "--no-healthcheck" too.
Now open Plex -> Settings -> Transcoder and change the path to "/tmp". If you would like to verify it's working, open the Plex container's console and enter this command while a transcode is running:
df -h
Transcoding to the RAM disk works if "Use%" of /tmp is not "0%":
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.8G 193M 3.7G 5% /tmp
After some time it fills up to nearly 100%:
tmpfs 3.8G 3.7G 164M 97% /tmp
And then Plex purges the folder automatically:
tmpfs 3.8G 1.3G 3.5G 33% /tmp
If you stop the movie, Plex will delete everything:
tmpfs 3.8G 3.8G 0 0% /tmp
With this method Plex never uses more than 4GB of RAM, which is important, as fully utilizing your RAM can cause unexpected server behaviour.
    1 point
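If you want to script the verification step above, the Use% column from df is easy to check. This sketch inlines a sample df line from the post rather than querying a real Plex container:

```shell
# Sample df output line from the post; live, you would use: df -h /tmp | tail -n 1
line='tmpfs           3.8G  193M  3.7G   5% /tmp'
# Field 5 is Use%; strip the percent sign and test it.
use=$(echo "$line" | awk '{print $5}' | tr -d '%')
[ "$use" -gt 0 ] && echo "transcoding to RAM is working" || echo "/tmp is unused"
# → transcoding to RAM is working
```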
  36. NOTE: I have taken information from 3rd-party websites and joined it together; I have placed the links to the sites within this post. First we will set up Grafana and get it to monitor the Unraid environment.
Requirements:
· a running Docker service on your unRAID machine
· Community Applications installed on your system
· SSH access to your unRAID machine (alternatively, a file management application like Krusader, or direct share access to your appdata folder)
Used software:
· InfluxDB - database solution for storing all of our metrics
· Telegraf - plugin-based metrics-gathering software that will feed our metrics into InfluxDB for storage
· Grafana - metrics visualization software that will draw our dashboard
Installation
Part One: InfluxDB
We'll be using InfluxDB as our database solution to save all the precious gathered metrics.
1. Go to your Apps section and search for "InfluxDB".
2. Install the InfluxDB Docker image by atribe. You don't have to make any changes to the settings unless you previously assigned port 8083 or 8086. If you have allocated ports 8083 or 8086 before, change those ports and make note of the change.
Part Two: Telegraf
We're using Telegraf to actually collect the metrics and send them to InfluxDB for storage.
1. Go to your Apps section and search for "Telegraf".
2. Install the Telegraf Docker image by atribe. We'll be using Telegraf, NOT unTelegraf (which does not use a configuration file but lets you change settings only by setting variables in the docker container configuration itself). Please remove the variable "HOST_MOUNT_PREFIX" in the installation overview.
3. After the installation, go to your Docker section and stop the Telegraf docker. Download the telegraf.conf configuration file to your computer and open it with a text editor of your choice.
4. Navigate to the "Output Plugins" section in said config file and change the "urls" value under [[outputs.influxdb]] to your unRAID server's IP and port 8086 (or your chosen port number if you changed the InfluxDB settings).
5. In the same section, change the "database" name to "telegraf".
6. Navigate to the "Input Plugins" section. After "[[inputs.system]]", add "[[inputs.net]]" on a new line. This will enable network monitoring for all network interfaces.
7. If you want to monitor your docker containers, remove the # in front of [[inputs.docker]].
8. If you would like to, you can change how often metrics are gathered in line 28 of the file under the section "Configuration for telegraf agent". The more frequently you gather metrics, the more CPU-heavy Telegraf gets. I'd recommend 30 seconds.
9. Save the config file on one of your unRAID server's shares. Be careful to name it "telegraf.conf". Connect to your machine via SSH and move the "telegraf.conf" file from your user share to /mnt/user/appdata/telegraf. If a telegraf.conf file already exists, delete it and replace it with your file.
10. Restart Telegraf. From now on it should be feeding metrics into your running InfluxDB database.
Part Three: Grafana
Grafana will be our front end, the piece of software you'll be observing all the metrics with. And since nobody likes staring at database tables to see CPU usage, we'll teach Grafana to draw some nice graphs with your collected metrics.
1. Go to your Apps section and search for "Grafana".
2. Install the Grafana Docker image by atribe. Set GF_SERVER_ROOT_URL to your server's IP address (put http:// in front of it) and change GF_SECURITY_ADMIN_PASSWORD to a password of your liking.
3. After the installation, go to Grafana's web UI (http://yourserverip:3000). Log in with the user name "admin" and your chosen password. You'll be greeted by your Home Dashboard.
4. Click on "Create your first data source". Give it a name, select InfluxDB as the type, enter http://YourServerIP:8086 for URL, and set access to direct. Under InfluxDB Details set the database to telegraf. Save and test it. Grafana should now be connected to your InfluxDB database.
Screenshots are available at Technical Ramblings. Download the Unraid System Dashboard. Next, import the dashboard by hovering over the + icon and selecting Import, locate your Unraid System Dashboard and click Load. Give it a name and UID, select the database in the drop-down and click Import. You should now have a basic dashboard showing some information. If not, click on one of the panel titles, choose Edit, and change $telegrafdatasource to your InfluxDB data source. Well done, you now have Grafana working.
Now for the Plex monitoring. Install the Tautulli docker and link it to your Plex, then install Varken. Go to www.maxmind.com, create an account, and follow these steps.
MaxMind (required when using the Tautulli module):
1. Sign up for a MaxMind account. Make sure to verify the account.
2. Go to your Account, then Services > My License Key in the side menu, then click "Generate New License Key".
3. Enter a license key description, select "No" for "Will this key be used for GeoIP Update?", then click "Confirm".
4. Copy the License Key and fill in the Varken config.
For the Tautulli API key, go to the Tautulli docker webGUI, click on Settings, then Web Interface, and at the bottom copy the API key into the Varken ini file. Navigate to the appdata\Varken folder and open varken.example.ini. You will need to modify the following (see https://github.com/Boerderij/Varken/wiki/Configuration): set the modules we will not be using to false, copy in your MaxMind license key, and enter your unRAID IP address:
[global]
sonarr_server_ids = false
radarr_server_ids = false
lidarr_server_ids = false
tautulli_server_ids = 1
ombi_server_ids = false
sickchill_server_ids = false
unifi_server_ids = false
maxmind_license_key = xxxxxxxxxxxxxxxx
[influxdb]
url = CHANGE THIS TO YOUR UNRAID SERVER IP
port = 8086
ssl = false
verify_ssl = false
username = root
password = root
[tautulli-1]
url = CHANGE THIS TO YOUR UNRAID SERVER IP:8181
fallback_ip = 1.1.1.1
apikey = xxxxxxxxxxxxxxxx
ssl = false
verify_ssl = false
get_activity = true
get_activity_run_seconds = 30
get_stats = true
get_stats_run_seconds = 3600
Restart Varken. To test, open your Plex server and start streaming something, then open a browser and go to http://YOUUNRAIDIPADDRESS:8181/api/v2?apikey=YOURTAUTULLIKEY&cmd=get_activity. This should give you a page of information.
Set up a new data source pointing to the varken database, then download the dashboard from https://alexsguardian.net/2019/02/21/monitoring-your-media-server-with-varken/ and import it. Lastly, all you need to do is go through the new dashboard and change the data source to point to Varken.
Reference websites:
https://www.reddit.com/r/unRAID/comments/7c2l2w/howto_monitor_unraid_with_grafana_influxdb_and/
https://technicalramblings.com/blog/how-to-setup-grafana-influxdb-and-telegraf-to-monitor-your-unraid-system/
Plex references:
https://wiki.cajun.pro/books/varken/page/breakdown
https://github.com/Boerderij/Varken/wiki/Configuration
    1 point
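For reference, steps 4-7 of the Telegraf part above boil down to a telegraf.conf fragment like this (the IP address is a placeholder for your own unRAID server):

```toml
# Output Plugins section (steps 4-5)
[[outputs.influxdb]]
  urls = ["http://192.168.1.100:8086"]   # your unRAID IP and InfluxDB port
  database = "telegraf"

# Input Plugins section (steps 6-7)
[[inputs.system]]
[[inputs.net]]      # added on a new line to enable network monitoring
[[inputs.docker]]   # uncommented only if you want container metrics
```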
  37. My apologies for the inconvenience caused. Apparently, as of April 24, 2020, there is no longer any Crucial branded ECC RAM. The server ECC RAM is now produced exclusively under the Micron brand name. When I searched for "ECC" under Crucial brand it brought up "non-ECC" results since ECC RAM is no longer a Crucial product. I did not notice this. Although you have ordered some ECC RAM that will work, for the sake of anyone else looking, here is a link to the Micron ECC UDIMM/Unbuffered RAM on Amazon. And here it is on Newegg.
    1 point
  38. Just figured out how I could get in: by forcing the sign-in page back to /super in the URL and then matching the credentials in the template. I changed the template credentials to match the Shinobi project defaults, so I am not entirely sure which worked, but I assume it's the template that matters. Anyway, off to get into more trouble. Cheers.
    1 point
  39. Is anybody here, who is running a CI/CD pipeline to test, build and push your own docker images on your unRAID server and would like to share your experience and/or setup?
    1 point
  40. Updated 4/29/2017. There are several options for running Plex on unRAID. Here are some details to help you decide which to use.
TL;DR: If you already have a lot of Linuxserver.io dockers, install the Linuxserver.io Plex docker. If you have a lot of Binhex dockers, install the Binhex docker. Or if you would rather run the official docker and get your support from the Plex forums, install the official Plex docker. They all essentially do the same thing.
Linuxserver.io (LSIO)
Installs the latest public or Plex Pass version, depending on what your Plex account has access to. Restart the container to get the latest. You can override this behavior by setting the VERSION environment variable to roll back to a previous version or install a beta/forum-only release. Like all Linuxserver.io containers, this is updated with the latest upstream security and package updates every Friday at 23:00 GMT. Note that this requires a container restart, which will automatically update Plex according to your VERSION environment variable. Has great support here on these forums. Also see this very helpful configuration information.
Binhex
Installs the latest public or Plex Pass release that is available from AUR. Update the container to get the latest. You can override this behavior by installing a specific tagged version of the docker to roll back to a previous version (see Q11). It is not possible to install a beta/forum-only release with this docker. Has great support here on these forums. Also see this very helpful configuration information.
Plex official docker
Installs the latest public or Plex Pass version, depending on what your Plex account has access to. Restart the container to get the latest. You can override this behavior by installing a specific tagged version of the docker to roll back to a previous version. It is not possible to install a beta/forum-only release with this docker. Mainly supported in the Plex forums rather than here. Also see this very helpful configuration information.
Limetech
Installs the public release only, not Plex Pass. jonp said it will be discontinued.
Needo
Not really supported. Not recommended for new installs; existing users should probably switch to another docker.
Any plugin
The community generally recommends switching to a docker.
    1 point
  41. It's not as bad as you are thinking it is. You only need to add user names for those wanting access to Private shares or wanting write access to Secure shares; everyone else will have "guest" access (by "guest" I mean not the user name "guest" but anonymous access). Actually it's better NOT to have a user name "guest", because that will confuse AFP when/if you use that (LOL, that's another story). Sorry, these are not unRaid-specific issues.
Ok, to be exact, you don't have to use your Windows login name. You can use any name that's defined on the unRaid side. You just have to make sure that the very first access to any share on the server results in you correctly entering that user name/password in the dialog box. Having done that once, your Windows PC will "remember" those credentials for future access to the same server, and you won't have to enter the username/password again.
Let me give you an operational example. Let's say your Windows PC netbios name is "mypc" and your Windows login name is "larry". Further, let's say your server netbios name is "tower" and you have a single user defined on the unRaid side named "curly". Now you open Network and click on "tower", and a window opens showing all the shares. In this state, if you now click on a "public" share, what happens behind the scenes is that Windows tries to authenticate with the server as user "mypc\larry". On the unRaid side, samba (the linux smb protocol module) will check if "larry" exists. In this case no, so samba will check if user "mypc" exists; in this case also no. So samba will now see that "guest" access is enabled for the server, so it will reply to Windows with "success" but, on the samba side, will associate any further access with the "nobody" user on the linux side. Meanwhile, Windows stores the fact that it successfully connected as user "mypc\larry" in its own credentials cache.
Now, after the above, you click on a Private share where the only unRaid user with access is "curly". Samba will see the request to the share as user "larry" and tell Windows that the connection is unauthorized (because larry is not in the list of users for the Private share). Windows sees this and presents you with a username/password dialog box. So you enter "curly" as the user. But now Windows sees that you already have a connection to the same server as user "larry", and it DOES NOT ALLOW multiple connections to the same server via different user names. This is a well-known limitation/bug in Windows. Some people get around it, e.g., by connecting to the server using the IP address in order to fool Windows into thinking it's a different server.
To get the above to work, you would close any windows that might have a previous connection to the first public share, then open a command window and type "net use * /delete". That command closes all current server connections. Now click on your Private share and enter the "curly" user name; it will work this time.
Same scenario, but this time after opening Network and clicking on "tower" you happen to click on a Private share first. You get the dialog box, log in as "curly", and everything works. Now you click on a Public share and it still works (because on the unRaid side all user names are accepted for Public shares). So you see, the behavior can be quite different and confusing depending on what you click on after clicking on the server. Now you also know why it's easier to just use the same user names.
    1 point
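The recovery sequence from the example above, typed at a Windows command prompt (the server, share, and user names are the ones from the story; substitute your own):

```bat
:: Close ALL cached SMB connections to every server first:
net use * /delete
:: Then reconnect to the Private share with the intended credentials
:: (Windows will prompt for curly's password):
net use \\tower\Private /user:curly
```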