Leaderboard


Popular Content

Showing content with the highest reputation since 02/26/19 in Posts

  1. 7 points
    If someone does want to help, then in essence the problem is that the runc version included in Unraid v6.6.7 and v6.7.0rc5 doesn't currently have an Nvidia patch available for it. One approach we've thought of is replacing the runc shipped in Unraid completely with a separate version that does have the patch available, although we're not sure whether this will cause problems. Bassrock and I have been working on it, but he's very busy at the moment and I'm on a family holiday with my wife and daughter, so I've been limited to working on this after everyone else has gone to bed. I'm not willing to spend any time in the day working on it as I see little of my daughter/wife when we're working and at home, so I'm cherishing every minute of the time I am spending with them on holiday, and for me that is way more important than anything Unraid or Nvidia related. Sent from my Mi A1 using Tapatalk
  2. 4 points
    It's not an issue with stock Unraid; the issue is that there isn't a patch available for this runc version. The recent Docker update was made for security reasons, and Nvidia haven't caught up yet. Sent from my Mi A1 using Tapatalk
  3. 4 points
    I'm on holiday with my family. I have tried to compile it several times but there are some issues that need working on. It will be ready when it's ready - a week for something that is free is no time at all. We're not releasing the source scripts for the reasons I outlined in the original script, but if someone isn't happy with the timescales we work to, then they are more than welcome to compile and create this solution themselves and debug any issues; the source code is all out there. I've made my feelings about this sort of thing well known before, but I will outline it again: we're volunteers with families, jobs, wives and lives to lead. Until the day arrives when working on this stuff pays our mortgages, feeds our kids and allows us to resign our full-time jobs, things happen at our pace and our pace only. We have a Discord channel that people can join, and if they want to get involved then just ask, but strangely, whenever I offer, the standard reply is that people don't have enough free time. If that is the case, fine, but don't assume any of us have any more free time than you - we don't, we just choose to dedicate what little free time we have to this project.
  4. 4 points
    I'm seeing the same thing as well. Found this when searching around: https://forums.sabnzbd.org/viewtopic.php?t=23364 It started up normally after doing the following:
    1. Connected to the docker (my container name was "sabnzbd"). To connect to a running Docker container, use docker ps to get the name of the existing container, then use docker exec -it <container name> /bin/bash to get a bash shell inside it.
    2. cd /config/admin
    3. mv server.cert server.cert.old (or delete it, but I was trying to play it safe)
    4. mv server.key server.key.old (or delete it, but again playing it safe)
    I did an ls -al afterwards and saw that server.cert was immediately recreated but not server.key. I checked SAB and it was then running normally.
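    For convenience, the fix above can be run as a couple of one-off commands from the Unraid terminal. This is just a sketch assuming the container is named "sabnzbd" and its config volume is mapped to /config as in the common templates - adjust both to match your setup:

        # Rename the stale certificate and key so SABnzbd regenerates them (safer than deleting them outright).
        docker exec sabnzbd /bin/bash -c 'cd /config/admin && mv server.cert server.cert.old && mv server.key server.key.old'
        # Optional: restart the container if the new certificate doesn't appear on its own.
        docker restart sabnzbd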
  5. 3 points
    I'd like to see rsync as GUI-based instead of trying to figure out the command lines. It should also show you the completion status of the rsync - the overall progress of the sync - as I don't see that in the command line. It would be nice to have it GUI-based like FreeNAS, with check boxes and so on... it's just a thought
  6. 3 points
    @ezhik found that downgrading LSI 2008/2308 firmware to p16 restores the trim function with the current driver, so the trim issue is caused by the combination of driver, firmware and filesystem. While I personally wouldn't like to be running an older firmware, it might be worth considering for users without a better option.
  7. 3 points
    #!/bin/bash
    #This should always return the name of the docker container running plex - assuming a single plex docker on the system.
    con="`docker ps --format "{{.Names}}" | grep -i plex`"

    echo ""
    echo "<b>Applying hardware decode patch...</b>"
    echo "<hr>"

    #Check to see if Plex Transcoder2 exists first.
    exists=`docker exec -i $con stat "/usr/lib/plexmediaserver/Plex Transcoder2" >/dev/null 2>&1; echo $?`
    if [ $exists -eq 1 ]; then
        # If it doesn't, we run the clause below
        docker exec -i $con mv "/usr/lib/plexmediaserver/Plex Transcoder" "/usr/lib/plexmediaserver/Plex Transcoder2"
        docker exec -i $con /bin/sh -c 'printf "#!/bin/sh\nexec /usr/lib/plexmediaserver/Plex\ Transcoder2 -hwaccel nvdec "\""\$@"\""" > "/usr/lib/plexmediaserver/Plex Transcoder";'
        docker exec -i $con chmod +x "/usr/lib/plexmediaserver/Plex Transcoder"
        docker exec -i $con chmod +x "/usr/lib/plexmediaserver/Plex Transcoder2"
        docker restart $con
        echo ""
        echo '<font color="green"><b>Done!</b></font>' #Green means go!
    else
        echo ""
        echo '<font color="red"><b>Patch already applied or invalid container!</b></font>' #Red means stop!
    fi

    EDIT: Just corrected some flawed assumptions in the logic above. Using grep -i to grab the container name so that it matches without case sensitivity. Using a variable to capture the return value of the stat, since docker exec -it can't be used and docker exec -i always returns 0. Flipped -eq 0 to -eq 1 since that was the inverse of the intended behavior. The only weird thing is that something prints "plex" lowercase and I don't know where.
    EDIT2: Figured that out - docker restart $con prints the name of the container once it's restarted. Could redirect the output to /dev/null, though.
  8. 2 points
    Parity Check Tuning plugin

    The Parity Check Tuning plugin is designed to allow you to split a parity check into increments and then specify when those increments should be run. It will be of particular use to those who have large parity drives, so that the parity check takes a long time, and who leave their Unraid servers powered on 24x7. The idea is that you can specify time slots when the increments should be run, chosen to be times when the Unraid server is likely to be idle.

    As an example, on my system I have a 10TB parity disk and an uninterrupted parity check takes about 30 hours to complete. I have my normal scheduled parity checks set to run monthly. By using this plugin to run 3-hour increments, the elapsed time extends to 10 days (10 x 3 = 30), but I do not notice any impact on my normal use as the increments are run when I am not using the system. Once enough increments have run to complete the scheduled parity check, no further increments will be run until the time for the next scheduled check comes around.

    The settings page is added as an extra section to the Settings->Scheduler page (see the screenshot below) in the Unraid GUI, as this seemed the most logical place for it to appear. The initial release of the plugin allows you to specify a single daily time slot for running increments. This seems to satisfy the basic use case, but I am amenable to others making the case for something more sophisticated.

    Debug feature
    If you enable the option for debug logging then you will see reasonably verbose entries appearing in the syslog about how this plugin is functioning internally. All these entries include the word DEBUG so it is clear that they were produced by turning on debug logging. Although this feature is primarily aimed at tracking down any issues that might be reported and at developing new features, the entries will be meaningful to any users interested in such matters. When this option is set to Yes, you are offered an additional option of Hourly for the frequency at which this plugin should pause/resume parity check increments. This was added primarily to help with testing and with tracking down any issues that users might experience in using the plugin. Early feedback has suggested that users new to this plugin can use this feature as a way of getting a feel for how the plugin operates.

    Built-in Help
    The settings page for this plugin has built-in help describing the meaning of the various settings. You can click on the description text for any particular setting to toggle it on/off for that setting, or you can turn it on/off at the page level by using the standard Help toggle in the Unraid GUI. Suggestions for improving the wording or expanding on the provided text are welcomed, as it is not intended to produce any separate documentation.

    Planned Enhancements
    There are a few enhancements that are already planned (and on which work has started):
    The settings screen currently has an entry for whether parity checks started outside a normal scheduled one (e.g. manually started, or started by the system after an unclean shutdown) should also be run in increments. It is likely that in such a scenario the user may be interested in getting their array back to health as soon as possible and would like the check to run without interruption. At the moment you can only specify Yes, as the code to support the No option is not yet complete.
    Improve the history kept about parity checks that are paused and resumed using this plugin, so that the actual running time and the total elapsed time of the parity check are both tracked.
    Pause parity checks if disks overheat and resume them when they cool down. Ideally an Unraid server has sufficient cooling that such a feature should not be required, but anecdotal evidence suggests that a significant number of people have problems with systems overheating under load.
    Suggestions for other possibilities are always welcomed.

    Wish List
    This is a holder for "blue sky" ideas that have been expressed, for which there is no idea whether they are even technically possible. They are kept here as a reminder and for others to perhaps expand on, and even perhaps come up with ideas for implementation.
    Auto detect idle periods: the idea is that instead of the user having to specify start/stop times for running parity check increments, the plugin should automatically detect periods when the system is idle and resume a parity check then. This would need the complementary option of automatically detecting that the system is no longer idle so that the check can be paused.
    Avoid running a parity check if the mover is running. The mover and parity checking severely degrade each other's performance. Some way of removing (or at least minimising) this conflict would be desirable. There are a lot of permutations that need to be thought through to come up with a sensible strategy.
    Stop docker containers during a parity check. The ability to stop specified docker containers prior to the check running and restart them after the check is paused or completed. A workaround for this is to use the User Scripts plugin (a rough sketch is included at the end of this post), although an integrated capability would be easier to use.
    Resume parity checks on array start. The current Limetech implementation of pause/resume does not allow a parity check to be started from any point except the beginning. If the ability to start at a defined offset is ever provided then this could be implemented.
    Partial parity checks: this is just a different facet of the ability to resume parity checks on array start, where you deliberately set the system up to perform part of a parity check with reboots in between the parts.

    Feedback
    Feedback from users on the facilities offered by this plugin is welcomed, and is likely to be used to guide the direction of any future enhancements. It will be interesting to hear how useful users find this plugin to be in the normal running of their systems. Please feel free to suggest any changes that you think would enhance the experience, even if it is only a rewording of the text.

    Requirements
    Unraid 6.7 rc3 or later
    Community Applications (CA) plugin. It is expected that this plugin will be installed via the Apps tab (i.e. the Community Applications plugin) and the appropriate template has been prepared to allow CA to handle it tidily.

    Installation
    The Parity Check Tuning plugin is available for installation via the Community Applications plugin. If you navigate to the Apps tab and search for 'Parity Tuning', this plugin will show up and can be installed from there. Once the plugin is installed, if you go to Settings->Scheduler in the Unraid GUI you will see an extra section has appeared that allows you to specify the settings you want this plugin to use.

    Restrictions/Limitations
    This plugin does not initiate a parity check - it only provides facilities for pausing/resuming one according to the specified criteria.
    If there is no parity check running during the timeslot specified for an increment then this plugin takes no action.
    If the array is stopped for any reason then the current progress of a running parity check is lost. This means that the next time the array is started the parity check will need to be restarted from the beginning. This is a restriction imposed by the current Limetech implementation. The plugin is designed so that this restriction can easily be removed if Limetech can provide a way of starting parity checks at a specified offset rather than starting all parity checks from the beginning.
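    As a stop-gap for the "stop docker containers during a parity check" item above, something along these lines could be dropped into the User Scripts plugin and scheduled to run every few minutes. It is only a rough sketch: the container names are placeholders, and it assumes recent Unraid releases expose an mdResync counter in /proc/mdstat that is non-zero while a check is running - verify that on your own system before relying on it.

        #!/bin/bash
        # Placeholder list of containers to keep stopped while a parity check runs.
        CONTAINERS="plex nzbget"

        # Assumption: Unraid's /proc/mdstat contains an mdResync=<sectors> line that
        # is 0 when no check/rebuild is in progress and non-zero while one is running.
        if grep -q "^mdResync=0" /proc/mdstat; then
            # No check running: make sure the containers are up.
            docker start $CONTAINERS >/dev/null 2>&1
        else
            # Check (or rebuild) in progress: stop the containers.
            docker stop $CONTAINERS >/dev/null 2>&1
        fi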
  9. 2 points
    Ok, this may be dumb, but I have a use case that this would be really effective for. Currently I pass through 2 unassigned 10k drives to a VM as scratch disks for audio/video editing. In the VM, they are then set up as RAID 0. Super fast. The problem is that the drives are then bound to that VM. I can't use the disks for any other VM nor span a disk image (work areas) for separate VMs on that pool. I think it would be better to have the host (unRaid) manage the raid, and then mount the "network" disks and use them that way. Since the VM uses paravirtualized 10GbE adapters, performance should be no issue, and multiple VMs could access them as well. Why don't I just add more drives to my current cache pool? Separation. I don't want the dockers that are running, or the mover, or anything else to interfere with performance. Plus, I'm not sure how mixing SSDs and spinners would work out. Maybe ok? I'm sure someone has done that. TLDR: Essentially I'm suggesting that we be able to have more than one pool of drives in a specifiable raid setup (0 and 10, please!)
  10. 2 points
    Ok, thanks for the above. It looks like you aren't low on space, so your issue must be down to one or both of the following: 1. corruption of the cache drive 2. corruption of the docker img (this contains all docker images and containers). It's most probably docker image corruption, so you will need to stop the docker service, delete your docker image and then re-create it, then restore your containers. Steps to do this (stolen from a previous post):-
  11. 2 points
    I have the same scenario, here's how I handle it.
    pfSense VM set to autostart
    User Scripts at array start kicks off a pfSense monitor script
    The pfSense monitor script waits until a successful ping of the internal gateway, signifying pfSense is online, then starts the additional VMs using virsh start vmname
    Unraid docker delays stagger the starts, with a docker that doesn't need internet given a 2 minute delay, which is normally plenty of time for pfSense to get rolling. Other dockers are set for many seconds of delay, in a logical order.
    If I couldn't rely on a quick connection to the internet, I'd add the dockers to my pfSense monitor script and remove them from auto start. As it is right now, I've had no issues with things firing off correctly. Then again, I hardly ever restart my server; it's generally up for several months at a time without a restart.
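    A minimal sketch of what such a monitor script might look like, assuming a pfSense LAN gateway of 192.168.1.1 and placeholder VM names - both would need changing for your environment:

        #!/bin/bash
        GATEWAY="192.168.1.1"          # internal pfSense gateway address (placeholder)
        VMS="Windows10 UbuntuServer"   # VMs to start once the firewall is up (placeholders)

        # Wait until the gateway answers a ping, i.e. pfSense has finished booting.
        until ping -c 1 -W 2 "$GATEWAY" >/dev/null 2>&1; do
            sleep 10
        done

        # pfSense is online - start the remaining VMs.
        for vm in $VMS; do
            virsh start "$vm"
        done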
  12. 2 points
    Plex have screwed up on build 1.15; if I were you I would roll back, same for you @RevLaw. If you don't know how to do that:- https://forums.unraid.net/topic/44142-support-binhex-plex-pass/?do=findComment&comment=725645
  13. 2 points
    @1812 weird, I did some switch research as I'm re-doing my home network and have a few of these on the way, even before I saw your post. We'll see if I run into the same issues you did getting switches a few months later 🤔😬
  14. 2 points
    Other people with Plex Pass issues - comforting for me that it's not something related to the docker image I've built, not so comforting for those who have Plex natively installed and can't roll back easily:- https://www.reddit.com/r/PleX/comments/auo6jd/new_beta_pms_version_available_1151710ece95b3a1/
  15. 2 points
    I've been running my two Unraid servers as VMs on top of ESXi for years, and I'm using PlopKExec without problems: https://download.plop.at/plopkexec/plopkexec.iso If you don't run any VMs, 4GB would be reasonable, but you can add more later if you want. As for transcoding - you should just run some tests to see how much it eats your CPUs; start with 4 vCPUs. The free version of ESXi has an 8 vCPU limitation. I try to avoid any transcoding on my Plex server, but you can go the new Unraid Nvidia route and transcode with your GPU - check the plugin forum page. If you pass your IBM 1015 through to the Unraid VM, then ESXi has nothing to do with it or the drives connected to it - Unraid manages spindown of those drives. As for SSDs attached to the host - I have two in my server and have never noticed any problems with them.
  16. 2 points
    V6.6.7 and V6.7.0rc5 uploaded. Sent from my Mi A1 using Tapatalk
  17. 2 points
    Enable Hardware Decoding in Plex

    #!/bin/bash
    con="plex"

    echo ""
    echo "<font color='red'><b>Applying hardware decode patch...</b></font>"
    echo "<hr>"

    docker exec -i $con mv "/usr/lib/plexmediaserver/Plex Transcoder" "/usr/lib/plexmediaserver/Plex Transcoder2"
    docker exec -i $con /bin/sh -c 'printf "#!/bin/sh\nexec /usr/lib/plexmediaserver/Plex\ Transcoder2 -hwaccel nvdec "\""\$@"\""" > "/usr/lib/plexmediaserver/Plex Transcoder";'
    docker exec -i $con chmod +x "/usr/lib/plexmediaserver/Plex Transcoder"
    docker exec -i $con chmod +x "/usr/lib/plexmediaserver/Plex Transcoder2"
    docker restart $con

    echo ""
    echo "<font color='red'><b>Done!</b></font>"

    Description: A translation of the manual steps required to patch the Plex docker to enable hardware decoding if you're running an Nvidia build of Unraid.
    Quick Start: Set this up and run it as a script every time Plex updates. If your container is not called "plex", change the "con" variable (see notes).
    Disclaimer: If it can be improved (or if it's dangerously wrong), please let me know.
    Notes:
    Should be run when Plex is installed/updated.
    From the command line, run "docker ps" to see what your Plex container is called. Set that as the "con" variable in your script (mine is "plex").
    This script is only required until Plex officially supports hardware decoding.
    It performs the same steps as recommended in the Nvidia plugin support thread (where it was originally published), namely:
    Renames the file "Plex Transcoder" to "Plex Transcoder2"
    Creates a new "Plex Transcoder" file with the suggested contents
    Changes permissions on both the "Plex Transcoder" and "Plex Transcoder2" files (not sure it's required on Transcoder2 - it seemed to work for me without)
    Restarts the Plex container (not sure if required, but doing it anyhow)
    It's probably best that nothing is playing whilst the script is run.
    You'll need to have Plex running for the script to work; it would require different code if stopped (it would probably be safer to stop the container first, make the changes, then start again, but here we are).
    Run "nvidia-smi dmon -s u" from the terminal (not within the Plex container) to check whether decoding is working. Set a video to play in a transcoded state, and the 3rd and 4th columns from the end should be non-zero.
    This includes the "exec" addition to the Plex Transcoder file contents.
    Good luck!
  18. 2 points
    Unionfs works 'ok' but it's a bit clunky, as per the scripts above. Rclone are working on their own union remote which would hopefully include hardlink support, unlike unionfs. It may also remove the need for a separate rclone move script by automating transfers from the local drive to the cloud: https://forum.rclone.org/t/advantage-of-new-union-remote/7049/1
  19. 2 points
    Key elements of my rclone mount script:

    rclone mount --rc --rc-addr=172.30.12.2:5572 --allow-other --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 32M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs

    --buffer-size: determines the amount of memory that will be used to buffer data in advance. I think this is per stream.
    --dir-cache-time: sets how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache, so if you upload via rclone you can set this to a very high number. If you make changes directly on the remote they won't be picked up until the cache expires.
    --drive-chunk-size: for files uploaded via the mount. I rarely do this, but I think I should set this higher for my 300/300 connection.
    --fast-list: improves speed, but only in tandem with rclone rc --timeout=1h vfs/refresh recursive=true
    --vfs-read-chunk-size: this is the key variable. It controls how much data is requested in the first chunk of playback - too big and your start times will be too slow, too small and you might get stuttering at the start of playback. 128M seems to work for most, but try 64M and 32M.
    --vfs-read-chunk-size-limit: each successive vfs-read-chunk-size doubles in size until this limit is hit, e.g. for me 128M, 256M, 512M, 1G etc. I've set the limit to off so there's no cap on how much is requested.

    Read more on vfs-read-chunk-size: https://forum.rclone.org/t/new-feature-vfs-read-chunk-size/5683
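    To illustrate how --fast-list pairs with the rc refresh mentioned above, here's a rough sketch of how the mount and a follow-up cache prime could be combined in a user script. The remote, mount point and rc address are taken from the command above; the sleep and the --url flag (pointing the rc client at the non-default listen address) are my own assumptions, so adjust as needed.

        #!/bin/bash
        # Mount in the background with the remote control interface enabled.
        rclone mount --rc --rc-addr=172.30.12.2:5572 --allow-other --buffer-size 1G \
            --dir-cache-time 72h --drive-chunk-size 32M --fast-list --log-level INFO \
            --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off \
            gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &

        # Give the mount a moment to come up, then pre-load the full directory tree
        # so that library scans hit the local dir cache instead of the backend API.
        sleep 10
        rclone rc --url http://172.30.12.2:5572 --timeout=1h vfs/refresh recursive=true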
  20. 2 points
    How do I replace/upgrade my single cache device? (unRAID v6.2 and above only)

    This procedure assumes that there are at least some docker- and/or VM-related files on the cache disk; some of these steps are unnecessary if there aren't.

    Stop all running Dockers/VMs
    Settings -> VM Manager: disable VMs and click apply
    Settings -> Docker: disable Docker and click apply
    Click on Shares and change to "Yes" all cache shares with "Use cache disk:" set to "Only" or "Prefer"
    Check that there's enough free space on the array and invoke the mover by clicking "Move Now" on the Main page
    When the mover finishes, check that your cache is empty (any files in the cache root will not be moved as they are not part of any share)
    Stop the array, replace the cache device, assign it, start the array and format the new cache device (if needed), then check that it's using the filesystem you want
    Click on Shares and change to "Prefer" all shares that you want moved back to cache
    On the Main page click "Move Now"
    When the mover finishes, re-enable Docker and VMs
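    Before the "stop the array, replace the cache device" step, it can be reassuring to confirm from a terminal that the mover really has emptied the cache. A quick sketch, assuming the cache is mounted at /mnt/cache as on a standard Unraid install:

        # List anything still living on the cache; ideally this comes back empty.
        ls -lA /mnt/cache
        # Show how much space is still in use on the cache device.
        du -sh /mnt/cache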
  21. 1 point
    I am using /tmp and have verified several times that Plex is transcoding there. Being an OS folder outside of /mnt, it is in RAM and works fine. I have no comment on /dev/shm as I have never used it, since /tmp works for me.
  22. 1 point
    I have. I just prefer to use LSIO images if possible because they are always very well maintained and constantly updated.
  23. 1 point
    Look in the Musicbrainz thread. You added the tag to the Plex container and not the Musicbrainz container. Click the Add container button and choose plex and it should be the same as before. Probably can add it back from CA also.
  24. 1 point
    Go to Tools - Diagnostics and attach the complete diagnostics zip file to your next post.
  25. 1 point
    I just updated the mount script - give it another whirl as it ran quite fast for me. Try a full Plex scan and you'll see the speed difference