Leaderboard

Popular Content

Showing content with the highest reputation on 04/29/20 in Posts

  1. Privoxy is starting fine at port 8118. Make sure to expose that port (e.g., -p 8118:8118 if you are using the docker CLI), then configure your end machine to use the host's IP address and port 8118. No need for FoxyProxy; Chrome / OS proxy settings work fine. You can check that it's working at http://config.privoxy.org/ (a minimal sketch follows below).
    2 points
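A minimal sketch of the docker run invocation described in the post above; the image name is a placeholder for whichever Privoxy container you actually run:

    # Publish Privoxy's listener so other machines on the LAN can reach it
    docker run -d --name privoxy -p 8118:8118 some/privoxy-image   # image name is hypothetical
    # Point the client's proxy settings at <host-ip>:8118, then visit
    # http://config.privoxy.org/ to confirm traffic is being proxied.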
  2. ***Update***: Apologies, it seems there was an update to the Unraid forums which removed the carriage returns in my code blocks. This was causing people to get errors when typing commands verbatim. I've fixed the code blocks below and all should be Plexing perfectly now.

Granted this has been covered in a few other posts, but I just wanted to have it with a little bit of layout and structure. Special thanks to @Hoopster, whose post(s) I took this from.

What is Plex Hardware Acceleration?

When streaming media from Plex, a few things are happening. Plex will check against the device trying to play the media that:

- Media is stored in a compatible file container
- Media is encoded in a compatible bitrate
- Media is encoded with compatible codecs
- Media is a compatible resolution
- Bandwidth is sufficient

If all of the above are met, Plex will Direct Play, i.e. send the media directly to the client without it being changed. This is great in most cases as there will be very little, if any, overhead on your CPU. This should be okay in most cases, but you may be accessing Plex remotely or on a device that is having difficulty with the source media. You could either manually convert each file or get Plex to transcode the file on the fly into another format to be played.

A simple example: your source file is stored in 1080p. You're away from home and you have a crappy internet connection. Playing the file in 1080p is taking up too much bandwidth, so to get a better experience you can watch your media in glorious 240p without stuttering / buffering on your little mobile device by getting Plex to transcode the file first. This is because a 240p file requires considerably less bandwidth than a 1080p file.

The issue is that depending on which format you're transcoding from and to, this can absolutely pin all your CPU cores at 100%, which means you're gonna have a bad time. Fortunately Intel CPUs have a little thing called Quick Sync, which is their native hardware encoding and decoding core. This can dramatically reduce the CPU overhead required for transcoding, and Plex can leverage it using their Hardware Acceleration feature.

How Do I Know If I'm Transcoding?

You can see how media is being served by first playing something on a device. Log into Plex and go to Settings > Status > Now Playing. If the file is being direct played, there's no transcoding happening. If you see (throttled), it's a good sign: it just means that your Plex Media Server is able to perform the transcode faster than is necessary.

To initiate some transcoding, go to where your media is playing and click Settings > Quality > Show All > choose a quality that isn't the default one. If you head back to the Now Playing section in Plex you will see that the stream is now being transcoded. I have Quick Sync enabled, hence the "(hw)", which stands for, you guessed it, Hardware. "(hw)" will not be shown if Quick Sync isn't being used in transcoding.

Prerequisites

1. A Plex Pass - required for Plex Hardware Acceleration. Test to see if your system is capable before buying a Plex Pass.
2. An Intel CPU that has Quick Sync capability - search for your CPU using Intel ARK.
3. A compatible motherboard - you will need to enable the iGPU in your motherboard BIOS.

In some cases this may require you to have the HDMI output plugged in and connected to a monitor in order for it to be active. If you find that this is the case on your setup, you can buy a dummy HDMI doo-dad that tricks your unRAID box into thinking that something is plugged in.

Some machines like the HP MicroServer Gen8 have iLO / IPMI, which allows the server to be monitored / managed remotely. Unfortunately this means the server has 2 GPUs and ALL GPU output from the server is passed through the ancient Matrox GPU. So as far as any OS is concerned, even though the Intel CPU supports Quick Sync, the Matrox one doesn't. =/ You'd have better luck using the new unRAID Nvidia Plugin.

Check Your Setup

If your config meets all of the above requirements, give these commands a shot; you should know straight away if you can use Hardware Acceleration. Log into your unRAID box using the GUI and open a terminal window, or SSH into your box if that's your thing. Type:

cd /dev/dri
ls

If you see card0 and renderD128 listed, your unRAID box has Quick Sync enabled. Those are the two items we're interested in specifically. If you can't see them, not to worry; type this:

modprobe i915

There should be no return or errors in the output. Now again run:

cd /dev/dri
ls

You should see the expected items, i.e. card0 and renderD128.

Give Your Container Access

Lastly we need to give our container access to the Quick Sync device. I am going to passive-aggressively mention that they are indeed called containers and not dockers. Dockers is a company that manufactures boots and pants and has nothing to do with virtualization or software development, yet. Okay, rant over.

We need to do this because the Docker host and its underlying containers don't have access to anything on unRAID unless you give it to them. This is done via Paths, Ports, Variables, Labels or, in this case, Devices. We want to provide our Plex container with access to one of the devices on our unRAID box. We need to change the relevant permissions on our Quick Sync device, which we do by typing into the terminal window:

chmod -R 777 /dev/dri

Once that's done, head over to the Docker tab and click on your Plex container. Scroll to the bottom, click Add another Path, Port, Variable, select Device from the drop-down and enter the following:

Name: /dev/dri
Value: /dev/dri

Click Save followed by Apply. Log back into Plex and navigate to Settings > Transcoder. Click the SHOW ADVANCED button and enable "Use hardware acceleration where available". You can now do the same test we did above by playing a stream, changing its quality to something that isn't its original format and checking the Now Playing section to see if Hardware Acceleration is enabled. If you see "(hw)", congrats! You're using Quick Sync and Hardware Acceleration.

Persist Your Config

On reboot, unRAID will not run those commands again unless we put them in our go file. So when ready, type into the terminal:

nano /boot/config/go

Add the following lines to the bottom of the go file:

modprobe i915
chmod -R 777 /dev/dri

Press Ctrl+X, followed by Y, to save your go file. And you should be golden!
    1 point
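Condensed from the walkthrough above into one copy-pasteable check (all commands are the ones the post itself uses):

    # Load the Intel graphics driver if the device nodes aren't there yet
    [ -e /dev/dri/renderD128 ] || modprobe i915
    ls /dev/dri              # expect card0 and renderD128
    chmod -R 777 /dev/dri    # let the Plex container open the device

The same two lines (modprobe i915 and chmod -R 777 /dev/dri) go in /boot/config/go so they survive a reboot, as the post describes.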
  3. I'm trying to set up an Ubuntu Desktop 20.04 VM. Eventually the VM locks up and I notice its assigned CPU in Unraid is pegged at 100%. I've tried several times; sometimes I can get the OS installed, and sometimes it locks up during installation. Wondering if it's the same issue as this? I've attached my XML and diagnostic info. Any ideas? There's quite a lot of information for Windows guests here, but I'm having trouble finding equivalent information for Ubuntu guests. I have 3 Windows VMs running perfectly. BTW, I saw @jonp's advice to set UEFI boot mode, i440fx, and to set the CPU mode to emulated. However, I can't get the CPU setting to stick in the GUI: every time I save it, when I go back into the form view it's set back to Host Passthrough (see the sketch below for an XML workaround). ubuntu.xml nas-diagnostics-20200425-2003.zip
    1 point
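If the form view keeps reverting the setting, one possible workaround (a sketch only; the VM name is a placeholder and qemu64 is just one example of an emulated model) is to edit the libvirt XML directly:

    virsh edit "Ubuntu Desktop"   # VM name is hypothetical
    # In the editor, replace the <cpu mode='host-passthrough' .../> element with
    # an emulated model, e.g.:
    #   <cpu mode='custom' match='exact'><model fallback='allow'>qemu64</model></cpu>

Note that re-saving the VM from the Unraid form view may overwrite hand-edited XML, which could be exactly the behavior being reported.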
  4. 1 point
  5. I think it is working now. The machine has been stable for a few hours. I went through the BIOS settings one more time and, deep in there, I think I found the correct C-state settings.
    1 point
  6. Yup, audio worked for me. The only thing I can think of is to double-check your audio capture settings, which I'm sure you have done.
    1 point
  7. If the filesystem is mounted, just do a regular copy with your favorite tool (one example below).
    1 point
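For example, one common choice (the flags are standard rsync; the source and destination paths are placeholders):

    # Archive-mode copy: preserves permissions, ownership and timestamps
    rsync -avh --progress /mnt/disks/old_disk/ /mnt/user/share/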
  8. Thank you for your reply. Great to hear from a fellow home user like myself. Thank you for giving your view on dockers and for giving an alternative to TeamViewer. I will watch the videos this evening and educate myself on what to do. As you said, it is not a technical response, but the words “it just works” made me smile.
    1 point
  9. Can you? Yes. Should you? No. The (multi-drive) cache pool is RAID-based (BTRFS RAID), so you would have either 250GB of available space (RAID-1) or 500GB (RAID-0), i.e. no improvement. A better option is to mount the 250GB drive as an unassigned device and use it for write-heavy activities (e.g. temp download files). That will enhance the life span of both drives and still give you more storage space. (A quick way to check the pool's profile is sketched below.)
    1 point
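To confirm the pool's profile and usable space, assuming the pool is mounted at the Unraid default of /mnt/cache:

    # Shows the data/metadata RAID profile and the free-space estimate
    btrfs filesystem usage /mnt/cache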
  10. I've reached out to shoginn on Discord but have not gotten a response. He has archived most of his repos on GitHub. When I have some time I will see about forking them and re-publishing them on Docker Hub under my own account.
    1 point
  11. A disk gets disabled when a write to it fails. You have to take manual corrective action to clear the disabled state, which is normally to rebuild the disk, either to another drive or back to itself (if you think the drive is OK). The process is documented here in the online documentation.
    1 point
  12. Hi Phillips2010. First up, I'm no expert, just a regular home-based nerd type. So I read your whole post and thought there is a simple answer: basically, Unraid just works!

I was a long-time FreeNAS user; it also works as a NAS storage solution, but you can't just add more random drives into a FreeNAS pool. Once I filled up my drives I started looking at options and I stumbled upon Unraid. Unraid really has been the best decision I made: a re-purposed dual E5-2690 Dell server, running 32GB RAM and a 1TB SSD for a cache, barely gets warm running multiple dockers, a few VMs, and a media server with associated downloading, re-encoding and processing, all whilst dealing with storage for a small business. Access control for users is straightforward and also works as expected. Managing shares, users and VMs is all pretty straightforward too.

I am unsure about Backblaze rules and definitions, but there is a possibility that a PC/Mac VM with access to any shares you wanted backed up could allow you to still run the personal backup service.

The excellent Community Applications (CA) plugin provides access to a large number of functioning docker containers; setup and execution of these is simple for even the less experienced nerds amongst us. Also, no need for TeamViewer, as within the excellent CA plugin there exists a fully functional WireGuard app, although you will need to open a port in your work firewall if that's where the server resides.

To help with your decision making, I highly recommend you check out the tutorials by Spaceinvader One - https://www.youtube.com/channel/UCZDfnUn74N0WeAPvMqTOrtA - you will see these excellent tutorials referred to time and time again throughout the forums, which, by the way, are friendly, helpful and full of posters far more knowledgeable than me!

Not a very technical answer, but to refocus on my opening statement: basically, Unraid just works. Oh, and Docker all the way. I did a search on importing media to a Plex docker from a Plex VM and found this article: https://support.plex.tv/articles/201370363-move-an-install-to-another-system/ Also, Spaceinvader One, who I mentioned earlier, did a video specific to Unraid; although he was moving from one docker container to another docker container, you can see it here -
    1 point
  13. It works for any profile, as long as the old device remains connected during the replacement.
    1 point
  14. Only the first one will be accessible when using the user share; both can be accessed when using the disk shares. (An illustration below.)
    1 point
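A concrete illustration, with hypothetical share and file names, assuming the same file exists on two array disks:

    ls /mnt/user/Movies/example.mkv    # user share: only the first copy is visible
    ls /mnt/disk1/Movies/example.mkv   # disk shares expose each physical copy
    ls /mnt/disk2/Movies/example.mkv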
  15. Correct to both. No need to resize; Unraid will do it on the next array start.
    1 point
  16. Nice to see you solved the issue. A thank you is enough, and so you did. Now that the VM is working again you should check whether the bootloader is logging to file (otherwise files will be written in the EFI folder at every boot).

If you are using Clover:
- Download Clover Configurator, menu bar --> tools --> mount EFI
- Mount the Clover EFI (name should be "EFI")
- Open the mounted EFI: inside there's a folder "EFI" --> "BOOT" and "CLOVER"
- Enter the EFI/CLOVER/ folder and open config.plist with TextEdit
- Inside the "Boot" section, search for "Debug" and change it to false (log to file disabled)
- Delete the log files in the root

If you are using OpenCore:
- Download OpenCore Configurator, menu bar --> tools --> mount EFI
- Mount the OpenCore EFI (name should be "EFI" or "NO NAME")
- Open the mounted EFI: inside there's a folder "EFI" --> "BOOT" and "OC"
- Enter the EFI/OC/ folder and open config.plist with TextEdit
- Change "Target" from 83 to 0 (log to file disabled)
- Delete the log files in the root
    1 point
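The same config.plist edits can be scripted from the macOS terminal with PlistBuddy once the EFI is mounted; a sketch assuming the mount point is /Volumes/EFI:

    # Clover: stop the bootloader logging to file
    /usr/libexec/PlistBuddy -c "Set :Boot:Debug false" /Volumes/EFI/EFI/CLOVER/config.plist
    # OpenCore: Target 0 disables logging (83 had file logging enabled)
    /usr/libexec/PlistBuddy -c "Set :Misc:Debug:Target 0" /Volumes/EFI/EFI/OC/config.plist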
  17. Hahaha, I tried to explain it much simpler so others reading this could understand as well; that is exactly the same thing I was talking about. I did not know about this thread though. I'm still working out some other kinks on my side with some other features I'm working on, and will try to test the sparseness, if I understand it correctly, when I get some more time.
    1 point
  18. I have a GeForce GTX 1050 Ti in an ASRock X370 Taichi motherboard. Unraid seems to find the GPU when I check PCI Devices and IOMMU Groups, resulting in these two lines:

IOMMU group 16:
[10de:1c82] 0c:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] (rev a1)
[10de:0fb9] 0c:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)

However, when I make VMs no GPU is available. The same issue happens with the Nvidia Unraid plugin and the GPU Stats plugin: nothing is ever detected. I'm fairly new to Unraid and could use some help figuring out why the GPU isn't available. I attached a diagnostics report below; let me know if anything else is needed. Thanks for any help. tower-diagnostics-20200429-1208.zip
    1 point
  19. @Mattyfaz I read the issue. I will add that path when I am at a computer, or you can create a pull request adding it. Doing it now.
    1 point
  20. https://wiki.unraid.net/Troubleshooting#Re-enable_the_drive Recommend connecting it to one of the onboard SATA ports first to see if it happens again.
    1 point
  21. Can you try switching to OpenCore to see if the issue is with the bootloader? Simply download the attached file, decompress it and overwrite Clover.qcow2 (make a backup first; a sketch below). The file name is Clover, but it's OpenCore (so you don't have to change the XML; you simply replace the qcow2). I changed the SMBIOS data too (still iMac14,1). Clover.qcow2.zip
    1 point
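A sketch of the backup-and-replace step; the domains path follows the usual macinabox layout but is an assumption for any given setup:

    # Back up the existing image before overwriting it
    cp /mnt/user/domains/Macinabox/Clover.qcow2 /mnt/user/domains/Macinabox/Clover.qcow2.bak
    # Then decompress the attached Clover.qcow2.zip over the original file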
  22. OK, let's have a look at a full log. Can you do the following procedure: https://github.com/binhex/documentation/blob/master/docker/faq/help.md
    1 point
  23. That's strange... can you post your VM XML? Please delete the OSK when you post it. And also please confirm the following:
1. Catalina 10.15.1 and Unraid 6.8.2 works well
2. Catalina 10.15.4 and Unraid 6.8.2 has issues
3. Catalina 10.15.1 and Unraid 6.8.3 has issues
    1 point
  24. Try the attached Clover qcow2; back up yours first in case this doesn't work. I modified the original Clover.qcow2 from the macinabox GitHub repository, so if you changed the SMBIOS data you need to change it again. The config.plist is also the same as the original macinabox one, slightly modified; it should be compatible with the latest revision. It's the latest release, revision 5114, but I cannot test it, so I'm not sure if it will boot or not. Just try. Clover.qcow2.zip
    1 point
  25. I'm not sure, but I don't think he explains how to update the bootloader; there's a pre-made qcow2 Clover image (the one you have, r5097) that macinabox downloads during setup.
    1 point
  26. I would update Clover first (the latest release now is v5 r5114) and check if there is any difference (from r5098 there are improvements in Catalina compatibility). Mount your EFI, download the ISO image from GitHub, mount the ISO, overwrite the files from the ISO into the EFI, check config.plist and eventually adapt it to the new Clover revision (compare the config.plist files to see if there are new options added/removed). Also try injecting the Lilu + WhateverGreen kexts.
    1 point
  27. I can't install plugins. I searched for an answer but have not found one. When I try to select the egg file I get [Object FileList]. When I try to install the plugin by just copying the egg file to the plugin directory, it also doesn't install. In NerdPack I've turned on all the Python options. The plugin I am trying to install is AutoRemovePlus, which is 2.0 compatible. Any idea what the issue is?
    1 point
  28. The decision to remove it is discussed a few posts back. Any Python package that doesn't need compiling can be installed with pip and setuptools. https://forums.unraid.net/index.php?/topic/35866-unRAID-6-NerdPack---CLI-tools-(iftop,-iotop,-screen,-kbd,-etc.)#entry838361
    1 point
  29. Thanks for this tool. I can't find docker-compose in my Nerd Tools. Am I crazy? What am I missing? https://imgur.com/a/I2o8T4N Thanks. EDIT: It looks like maybe it was removed? Tracing back through this thread, I installed python3, python-pip, python-setuptools and libffi, then went to the console and ran pip3 install docker-compose, and that seems to have installed it? Does this sound accurate? (Condensed below.)
    1 point
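A condensed sketch of the sequence described in the edit above (assumes python3, python-pip, python-setuptools and libffi are already installed via NerdPack):

    pip3 install docker-compose
    docker-compose --version   # confirm it landed on the PATH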
  30. I like having my Linux terminal sessions customized with the ZSH shell and oh-my-zsh with git and other plugins. I can set this stuff up on unRAID, but lose all terminal customizations on system reboots. ZSH can stay installed with the Nerd Pack plugin, but I have to reinstall oh-my-zsh via curl/wget, copy my backed-up .zshrc back and change my default shell again. I can probably set up a script to do all that via the go file, but is there a better way to just maintain terminal customizations and configs across system restarts? It seems pretty inefficient, especially if I have to re-install oh-my-zsh on each boot.
    1 point
  31. 1 point
  32. I figured out a simpler way: just define $HOME for scripts:

HOME="/root" sh -c "$(wget https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh -O -)"
    1 point
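Combining items 30 and 32, a sketch of what the /boot/config/go additions might look like; the flash backup location for .zshrc is hypothetical, and --unattended assumes an installer version recent enough to support it:

    # Re-install oh-my-zsh on every boot (unRAID's root filesystem lives in RAM)
    HOME="/root" sh -c "$(wget https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh -O -)" "" --unattended
    # Restore the backed-up config after the installer writes its default .zshrc
    cp /boot/config/extra/.zshrc /root/.zshrc   # backup path is hypothetical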
  33. For anyone else chasing this broken-Ethernet-at-boot issue, I seem to have fixed it on my system by changing the bridge PCI bus to '0x00' and slot to '0x05', thusly:

...
<interface type='bridge'>
  ...
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
...

Since making the change, over a couple dozen boots, I haven't again seen macOS report the (virtual) Ethernet cable as being unplugged.
    1 point
  34. In this 2017 post, @llonca13 says:

That sounds promising! Though I haven't been able to find the original @SpaceInvaderOne (formerly @gridrunner) post referenced, I have attempted to change the slot number in the XML as follows:

...
<address type='pci' domain='0x0000' bus='0x01' slot='0x05' function='0x0'/>
...

However, when I make just the single-digit change and then click Update in the XML view, I immediately get this error pop-up from Unraid:

VM creation error
XML error: Invalid PCI address 0000:01:05.0. slot must be <= 0

Any thoughts on what I'm doing wrong? Do I need to somehow update the definition of PCI bus 1, or otherwise "create" slot 5? Or am I just all confused because @llonca13's original post above is only about passing through a physical NIC, and wouldn't apply to a virtual bridge?
    1 point
  35. You don't need to do anything. Docker Compose isn't some complicated program; it's a single binary file that you add somewhere in your $PATH and you're good to go. The whole install is below.

sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
    1 point
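One unRAID-specific caveat worth noting: /usr/local/bin lives in RAM, so the binary disappears on reboot. A sketch of a go-file workaround, with a hypothetical flash-drive path:

    # In /boot/config/go: restore docker-compose from the flash drive at boot
    cp /boot/config/extra/docker-compose /usr/local/bin/docker-compose   # path is hypothetical
    chmod +x /usr/local/bin/docker-compose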
  36. Thank you very much for this guide; the performance of this "VM" is amazing (and with 0 additional kexts, WTF). I know how much time it took and how much time you saved for a lot of people. Thank you for that. I have one question: did anyone pass through an entire integrated USB PCI controller with success? Whatever I try, it doesn't work. Integrated motherboard sound passthrough works great (ALC662, with patched AppleHDA). And why is my About This Mac showing Core 2 Duo (I have a Sandy Bridge i5 2400)? I don't have anything extra in my XML that defines the CPU (only at the end, qemu: penryn). BTW, if anyone has a problem with Ethernet showing disconnected: you have to edit your XML under the Ethernet device - address type - and put the slot number higher (for example slot=0x05). I lost a couple of hours trying everything I could think of until I found this solution somewhere on Google. EDIT: just seen that gridrunner already pointed this out in another thread. Serves me right for not reading everything first.
    1 point
  37. Figured I would post the solution back here: https://lime-technology.com/forum/index.php?topic=51498.msg533288#msg533288
    1 point
  38. Have you tried nuking your docker image? Turn off Docker in settings, go to your appdata folder and delete the docker image? That's about all the help I can think to offer at this point.

Well, I'm getting closer. The issue was the .conf file location. The required location was noted as /mnt/user/appdata/telegraf/telegraf.conf. It was creating this path automatically, but telegraf.conf was an actual directory. So I removed the directory and placed the file at /mnt/user/appdata/telegraf/. Hope this makes sense, but that at least got things moving. The issue I have now is that the docker log is giving me this error:

Error parsing /etc/telegraf/telegraf.conf, line 70: field corresponding to `logfile' is not defined in `*config.AgentConfig'

I believe it has something to do with this line... not sure what to do to fix the error.

## Specify the log file name. The empty string means to log to stderr.
logfile = ""
    1 point
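One possible workaround, assuming the container's telegraf build simply predates the logfile agent option: comment the line out rather than leaving it as an empty string (the path matches the one quoted above):

    sed -i 's/^\s*logfile = ""/# logfile = ""/' /mnt/user/appdata/telegraf/telegraf.conf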