
Leaderboard


Popular Content

Showing content with the highest reputation since 09/20/18 in all areas

  1. 22 points
    For as long as I can remember, Unraid has never been great at simultaneous array disk performance, but it was pretty acceptable. Since v6.7 there have been various users complaining of, for example, very poor performance when running the mover and trying to stream a movie. I noticed this myself yesterday when I couldn't even start watching an SD video using Kodi just because there were writes going on to a different array disk, and this server doesn't even have a parity drive. So I did a quick test on my test server: the problem is easily reproducible and started with the first v6.7 release candidate, rc1. How to reproduce:
    - Server just needs 2 assigned array data devices (no parity needed, but the same happens with parity) and one cache device, no encryption, all devices btrfs formatted
    - Used cp to copy a few video files from cache to disk2
    - While cp is going on, tried to stream a movie from disk1; it took a long time to start and would keep stalling/buffering
    Tried to copy one file from disk1 (still while cp is going on on disk2), with v6.6.7: [screenshot] with v6.7rc1: [screenshot] A few times the transfer will go higher for a couple of seconds, but most times it's at a few KB/s or completely stalled. Also tried with all unencrypted xfs-formatted devices and it was the same. The server where the problem was detected and the test server have no hardware in common: one is based on an X11 Supermicro board, the test server is X9 series; one server uses HDDs, the test server SSDs, so it's very unlikely to be hardware related.
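    For anyone who wants to try this, a minimal sketch of the reproduction (file names are placeholders, and the dd read stands in for the streaming client):
    # Sustained write from cache to disk2, as described above
    cp /mnt/cache/videos/*.mkv /mnt/disk2/videos/ &
    # While that runs, measure read throughput from disk1;
    # on v6.7rc1 this stalls to a few KB/s, on v6.6.7 it doesn't
    dd if=/mnt/disk1/videos/movie.mkv of=/dev/null bs=1M status=progress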
  2. 17 points
    Sneak peek: Unraid 6.8. The image is a custom "case image" I uploaded.
  3. 11 points
    Granted this has been covered in a few other posts, but I just wanted to have it with a little bit of layout and structure. Special thanks to [mention=9167]Hoopster[/mention] whose post(s) I took this from.

    What is Plex Hardware Acceleration?
    When streaming media from Plex, a few things are happening. Plex will check the following against the device trying to play the media:
    - Media is stored in a compatible file container
    - Media is encoded in a compatible bitrate
    - Media is encoded with compatible codecs
    - Media is a compatible resolution
    - Bandwidth is sufficient
    If all of the above are met, Plex will Direct Play, or send the media directly to the client without it being changed. This is great in most cases as there will be very little, if any, overhead on your CPU. This should be okay in most cases, but you may be accessing Plex remotely or on a device that is having difficulty with the source media. You could either manually convert each file or get Plex to transcode the file on the fly into another format to be played. A simple example: your source file is stored in 1080p. You're away from home and you have a crappy internet connection. Playing the file in 1080p is taking up too much bandwidth, so to get a better experience you can watch your media in glorious 240p without stuttering / buffering on your little mobile device by getting Plex to transcode the file first. This is because a 240p file requires considerably less bandwidth than a 1080p file. The issue is that, depending on which formats you're transcoding from and to, this can absolutely pin all your CPU cores at 100%, which means you're gonna have a bad time. Fortunately, Intel CPUs have a little thing called Quick Sync, which is their native hardware encoding and decoding core. This can dramatically reduce the CPU overhead required for transcoding, and Plex can leverage it using their Hardware Acceleration feature.

    How Do I Know If I'm Transcoding?
    You're able to see how media is being served by first playing something on a device. Log into Plex and go to Settings > Status > Now Playing. As you can see, this file is being direct played, so there's no transcoding happening. If you see (throttled), it's a good sign. It just means that your Plex Media Server is able to perform the transcode faster than is necessary. To initiate some transcoding, go to where your media is playing and click on Settings > Quality > Show All > choose a quality that isn't the default one. If you head back to the Now Playing section in Plex, you will see that the stream is now being transcoded. I have Quick Sync enabled, hence the "(hw)", which stands for, you guessed it, Hardware. "(hw)" will not be shown if Quick Sync isn't being used in transcoding.

    Prerequisites
    1. A Plex Pass - Plex Hardware Acceleration requires one. Test to see if your system is capable before buying a Plex Pass.
    2. An Intel CPU that has Quick Sync capability - search for your CPU using Intel ARK.
    3. A compatible motherboard - you will need to enable the iGPU in your motherboard BIOS. In some cases this may require you to have the HDMI output plugged in and connected to a monitor in order for it to be active. If you find that this is the case on your setup, you can buy a dummy HDMI doo-dad that tricks your unRAID box into thinking that something is plugged in. Some machines like the HP MicroServer Gen8 have iLO / IPMI, which allows the server to be monitored / managed remotely. Unfortunately this means that the server has 2 GPUs and ALL GPU output from the server passes through the ancient Matrox GPU.
    So as far as any OS is concerned, even though the Intel CPU supports Quick Sync, the Matrox one doesn't. =/ You'd have better luck using the new unRAID Nvidia plugin.

    Check Your Setup
    If your config meets all of the above requirements, give these commands a shot; you should know straight away if you can use Hardware Acceleration. Log into your unRAID box using the GUI and open a terminal window, or SSH into your box if that's your thing. Type:
    cd /dev/dri
    ls
    If you see an output like the one above, your unRAID box has Quick Sync enabled. The two items we're interested in specifically are card0 and renderD128. If you can't see them, not to worry, type this:
    modprobe i915
    There should be no return or errors in the output. Now run again:
    cd /dev/dri
    ls
    You should see the expected items, i.e. card0 and renderD128.

    Give your Container Access
    Lastly, we need to give our container access to the Quick Sync device. I am going to passive-aggressively mention that they are indeed called containers and not dockers. Dockers is a company that makes boots and pants and has nothing to do with virtualization or software development, yet. Okay, rant over. We need to do this because the Docker host and its underlying containers don't have access to anything on unRAID unless you give it to them. This is done via Paths, Ports, Variables, Labels or, in this case, Devices. We want to provide our Plex container with access to one of the devices on our unRAID box. We need to change the relevant permissions on our Quick Sync device, which we do by typing into the terminal window:
    chmod -R 777 /dev/dri
    Once that's done, head over to the Docker tab and click on your Plex container. Scroll to the bottom, click on Add another Path, Port, Variable, select Device from the drop-down, and enter the following:
    Name: /dev/dri
    Value: /dev/dri
    Click Save followed by Apply. Log back into Plex and navigate to Settings > Transcoder. Click on the button to SHOW ADVANCED and enable "Use hardware acceleration where available". You can now do the same test we did above by playing a stream, changing its quality to something that isn't its original format and checking the Now Playing section to see if Hardware Acceleration is enabled. If you see "(hw)", congrats! You're using Quick Sync and Hardware Acceleration. [emoji4]

    Persist your config
    On reboot, unRAID will not run those commands again unless we put them in our go file. So when ready, type into the terminal:
    nano /boot/config/go
    Add the following lines to the bottom of the go file:
    modprobe i915
    chmod -R 777 /dev/dri
    Press Ctrl+X, followed by Y, to save your go file. And you should be golden!
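    For reference, a minimal sketch that bundles the checks above into one pass (assumes an Intel iGPU and the stock i915 driver):
    # Load the Intel driver only if the render device isn't present yet
    [ -e /dev/dri/renderD128 ] || modprobe i915
    # Confirm both expected nodes exist, then open permissions for containers
    ls -l /dev/dri/card0 /dev/dri/renderD128 && chmod -R 777 /dev/dri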
  4. 10 points
    Too bad we couldn't time the Linux 4.20 kernel with Unraid 6.6.6.
  5. 8 points
    PSA: it seems openvpn pushed another broken bin, tagged 2.7.3. I get the same error with it as I did with the previously pulled 2.7.2. While they/we try to figure it out, you can change your image to "linuxserver/openvpn-as:2.6.1-ls11" and it should work.
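    If you want to confirm the pinned image pulls cleanly from a terminal first (the tag is the one from the post above):
    docker pull linuxserver/openvpn-as:2.6.1-ls11
    In the unRAID webGUI, the same tag goes in the container's Repository field.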
  6. 6 points
    Support for Nginx Proxy Manager docker container
    Application Name: Nginx Proxy Manager
    Application Site: https://nginxproxymanager.jc21.com
    Docker Hub: https://hub.docker.com/r/jlesage/nginx-proxy-manager/
    Github: https://github.com/jlesage/docker-nginx-proxy-manager
    Make sure to look at the complete documentation, available on Github! Post any questions or issues relating to this docker in this thread.
  7. 6 points
    On Friday, August 30th, using random.org's true random number generator, the following 14 forum users were selected as winners of the limited-edition Unraid case badges:
    #74 @Techmagi
    #282 @Carlos Eduardo Grams
    #119 @mucflyer
    #48 @Ayradd
    #338 @hkinks
    #311 @coldzero2006
    #323 @DayspringGaming
    #192 @starbix
    #159 @hummelmose
    #262 @JustinAiken
    #212 @fefzero
    #166 @Andrew_86
    #386 @plttn
    #33 @aeleos
    (Note: the # corresponds to the forum post # selected in this thread.)
    Congratulations to all of the winners and a huge thank you to everyone else who entered the giveaway and helped us celebrate our company birthday! Cheers, Spencer
  8. 6 points
    Upgraded the ML30 Gen9 from 6.6.4 with no apparent issues to functionality. Waiting for version 6.6.6 next. 👹
  9. 5 points
    I have made an updated video guide for setting up this great container. It covers setting up the container, port forwarding, and setting up clients on Windows, macOS, Linux (Ubuntu MATE) and on cell phones - Android and iOS. Hope this guide helps people new to setting up OpenVPN.
  10. 5 points
    I'm on holiday with my family. I have tried to compile it several times, but there are some issues that need working on. It will be ready when it's ready; a week for something that is free is no time at all. We're not releasing the source scripts for reasons I outlined in the original script, but if someone isn't happy with the timescales that we work on, then they are more than welcome to compile and create this solution themselves and debug any issues. The source code is all out there. I've made my feelings about this sort of thing well known before, but I will outline them again. We're volunteers with families, jobs, wives and lives to lead. Until the day arrives when working on this stuff pays our mortgages, feeds our kids and allows us to resign from our full-time jobs, things happen at our pace and our pace only. We have a Discord channel that people can join, and if they want to get involved then just ask, but strangely, whenever I offer, the standard reply is that people don't have enough free time. If that is the case, fine, but don't assume any of us have any more free time than you. We don't; we just choose to dedicate what little free time we have to this project.
  11. 5 points
    Most people think "bitrot" refers to flaws in the physical storage media developing over time, such that data written to a storage block at time 0 is not the same data that is read at time 1, and this fact goes undetected. Indeed storage media does degrade over time; however, storage devices also incorporate powerful error detection and correction circuitry. This makes it mathematically impossible for errors due to degrading media to go undetected and unreported as an "unrecoverable media error", unless there is a firmware error or other physical defect in the device. In virtually all h/w platforms, all physical data paths are protected by some kind of error detection scheme. For example, if a DMA operation is initiated to read data from RAM and write it to a storage controller over the PCI bus, all the various data paths are protected with extra check bits, either in parallel with parallel data paths, or using checksums. This means (again, barring firmware errors) random bit errors in data leaving memory and arriving at the storage device end up getting detected before ever being written to the media. There is ONE subsystem in modern PC computer systems that is typically not protected, however: system RAM. If you have a file in RAM and along comes a random alpha particle (for example) and flips a bit, nothing detects this - in btrfs or zfs (for example) the s/w happily calculates an (incorrect) checksum and the h/w happily writes it all the way to the media. Until one day you read the file, see corruption and say, "Damn you crappy storage device, you have bitrot!", when all along it was written that way. If you really care about flawless storage, and indeed a better system in general, you must use ECC RAM for starters. Also use quality components everywhere else, especially the PSU. That will be your best defense against "bitrot".
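    If you want end-to-end detection at the file level regardless of your hardware, a user-level checksum sweep is one option. A minimal sketch (the paths here are placeholders):
    # Record checksums while the data is known good...
    find /mnt/user/photos -type f -print0 | xargs -0 sha256sum > /mnt/user/backups/photos.sha256
    # ...and verify later; with --quiet only failures are printed
    sha256sum --check --quiet /mnt/user/backups/photos.sha256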
  12. 4 points
    Note: To view the application lists before installing unRaid, click HERE

    Community Applications (aka CA)
    This thread is rather long (and is mostly all off-topic), and it is NOT necessary to read it in order to utilize Community Applications (CA). Just install the plugin, go to the Apps tab and enjoy the freedom. If you find an issue with CA, then don't bother searching for answers in this thread, as all issues (when they have surfaced) are generally fixed the same day that they are found... (But at least read the preceding post or two on the last page of the thread.) This is, without question, the best supported plugin / addon in the universe - on any platform. With its simple interface and ease of use, you will be able to find and install any of the unRaid docker or plugin applications, and also optionally gain access to the entire library of applications available on dockerHub (~1.8 million).

    INSTALLATION
    To install this plugin, paste the following URL into the Plugins / Install PlugIn section:
    https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg
    After installation, a new tab called "Apps" will appear on your unRaid webGUI. To see what the various icons do, simply press Help or the (?) on unRaid's tab bar. Note: all screenshots in this post are subject to change as Community Applications continues to evolve.
    - Easily search or browse applications
    - Get full details on the application
    - Easily reinstall previously installed applications
    - And much, much more (including the ability to search for and install any of the containers available on dockerHub (1,000,000+))

    USING CA
    CA also has a dedicated Settings section (click Settings) which will let you fine-tune certain aspects of its operation. NOTE: The following video was made prior to the current user interface, so the video will look significantly different from the plugin itself. But it's still worth a watch. Buy Andrew A Beer!

    Note that CA is always (and always will be) compatible with the latest stable version of unRaid, and the latest/next version of unRaid. Intermediate versions of various release candidates may or may not be compatible (though they usually are - but if you have made the decision to run unRaid Next, then you should also ensure that all plugins and unRaid itself, not just CA, are always up to date). Additionally, every attempt is made to keep CA compatible with older versions of unRaid. As of this writing, CA is compatible with all versions of unRaid from 6.4 onward.

    Cookie Note: CA utilizes cookies in its regular operation. Some features of CA may not be available if cookies are not enabled in your browser. No personally identifiable information is ever collected, no cookies related to any software or media stored on your server are ever collected, and none of the cookies are ever transmitted anywhere. Cookies related to the "Look & Feel" of Community Applications will expire after a year. Any other cookies related to the operation of Community Applications are automatically deleted after they are used (usually within a minute or two).

    Note: Some features (in particular, installations from a dockerHub search) may require popups to be enabled in your browser (different browsers enforce this requirement differently). You should whitelist your server to allow popups within your browser if this becomes an issue.
  13. 4 points
    Here is a video that shows what to do if you have a data drive that fails and you want to swap/upgrade it, and the disk that you want to replace it with is larger than your parity drive. So this shows the swap parity procedure. Basically, you add the new larger drive, then have Unraid copy the existing parity data over to the new drive. This frees up the old parity drive so it can then be used to rebuild the data of the failed drive onto the old parity drive. Hope this is useful.
  14. 4 points
    Well, it's finally happened: Unraid 6.x Tunables Tester v4.0 The first post has been updated with the release notes and download. Paul
  15. 4 points
    Here is my banner. Think it fits unraid well!
  16. 4 points
    It's not an issue with stock Unraid; the issue is that there isn't a patch available for the runc version. Docker was recently updated for security reasons, and Nvidia haven't caught up yet. Sent from my Mi A1 using Tapatalk
  17. 4 points
    Hi all, Would be lovely to have settings to configure access to shares on an individual user's page as well. Depending on the use case, it's easier to configure things on a per-share basis or a per-user basis. Would be nice to have the option; see wonderfully artistic rendering below:
  18. 4 points
    Programs run as abc, not www-data. Pretty sure you need to specify the PHP interpreter too. So it should look like this:
    sudo -u abc php /config/www/nextcloud/occ db:convert-filecache-bigint
  19. 4 points
    Hello, I have some cron jobs running during the night and morning every day. None of my cron jobs have run over the night; I upgraded to 6.6.4 yesterday. No mover, no added backup script (using User Scripts), no daily trim and no Backup / Restore from CA.

    cat /etc/cron.d/root
    # Generated docker monitoring schedule:
    10 */6 * * * /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerupdate.php check &> /dev/null
    # Generated system monitoring schedule:
    */1 * * * * /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
    # Generated mover schedule:
    0 3 * * * /usr/local/sbin/mover |& logger
    # Generated plugins version check schedule:
    10 */6 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugincheck &> /dev/null
    # Generated Unraid OS update check schedule:
    11 */6 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/unraidcheck &> /dev/null
    # Generated ssd trim schedule:
    5 4 * * * /sbin/fstrim -a -v | logger &> /dev/null
    # Generated system data collection schedule:
    */1 * * * * /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1 &>/dev/null

    root@Server:/etc# cd cron.daily/
    root@Server:/etc/cron.daily# v
    total 12
    -rwxrwxrwx 1 root root  76 Nov  8 00:08 fix.common.problems.sh*
    -rwxr-xr-x 1 root root 129 Apr 13  2018 logrotate*
    -rwxrwxrwx 1 root root  76 Nov  8 00:08 user.script.start.daily.sh*
    root@Server:/etc/cron.daily# cd ..
    root@Server:/etc# cd cron.hourly/
    root@Server:/etc/cron.hourly# v
    total 4
    -rwxrwxrwx 1 root root 77 Nov  8 00:08 user.script.start.hourly.sh*
    root@Server:/etc/cron.hourly# cd ../cron.weekly/
    root@Server:/etc/cron.weekly# v
    total 4
    -rwxrwxrwx 1 root root 77 Nov  8 00:08 user.script.start.weekly.sh*
    root@Server:/etc/cron.weekly# cd ../cron.monthly/
    root@Server:/etc/cron.monthly# v
    total 4
    -rwxrwxrwx 1 root root 78 Nov  8 00:08 user.script.start.monthly.sh*
    root@Server:/etc/cron.monthly#

    I just added my own backup script using "User Scripts" lately. No more, no less. Stuff is supposed to be running here, and nothing is, except for Fix Common Problems, but that might be because of the reboot after the upgrade to .4. My added script:
    Contents:
    #!/bin/bash
    cd /mnt/user/AppdataBackup
    /usr/local/bin/duplicacy prune -keep 0:5
    /usr/local/bin/duplicacy prune
    /usr/local/bin/duplicacy backup -stats -hash
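    Not part of the original post, but a quick sanity check when nothing in cron fires is to confirm the cron daemon itself survived the upgrade and reboot:
    # Is crond running at all?
    ps -ef | grep '[c]rond'
    # Look for recent cron activity in the syslog
    grep -i cron /var/log/syslog | tail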
  20. 3 points
    Which may or may not mean it's a good idea to push that version to a production environment. "Stable" UniFi software has caused major headaches in the past; I'd much rather wait until it's been running on someone else's system for a while before I trust my multiple sites to it. If the wifi goes down, it's a big deal. I'd rather not deal with angry users.
  21. 3 points
    Why is this your first post to our forum? There are solutions to the "dealbreaker" you mention, and probably solutions to any other problem you might encounter. There are a lot of friendly and helpful people here on this forum that give FREE support. Why haven't you taken advantage of it? There is a plugin that will run mover based on how full the cache is, here: https://forums.unraid.net/topic/70783-plugin-mover-tuning/ Another solution to your problem is more careful consideration of what you cache. Mover can't move to the slower array as fast as you can fill the faster cache, regardless of how frequently mover runs. So not caching some user shares, or not caching very large transfers, for example, are ways to deal with that. I don't get why you haven't taken advantage of our forum. This user community is one of the very best on the internet, and one of the very best features of Unraid.
  22. 3 points
    Attached is a debugged version of this script, modified by me.
    - I've eliminated almost all of the extraneous echo calls.
    - Many of the block outputs have been replaced with heredocs.
    - All of the references to md_write_limit have either been commented out or removed outright.
    - All legacy command substitution has been replaced with modern command substitution (see the illustration below).
    - The script locates mdcmd on its own.
    https://paste.ee/p/wcwWV
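    For anyone unfamiliar with the command substitution change mentioned above, a quick illustration (the variable and path are made up):
    # Legacy form (backticks): hard to nest and easy to mangle
    disks=`ls /dev/md*`
    # Modern form: nestable and more readable
    disks=$(ls /dev/md*)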
  23. 3 points
    I've bumped Unmanic to 0.0.1-beta5. This includes the following changes:
    - Modify some historical logging of useful stats. This sets us up to start adding extra info, like ETA during the conversion process, as well as the stats mentioned below.
    - Adds a new "See All Records" screen. Any suggestions for stats that you would like on this screen would be great. Note that due to the changes in logging, only newly converted items will show here. Old stuff won't, due to missing statistics data. Sorry.
    - Create backups of settings when saving. There were some cases where the settings were invalid but still saved; this corrupted our data and made it impossible to read. So now we test prior to committing changes.
    - FFMPEG was causing some errors on certain files. If you have noted any conversion failures in the past, can you please re-test with this version to confirm whether it is now resolved.
    - Log rotation. If you are debugging, you are spewing a crap ton of data to the logs. This update rotates the logs at midnight every day and keeps them for 7 days. Even if you are not debugging, this is much better.
    The next milestone is to add extended functionality to the settings: https://github.com/Josh5/unmanic/milestone/4
    This will hopefully be the last major tidy-up of core functionality. I think that once this milestone is complete we can safely pull this out of beta and look at things like HW decoding and improving on the data that is displayed throughout the WebUI.
  24. 3 points
    I'm on it, guys. Looks like there has been a switch to .NET Core, which requires changes to the code, which I've now done. New image now building.
  25. 3 points
    Hi Docker forum, just thought I'd share with you all some material design icons that I made today for the containers I use in my system: https://imgur.com/a/ehRQ3 I couldn't stand the default Smokeping icon looking so bad... So while I only wanted to change that single icon, it looked so nice that I had to rip out all of the other icons to make them look uniform. Feel free to use any of these - I could probably add to this album if anyone really wants some more done in a similar style. (The Plex icon reminds me a lot of LSIO's Plex Request logo, but it was the best I could do!) They're all 512x512 .png files & look wicked on the unRAID docker page.
  26. 3 points
  27. 3 points
    BTW, just to be clear to anybody here: this is no longer the case. DNS is 100% used over the VPN only; the only time it's not is for the initial lookup of the endpoint you are connecting to (which is then cached in the hosts file). If the VPN goes down, name queries do not go over the LAN (iptables is set not to allow port 53); once the VPN tunnel is re-established (by looking up the endpoint using the hosts file), name server queries are then resumed over the VPN tunnel. Zero leakage.
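    A rough sketch of the iptables idea described above (the interface names tun0/eth0 are assumptions for illustration, not the container's actual rules):
    # Allow DNS out through the VPN tunnel only...
    iptables -A OUTPUT -o tun0 -p udp --dport 53 -j ACCEPT
    iptables -A OUTPUT -o tun0 -p tcp --dport 53 -j ACCEPT
    # ...and drop any DNS that tries to leave via the LAN interface
    iptables -A OUTPUT -o eth0 -p udp --dport 53 -j DROP
    iptables -A OUTPUT -o eth0 -p tcp --dport 53 -j DROP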
  28. 3 points
    What exactly would you have us do? Send a strongly worded email to Gigabyte?
  29. 3 points
    When I entered the docker container over SSH and ran the following command, the error message was gone:
    sudo -u abc php /config/www/nextcloud/occ db:add-missing-indices
    Only the 'The "Referrer-Policy" HTTP header is not set to "no-referrer", "no-referrer-when-downgrade", "strict-origin" or "strict-origin-when-cross-origin". This can leak referer information.' message left to go.
  30. 3 points
    Thank you, good to have a pat on the back once in a while between the kicks in the rear
  31. 2 points
    Since custom banners are coming soon I thought maybe some folks would like to share.
  32. 2 points
    You should never paste random code from the Internet into your computer without understanding what it does... but that aside, if you open up a terminal and paste in the following line:
    wget https://gist.githubusercontent.com/ljm42/74800562e59639f0fe1b8d9c317e07ab/raw/387caba4ddd08b78868ba5b0542068202057ee90/fix_docker_client -O /tmp/dockfix.sh; sh /tmp/dockfix.sh
    then the fix should be applied until you reboot.
  33. 2 points
    There are a lot of us who do not really trust MS, Firefox and Chrome to be our password managers! They have already told us that they snoop into our personal lives, collect as much data about every one of us as they can accumulate, and plan on marketing that information. Perhaps we are paranoid, but with their history and business plan, I would rather err on the paranoid side than truly trust them with 'protecting' the passwords to my financial and personal life!
  34. 2 points
    So I have been watching this thread for a while... as I was the guy that originally had the problem. Since I downgraded back to 6.6.7, I have had zero problems with database corruption. I have NOT changed the data to point to a single disk, although I'm planning on doing that this weekend and testing. From the answers here, though, that is not going to fix the issue. The corruption is still occurring for some people. I've read through some of the other threads that are "just Plex" or some other application... and people are pointing them back to the application creators for fixes. It is NOT just happening for me with Plex, but with every application that uses an sqlite database. Like some of you, I'm questioning things in the kernel or something else that changed in 6.7. And I'm not crazy about updating again until I see an iteration of the OS that provides some fix. Just my thoughts... and yes, I am an absolute newbie to the system. Less than 1 year. Thanks, rm
  35. 2 points
    I used the following; however, I am unable to provide the Time Machine screenshot as I did not configure my VPN to allow discovery. -MW
  36. 2 points
    Any LSI with a SAS2008/2308/3008 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these latter ones need to be crossflashed.
  37. 2 points
    I enjoyed the Podcast. It takes time and energy to do these types of things so I wanted to drop a note to say thank you. @jonp
  38. 2 points
    Something is preventing Unraid from unmounting the disks:
    Apr 30 01:27:07 Unraid-Server emhttpd: shcmd (416): umount /mnt/user
    Apr 30 01:27:07 Unraid-Server root: umount: /mnt/user: target is busy.
    Hence the unclean shutdowns.
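    Not from the original post, but one way to hunt down what's keeping the mount busy before a shutdown:
    # List processes with files open under /mnt/user
    fuser -mv /mnt/user
    # lsof on a mount point lists all open files on that filesystem
    lsof /mnt/user 2>/dev/null | head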
  39. 2 points
    Share your background banners with the rest of the Unraid community here!
  40. 2 points
    ^This. In fact, you are warned a number of times not to write to the disk you're trying to recover. I haven't used it on XFS disks, but I've successfully used it to recover photos from a corrupt SD card. First you need to choose which OS you're going to run it on. My MacBook Pro has an SD card reader, so I chose the macOS version. Other versions are available to run under Windows and Linux. They can all read the same file systems. You then need to choose which edition of the software you want to download. The Standard Recovery edition is likely to be the one you want. It's free to install and test, but unless you buy a licence (€49.95 for personal use) you won't be able to recover any but the smallest of files. So install it on your PC and let it scan the disk. This will take a long time, but you get an indication of how long it's going to take and a progress bar. It will show you what it finds as a reconstructed virtual file system, and then you can decide whether it's worth paying the money for the licence. I decided it was. You simply select the files you want to recover and choose where to save them.
  41. 2 points
    Time to start from scratch. Pro tip: once you have a base img up and running with VNC, make a backup copy on the array. That way, if you mess it up later, it only takes a few minutes to copy over a known working good img vs starting over again (learned this the hard way a few times).
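    For example (the paths are hypothetical - shut the VM down first so the copy is consistent):
    # Keep a known-good copy of the vdisk on the array
    cp /mnt/user/domains/Win10/vdisk1.img /mnt/user/backups/Win10-vdisk1-base.img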
  42. 2 points
    Any LSI Host Bus Adapter based on the LSI SAS2008/2308/3008 chipset is among the recommended SATA/SAS PCIe cards. You can find them inexpensively on eBay, but avoid Chinese knock-offs. I have the Dell H310 (LSI 9211-8i clone) in my server: eight additional SATA ports for $30. Commonly used cards are:
    - LSI 9211-8i (PCIe 2.0)
    - LSI 9207-8i (PCIe 3.0)
    - LSI 9300-8i (PCIe 3.0)
    - Dell H200/H310 (PCIe 2.0)
    - IBM M1015 (PCIe 2.0)
    Flash the card to IT mode without the BIOS. Many cards can be found on eBay pre-flashed for $50-$60 if you don't want to deal with that.
  43. 2 points
    Would be nice to see this at the bottom, like it is on the desktop. As it stands now, you have to scroll back to the top, click the hamburger, and some other stuff. It's a new year and I'm trying to limit my mindless scrolling, and going back to the top is an easy way to cut down on that. How about it?
  44. 2 points
    I custom-built a system designed to deliver Unraid VMs. It's a headless system that has multiple dedicated GPUs and also runs lots of dockers and a VM for my IP cameras. I also have a VM for an HTPC that is connected to my lounge room. It's a pretty amazing Threadripper-based system, and Unraid really does make it easy to manage and expand as our needs change. I told myself that I would buy a license when my trial expired... the system hasn't crashed or been unstable, so I barely noticed that my trial expired 3 months ago... It still did not crash, but I decided today was the day I would support a pretty fantastic product. Thanks, and keep up the great work!
  45. 2 points
    Modify your /boot/config/go file and add the following in between the first line and the last. You will need to reboot for these changes to take effect, but you can also run both of these commands after editing your go file so you won't have to reboot.
    modprobe i915
    chmod 777 /dev/dri/*
    Add the device /dev/dri to your Emby Docker. That's all you need to do. I actually have two Dockers, Plex and HandBrake, that both use Quick Sync. I've used Emby before and it works fine this way.
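    If you'd rather make the edit non-interactively, appending the two lines from a terminal works too (a sketch, assuming the stock go file location):
    # Append the driver load and permissions fix to the go file
    cat >> /boot/config/go <<'EOF'
    modprobe i915
    chmod 777 /dev/dri/*
    EOF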
  46. 2 points
    Unraid version 6.6.5 has a function to calculate container sizes and associated usage. When the "writable" section becomes large, it is usually an indication that a path is set wrong and data is being written inside the container. You can control the log file sizes by enabling log rotation in the Docker settings.
  47. 2 points
    New to this but had a crack at it
  48. 2 points
    The 'tss' user and group do not exist. Must be something recently added to libvirt/qemu. We'll keep an eye on it. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=904787
  49. 2 points
    Just a note for anyone that is trying to get Wallabag - I have finally had some luck getting the official docker repository set up on unRAID. https://hub.docker.com/r/wallabag/wallabag/ For smaller installs, the basic version running on SQLite is quite nice. However, configuring it took some learning for me. When importing the container through CA, I mapped two different locations which were noted on the dockerfile: /var/www/wallabag/data was mapped to /mnt/user/appdata/wallabag/data, and /var/www/wallabag/web/assets/images was mapped to /mnt/user/appdata/wallabag/data. Then I had CA create the container. Finally, I came up against a huge hurdle with the initial release of Wallabag 2.3.1 - it kept loading with no CSS, essentially unformatted text only on the screen. It turns out that they may have made a mistake in how your individual URL for Wallabag gets populated on the screen. So, I noticed that you can set that variable by editing the container. In the Wallabag container, click "Advanced Options", then in the "extra parameters" section I added this line, as it was mentioned in the Repo Info:
    -e SYMFONY__ENV__DOMAIN_NAME=https://my.wallabag.url
    As I'm using Wallabag behind a reverse proxy, I found that I needed my domain name rather than the IP address in that new parameter. Additionally, I had to change the "WebUI" to match the same address. Finally, it worked! Official Wallabag repo on unRAID, all nice and pretty. A little note of caution here, as this post may exist for a while: I totally feel like the DOMAIN_NAME issue is a simple mistake, as the current docker build is only 8 days old as of this post. Changing that may not be a requirement in the future, but having to work around it was a great learning experience!
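    For reference, a rough docker run equivalent of the setup described above (the host path for images and the port mapping are my assumptions, not from the post - adjust to your system):
    docker run -d --name=wallabag \
      -v /mnt/user/appdata/wallabag/data:/var/www/wallabag/data \
      -v /mnt/user/appdata/wallabag/images:/var/www/wallabag/web/assets/images \
      -e SYMFONY__ENV__DOMAIN_NAME=https://my.wallabag.url \
      -p 8080:80 \
      wallabag/wallabag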
  50. 2 points
    Also, instead of putting in an absolute size, you can also do this: qemu-img resize vdisk1.img +5G That would simply grow the image from its current size by 5GB.
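    You can confirm the result afterwards with qemu-img as well:
    # Show the image's format, virtual size and actual disk usage
    qemu-img info vdisk1.img
    Note that the guest OS still has to extend its partition and filesystem before it can use the added space.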