Leaderboard

Popular Content

Showing content with the highest reputation on 04/12/21 in all areas

  1. Just a friendly reminder, everyone: discussing how to circumvent other vendors' licensing will get content removed and can get you banned from this forum. Tread lightly here.
    3 points
  2. Please provide the instructions for doing this in the Official Unraid Manual (the one you get to by clicking the lower right-hand corner of the GUI) and not just in the release notes of the version where the changes are introduced. Remember that many folks are two or three releases behind, and when they do upgrade they can never seem to locate the instructions, which results in unneeded queries that the folks who provide most of the support for Unraid have to deal with. Having an updated manual section that covers these changes makes pointing those folks to what they will have to change a much simpler task... EDIT: I would actually prefer that you link directly to the manual sections in the change notes. That way the information will be available in the manual when the changes are released!
    2 points
  3. PLEASE - PLEASE - PLEASE, EVERYONE POSTING IN THIS THREAD: IF YOU POST YOUR XML FOR THE VM HERE, PLEASE REMOVE/OBSCURE THE OSK KEY AT THE BOTTOM. IT IS AGAINST THE RULES OF THE FORUM FOR THE OSK KEY TO BE POSTED. THANK YOU. The first macinabox has now been replaced with a newer version, as below. Original Macinabox October 2019 -- no longer supported. New Macinabox added to CA on December 09 2020. Please watch this video for how to use the container; it is not obvious from just installing the container. It is really important to delete the old macinabox, especially its template, else the old and new templates combine. Whilst this won't break macinabox, you will have old variables in the template that are not used anymore. I recommend removing the old macinabox appdata as well.
    1 point
  4. This release contains bug fixes and minor improvements. To upgrade: First create a backup of your USB flash boot device: Main/Flash/Flash Backup. If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page. If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page. If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg Bugs: If you discover a bug or other issue in this release, please open a Stable Releases Bug Report. Thank you to all Moderators, Community Developers and Community Members for reporting bugs, providing information and posting workarounds. Please remember to make a flash backup! Edit: FYI - we included some code to further limit brute-force login attempts; however, fundamental changes to certain default settings will be made starting with the 6.10 release. Unraid OS has come a long way since it was originally conceived as a simple home NAS on a trusted LAN. It used to be that all protocols/shares/etc. were by default "open" or "enabled" or "public", and if someone was interested in locking things down they would do so on a case-by-case basis. In addition, it wasn't so hard to tell users what to do because there weren't that many things that had to be done. Let's call this approach convenience over security. Now, we are a more sophisticated NAS, application and VM platform. I think it's obvious we need to take the opposite approach: security over convenience. What we have to do is lock everything down by default, and then instruct users how to unlock things. For example: Force the user to define a root password upon first webGUI access. Make all shares not exported by default. Disable SMBv1, ssh, telnet, ftp, nfs by default (some are already disabled by default). Provide a UI for ssh that lets users upload a public key, with a checkbox to enable keyboard password authentication. Etc. We have already begun the 6.10 cycle and should have a -beta1 available early next week (hopefully).
    1 point
  5. DEVELOPER UPDATE: 😂 But for real, guys, I'm going to be stepping away from the UUD for the foreseeable future. I have a lot going on in my personal life (divorce among other stuff) and I just need a break. This thing is getting too large to support by myself. And it is getting BIG. Maybe too big for one dash. I have plenty of ideas for 1.7, but I'm not even sure if you guys will want/use them. Not to mention the updates that would be required to support InfluxDB 2.X. At this point, it is big enough to have most of what people need, but adaptable enough for people to create custom panels to add (mods). Maybe I'll revisit this in a few weeks/months and see where my head is at. It has been an enjoyable ride and I appreciate ALL of your support/contributions since September of 2020. That being said, @LTM and I (mostly him LOL) were working on a FULL documentation website. Hey man, please feel free to host/release/introduce that effort here on the official forum. I give you my full blessing to take on the "support documentation/Wiki" mantle, if you still want it. I appreciate your efforts in this area. If LTM is still down, you guys are going to be impressed! I wanted to say a huge THANK YOU to @GilbN for his original dash, which 1.0-1.2 was based on, and ALL of his help/guidance/assistance over the last few months. It has truly been a great and pleasurable experience working with you man! Finally, I want to say a huge thanks to the UNRAID community and its leadership @SpencerJ @limetech. You guys supported and shared my work with the masses, and I am forever grateful! I am an UNRAIDer 4 LIFE! THANKS EVERYONE!
    1 point
  6. Well, I'm good, but I didn't know I was that good! 🤣
    1 point
  7. Honestly this is pretty cool and I'm going to use it myself, thank you 😁 Just as a warning to anyone who decides to use this, it's definitely worth taking note of the above: This script also assumes your Default VM storage path is /mnt/user/domains/; if it's not, it will create a share called domains unless you update the references to this path. Also note that VM Manager needs to be started before you run the script if you are using the default Libvirt storage location, which is a disk image that gets mounted when you start VM Manager. Otherwise, when it gets started, /mnt/user/domains/save will be bound to a location that no longer exists, and you'll possibly either get errors when pausing a VM or the local location won't have any data saved in it. So if you accidentally run the script with VM Manager stopped, you need to run the script again after starting it, or specifically the line: mount --bind /mnt/user/domains/save /var/lib/libvirt/qemu/save Also, to revert to the original option I'm pretty sure you can just restart; or make sure no VMs are paused and run: sed -i -e "s/domain_save/domain_suspend/" /usr/local/emhttp/plugins/dynamix.vm.manager/include/VMajax.php You can then safely delete /mnt/user/domains/save, which will likely be empty. I'm not going to include the code for deletion here in case someone makes a typo and deletes the contents of their array. Also, there is a possibility that this will get broken / possibly become slightly dangerous in a new release; to be clear, this is tested as working on versions 6.8.3 - 6.9.2. If anyone is really interested in making the change permanent, it appears relatively non-breaking and straightforward to implement, so like @bonienl said, best to make a feature request. This mount option has the benefit of not wasting space in libvirt.img, so if someone makes a feature request definitely reference @uek2wooF's script, because it's awesome.
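     If you want to guard against running it at the wrong time, something like this works as a minimal sketch (the paths and the sed revert are the ones from above; the pgrep check and the mkdir are my own additions, adjust as needed):
     # only apply the bind mount once VM Manager (libvirtd) is actually running,
     # otherwise libvirt.img will be mounted over it later and the bind gets lost
     if pgrep -x libvirtd > /dev/null; then
       mkdir -p /mnt/user/domains/save
       mount --bind /mnt/user/domains/save /var/lib/libvirt/qemu/save
     else
       echo "Start VM Manager first, then re-run this script"
     fi
     # to revert later (with no VMs paused):
     # sed -i -e "s/domain_save/domain_suspend/" /usr/local/emhttp/plugins/dynamix.vm.manager/include/VMajax.php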
    1 point
  8. That was the plan and it seems to work. It should be the default save since the others aren't really useful for much. Sorry for the delayed response, I don't get email notifications for some reason.
    1 point
  9. Ah okay, zenstates in go isn't necessary for your CPU.
    1 point
  10. 1 point
  11. Yes SimonF, the normal console was functional.
    1 point
  12. I upgraded from 6.9.1 to 6.9.2 on both my servers with no problems. In fact, I thought the upgrade went faster than normal. Not complaining one bit. You guys are awesome; thank you for the continued support of Unraid.
    1 point
  13. Howdy SimonF.....anything to help out the team!
    1 point
  14. Problem solved. I'm documenting this in case someone runs into the same issue. The last 2 lines had already been removed from the go file. touch /boot/config/modprobe.d/i915.conf worked, and I've rebooted since just to be safe, but the graphics card was still not working with Intel GPU Top or GPU Statistics. I then noticed that my Windows VM was set to use the iGPU. I stopped the VM, changed this to VNC and restarted the VM. Boom. Everything is working. So there you go: if you're having a similar problem, make sure your VMs aren't using the iGPU.
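     A quick way to confirm the host has actually claimed the iGPU once no VM is holding it (assuming the empty i915.conf did its job and the driver loaded):
     lsmod | grep i915     # confirms the i915 driver is loaded
     ls -l /dev/dri        # card0 / renderD128 should appear once the iGPU belongs to the host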
    1 point
  15. No worries man take your time. You are doing awesome work and I appreciate it.
    1 point
  16. You can have the logs written to the USB stick. I always have that enabled.
    1 point
  17. For those with the startup script errors: I just re-downloaded a configuration file from the PIA website, deleted the old one, and restarted Deluge, and that seems to have fixed it. It looks to me like they renamed it to drop the "nextgen-" from the file name. I don't know if that matters.
    1 point
  18. I answered you here: https://forums.unraid.net/topic/105609-migrate-synology-docker-to-unraid/?do=findComment&comment=975782
    1 point
  19. That's all you need. You don't need the containers themselves; they contain only the app, which can be re-installed at any time. The only requirement is to use the same docker container from the same maintainer and the same container version. Creating a tar would be the first step. The only catch could be the ownership of the files. Unraid needs user id 99 and group id 100, and Synology probably uses different ids. So it could be that you need to change the owner after extracting the files, as follows: chown -R 99:100 /mnt/user/appdata/path_of_your_container
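     As a rough sketch of that flow (the /volume1/docker source path and the "nextcloud" folder name are just placeholders for wherever your container's appdata actually lives):
     # on the Synology: pack the container's config folder, preserving permissions
     tar -czf nextcloud_appdata.tar.gz -C /volume1/docker nextcloud
     # on Unraid: unpack into the appdata share and fix ownership for the nobody/users mapping
     tar -xzf nextcloud_appdata.tar.gz -C /mnt/user/appdata
     chown -R 99:100 /mnt/user/appdata/nextcloud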
    1 point
  20. Reverting back to 6.8.3: If you have a cache disk/pool it will be necessary to either: restore the flash backup you created before upgrading (you did create a backup, right?); or, on your flash, copy 'config/disk.cfg.bak' to 'config/disk.cfg' (restoring the 6.8.3 cache assignment); or manually re-assign the storage devices assigned to cache back to cache. This is because, to support multiple pools, the code detects the upgrade to 6.9.0 and moves the 'cache' device settings out of 'config/disk.cfg' and into 'config/pools/cache.cfg'. If you downgrade back to 6.8.3 these settings need to be restored.
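     For the second option, the flash drive is mounted at /boot on a running server, so the copy can be done from the terminal (sketch only; check that the .bak file actually exists first):
     ls -l /boot/config/disk.cfg*                         # confirm disk.cfg.bak is present
     cp /boot/config/disk.cfg.bak /boot/config/disk.cfg   # restores the 6.8.3 cache assignment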
    1 point
  21. Hi @lnxd, thanks for checking; it at least confirms I'm not doing anything obviously wrong and it should "just work". FYI, no CPU maxing out, just for the first few seconds during BIOS start and then it all levels out. I have previously done the vendor ID, but I will re-do it again, as well as the other options, as that was when I first started troubleshooting so I may have missed something; I will let you know how I get on. Disabling SR-IOV in the BIOS was a line of thought I had been down before, but it made no difference. @giganode Unfortunately yes, using the TechPowerUp vbios (or completely removing the vbios) had no effect on the issue. First VM start, no issues; all subsequent reboots/starts give me a Code 43 until the host is restarted.
    1 point
  22. The server seems fine for his "workload". Within your NIXDESK I would pass through the 1TB M.2 directly to a Win 10 VM. That way you are able to use your Win 10 installation within a VM, or if you want to boot directly from it, you can still just boot directly from it 🙂 It should then look like this: With this setup you can use your already installed and configured Windows within Unraid as a VM, and you are still able to boot directly from it (if you want to). Keep in mind that only one VM at a time can be booted with a given passed-through graphics card, USB controller and so on... Maybe for the Ubuntu VM it is enough to just use VNC as the graphics card and emulated USB.
    1 point
  23. Not arguing against docker-compose, but as a general rule, docker is docker is docker, and there aren't really any "Unraid Specific" images present.
    1 point
  24. I had tried that. I'm also fairly sure I had done it, but I learned here that CA may not clean up properly. I'll have to check tonight after work. I'll report back once I've tested it. --------edit-------- I deleted the container again and removed the folder with CA. This time I checked, and the folder really is deleted. Reinstalled it, and again I couldn't load the page. I'm doing all of this via Safari on a Mac. Then, without changing anything, I accessed the page in Chrome in parallel and it works! After that it also worked in Safari. I don't get it, but it's running. --------edit2------- Hmmm, I thought I could switch the language to German. In the config there is the line: language: "en", and I entered "de". That doesn't work, though. I changed it back to "en" and now nothing works anymore. Very strange. Even if it's not called "de", it should at least work again when set back to "en".
    1 point
  25. I would like to apologize to everyone for the way I posted my frustrations. I truly regret it, and I deserve to. After a couple of days of testing, I was very happy that the UnRAID OS never failed in the slightest. The WebUI might be a different story. To be brief: I provisioned an SSL certificate via UnRAID and then didn't want the "public" record of my WebUI address being available (least of all via "unraid.net"). My valid/proven "unraid.mydomain.tld" certificate worked (for a bit?), but it has no DNS/public record. I'm pretty sure this screwed up the UnRAID flash drive's ability to phone home and confirm my valid license. The "account" comment was regarding the UnRAID WebUI pushing the "key.unraid.net" certificate. Again, I'm very sorry. I keep thinking this stuff is simple, but it's not.
    1 point
  26. ...I don't use it myself either, but - as with IPv4 - this is not a question of "finding" one.... You should enable IPv6 on the Unraid host (under Network Settings, network protocol), and then you should be able to assign an IPv6 address from your provider's IPv6 pool. If in doubt, your router knows which addresses are still free, not yet assigned, and also outside the DHCPv6 pool.
    1 point
  27. Phew, I don't use IPv6 in Unraid, so I'm at a loss there myself right now.
    1 point
  28. I connected all the power cords for the cpu and the motherboard, and later found out that it was a memory problem.
    1 point
  29. For Unraid, I think the obvious configuration would be an array with 1 parity drive using the 3.5" HDDs, giving you 12TB of storage, and then the 1TB NVMe as a cache pool. The appdata (used by Docker containers) and domains (used by default for VMs) shares will automatically be configured to use the cache pool.
    1 point
  30. Hi @Masterwishx! That's correct: when mounting gdrive using remote SMB, the host machine with the Google Drive application must be online for Unraid to find and use it. There are some containers in Community Applications that allow you to access your gdrive, but after looking at one or two of them, I was not impressed with the security workarounds incorporated by the containers. In fact, one container author even offered full disclosure that his private server was used to handle some of the traffic involved. While I appreciate that disclosure, I felt it was too insecure. For now, I still think this is the best/safest method.
    1 point
  31. Ding...ding...winner-winner, chicken dinner back to 6.9.1 and it works.......thanks again
    1 point
  32. You guys, this is the weirdest thing. My UnRaid server has a 6800 and a 6800 XT. Both have been working perfectly since February. Yesterday, I installed water blocks on both cards. Since I did that, reset doesn't work anymore. If I reboot the VM, the display doesn't come back, and one CPU thread assigned to the VM gets stuck at around 87%. Changing the cooler on these cards couldn't possibly cause this, right? The only other thing I did was install an NVMe SSD in the M.2 slot. Do you think that could cause reset to fail for both cards? Edit: False alarm. It was because the new SSD changed the IOMMU groups, and for some reason UnRaid stopped stubbing the serial bus controller for each card. This was causing reset to fail. Just in case anyone needs to know, or in case I forget this again (lol), these are the devices that need to be passed through for an AMD reference card. Obviously, your PCIe IDs will be different.
     AMD Radeon RX 6800/6800 XT / 6900 XT (0a:00.0)
     AMD Device (0a:00.1) <--- sound card
     AMD Device | USB controller (0a:00.2)
     AMD Device | Serial bus controller (0a:00.3)
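     If you need to double-check which functions your own card exposes and which IOMMU groups they landed in after a hardware change, this standard sysfs walk from the Unraid terminal will list everything (nothing Unraid-specific about it):
     for g in /sys/kernel/iommu_groups/*; do
       echo "IOMMU group ${g##*/}:"
       for d in "$g"/devices/*; do
         echo -e "\t$(lspci -nns "${d##*/}")"
       done
     done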
    1 point
  33. @limetech is it possible to revert/disable these changes ("emhttpd: detect out-of-band device spin-up") so we can check whether it's kernel/driver specific? I have reverted to 6.9.1 for now. For info, I have replaced doron's smartctl wrapper with r5215 of smartctl and it's working fine in 6.9.1 and 6.9.2 for both SAS and SATA. Could it be updated for 6.10 or the next 6.9 release?
     root@Tower:/usr/sbin# ls smart*
     smartctl* smartctl.doron* smartctl.real* smartd*
     root@Tower:/usr/sbin# smartctl
     smartctl 7.3 2021-04-07 r5215 [x86_64-linux-5.10.21-Unraid] (CircleCI)
     Copyright (C) 2002-21, Bruce Allen, Christian Franke, www.smartmontools.org
     ERROR: smartctl requires a device name as the final command-line argument.
     Use smartctl -h to get a usage summary
     root@Tower:/usr/sbin# smartctl -n standby /dev/sde
     smartctl 7.3 2021-04-07 r5215 [x86_64-linux-5.10.21-Unraid] (CircleCI)
     Copyright (C) 2002-21, Bruce Allen, Christian Franke, www.smartmontools.org
     Device is in ACTIVE or IDLE mode
     root@Tower:/usr/sbin# smartctl -n standby /dev/sdf
     smartctl 7.3 2021-04-07 r5215 [x86_64-linux-5.10.21-Unraid] (CircleCI)
     Copyright (C) 2002-21, Bruce Allen, Christian Franke, www.smartmontools.org
     Device is in STANDBY BY COMMAND mode, exit(2)
     root@Tower:/usr/sbin#
    1 point
  34. Still 43? sudo apt install backport-iwlwifi-dkms Did you try this one?
    1 point
  35. Have a look at this thread. You need to add options for the i915 driver.
    1 point
  36. If you want to have 3 domains all point to your public WAN IP, it is very easy to do. The Cloudflare DDNS container would update your main domain's IP (yourdomain1.com) by means of a subdomain. For example, you would set the container to update the subdomain dynamic.yourdomain1.com to be your public WAN IP. So then dynamic.yourdomain1.com will always be pointing to your public WAN IP. Then for any other domains or subdomains that you want to point to your public WAN IP, you just make a CNAME in your Cloudflare account which points to dynamic.yourdomain1.com. So for example you could point a subdomain such as www.yourdomain2.com to dynamic.yourdomain1.com. However, many domains like to use a "naked" domain (which is a regular URL just without the preceding www), so for example you can type google.com and go to Google without having to type www.google.com. This "naked" domain is the root of the domain. The DNS spec expects the root to be pointing to an IP with an A record. However, Cloudflare allows the use of a CNAME at the root (without violating the DNS spec) by using something called CNAME flattening, which enables us to use a CNAME at the root but still follow the RFC and return an IP address for any query for the root record. So therefore you can point the "naked" root domain to a CNAME, and could use yourdomain2.com pointing by CNAME to dynamic.yourdomain1.com. So with your 3 domains you can point any subdomain or the root domain to dynamic.yourdomain1.com (updated by the Cloudflare DDNS docker container on Unraid). But you don't need to use Cloudflare DDNS if you don't want to. Instead of using the Cloudflare DDNS container you could use the DuckDNS container. Then for the subdomain dynamic.yourdomain1.com, instead of having its A record updated by the Cloudflare container, you would use a CNAME for dynamic.yourdomain1.com to point to yourdomain.duckdns.org (or whatever your DuckDNS name was). So basically it's just a chain of things that eventually resolve to an IP, which is your public WAN IP. Also, the SWAG reverse proxy allows you to make Let's Encrypt certs for not just one domain but multiple domains too. I hope that makes sense.
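     To sanity-check the chain once the records are in place, a lookup from any machine should show the CNAME hop resolving to your WAN IP (the domain names and the 203.0.113.10 address below are placeholders):
     dig +short www.yourdomain2.com
     # dynamic.yourdomain1.com.
     # 203.0.113.10
     dig +short yourdomain2.com   # the root is CNAME-flattened, so only the A record comes back
     # 203.0.113.10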
    1 point
  37. So this is interesting, apparently my Traktarr Docker is basically a super computer and is using >9EB of RAM: I don't even remember installing that much RAM and it seems to be holding the entirety of the internet in memory...
    1 point
  38. This has nothing to do with it, since the Penryn CPU is emulated within the VM, and you need nested virtualization enabled on the host. As far as I know this is still not possible (and I doubt it will be in the future..) with macOS + an AMD CPU (macOS's Hypervisor.framework doesn't support AMD-V). Moreover, you can't have nested virtualization with an emulated CPU; you need host-model or host-passthrough, which with AMD can be quite complicated.
    1 point
  39. Good to see you back! And hope you're feeling better.
    1 point
  40. I know that this is an old post, but for everyone who still has the same problem of rtc working on the 1st day and not the second, this is my version of the code:
     echo 0 > /sys/class/rtc/rtc0/wakealarm
     time=5:58
     now=$(date +%s)
     other=$(date -d $time +%s)
     if [ $now -ge $other ]
     then
       echo `date '+%s' --date='tomorrow 5:58:00'` > /sys/class/rtc/rtc0/wakealarm
     else
       echo `date '+%s' --date='today 5:58:00'` > /sys/class/rtc/rtc0/wakealarm
     fi
     The first line (echo 0 > /sys/class/rtc/rtc0/wakealarm) resets the rtc alarm, which is what makes the code work every day. I use this code because the rtc function in my BIOS doesn't wake UNRAID from sleep, but this code worked a treat! If you wish to change the time, change all of the time values to your desired time. This code also has to be placed in the 'custom commands before sleep' text box.
    1 point
  41. I purchased the ASRock Rack E3C246D4U and there is a way to enable the iGPU without installing the beta BIOS. I'm currently running P2.30 with the iGPU enabled. There is a key combination you need to press when booting your system. After powering on, the boot splash screen will display the ASRock Rack image and the message "Updating FRU system devices". When you see "Updating FRU system devices", press Ctrl+Alt+F3 and it will load the BIOS menu. In the BIOS menu, you will see an additional page labeled IntelRC Chipset. Select System Agent (SA) Configuration, then Graphics Configuration, and then Enable IGPU Multi-Monitor.
    1 point
  42. Here is another key element of the UUD Version 1.5. This new datasource should have been created WITHIN GRAFANA if you followed Varken's default installation instructions. However, I thought it would be helpful for everyone to see it! NEW DATASOURCE FOR THE UUD 1.5 (VARKEN): Once set up, you should see 2 datasources: the default one we used for UUD 1.4 and prior (yours may be named differently than mine), and the new one named "Varken", which will be required from UUD version 1.5 onward if you want real-time Plex monitoring.
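     If you prefer to script it rather than click through the Grafana UI, the datasource can also be created via Grafana's HTTP API. This is only a sketch; the host, the admin credentials and the "varken" database name are assumptions you will need to match to your own setup:
     curl -s -X POST http://admin:admin@grafana:3000/api/datasources \
       -H "Content-Type: application/json" \
       -d '{"name":"Varken","type":"influxdb","access":"proxy","url":"http://influxdb:8086","database":"varken"}'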
    1 point
  43. Same. I now have Catalina (instead of Big Sur, which is what I chose) running, though, and I see the Apple update prompt for Big Sur. Can I update from within macOS Catalina to Big Sur, or is that gonna cause problems? **EDIT** I completely deleted and then reinstalled the container with method 2, instead of just changing it after the VM was installed, and it worked.
    1 point
  44. Self-replying as this is now resolved and I can get a sustained 300 Mbit/s+. The solution was to add a new variable to the Syncthing docker container named UMASK_SET with a value of 000. I'm using the linuxserver.io docker image and it appears permissions were the cause of the speed issues.
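     On Unraid that means editing the container template and adding a Variable with key UMASK_SET and value 000. For reference, the plain docker equivalent would look roughly like this (the port and appdata path are assumptions for the sketch):
     docker run -d --name=syncthing \
       -e PUID=99 -e PGID=100 \
       -e UMASK_SET=000 \
       -p 8384:8384 \
       -v /mnt/user/appdata/syncthing:/config \
       linuxserver/syncthing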
    1 point
  45. There are a few in this thread that have the same array/shutdown issue, but you've got a point that it may not be enough to pull it immediately. There is another thread on this forum about the startup message, and I cannot fathom how many people might be experiencing "can't shutdown" issues. I know I searched through the forum enough times and trawled through logs to reach my own conclusion. Still, I think it's important to highlight this link so that others might find it. I disagree that this doesn't warrant further discussion; it definitely does, though certainly less so about the developer abandoning his project. I've been trying to investigate the codebase to find why the issues are encountered, and have already posted some of my own findings, and I would encourage anyone with deeper knowledge of Unraid to chime in on this, as maybe it can be fixed and submitted as a pull request to the repo.
    1 point
  46. As docker's --link option is deprecated, we're no longer able to use multi-container apps like owncloud. Their only supported approach is now via compose. Am I wrong here? Also: people argue that compose "is just a command line tool", while docker works in somewhat the same way. It's a service with a CLI, and some people built GUIs, like the Unraid guys did. There are GUI projects for compose going on out there. No reason why the Unraid team shouldn't see this as a future project. 😉 And docker-compose is no longer available in the nerdpack. As I try not to mess with my Unraid system via command-line tampering, I'm reluctant to move to compose. So now I'm getting more & more restricted in which container images I can use...
    1 point
  47. The problem is when you have something like this: https://github.com/mailcow/mailcow-dockerized/blob/master/docker-compose.yml It would be nice to have a way in dockerman to configure something the way docker compose does it, as a group. In the UI all the dockers could be nested under only one, and, let's say, you would configure all the dockers in the same template. It might be hard, since most of the changes would be in the UI. Somehow dockerman templates could work like a docker-compose yml.
    1 point
  48. That is an example where something is not supported. The workaround is a hack.
    1 point
  49. I've tested these two options: 1) this Piwigo docker, accessing a separate mariadb instance - rather complex, and slow to load large amounts of images; 2) Piwigo, mariadb, nginx, php-fpm and CSF, all on one minimal Debian VM. Both instances of Piwigo access the exact same folders from an Unraid share with terabytes of image files. The second option performs noticeably faster, even without doing proper IO tests etc. The difference is so obvious that I'm not even going to bother testing it with tools. Could be because I run Unraid with a decent Xeon and 32GB RAM, but still, I don't see any advantage to the docker instance for Piwigo, and I just wanted to share that, because frankly, setting up Piwigo in that VM was so much easier. Other than maybe using a little less in the way of resources, I don't understand what all the fuss about having it as a docker instance is about. The SSL/TLS cert for NGINX is located in an Unraid share mounted with an Unraid mount tag in the VM - the same Let's Encrypt wildcard cert I use for the Unraid UI. So no weird proxying or network complexities. Plus, a csf/lfd firewall sits in front of the Piwigo server VM, allowing me to serve the stuff through my internet router to the world.
    1 point