Leaderboard

Popular Content

Showing content with the highest reputation on 06/13/20 in Posts

  1. The WireGuard implementation itself is always loaded as part of the system kernel; the plugin is only the GUI front-end and configuration part. Removing the plugin will not remove any already existing WireGuard configurations. In a future version of Unraid this plugin will be integrated. The reason we started with a plugin was to make the development of WireGuard support independent of the Unraid releases, but we are getting to a point where integration can be done (not yet in Unraid 6.9 though). The WireGuard configuration should have no effect on local connections, i.e. connecting from your LAN. The WireGuard configuration files are stored on the flash device in the folder /config/wireguard. Deleting this folder and rebooting the system is like starting without WireGuard (a command-line sketch of that reset follows below).
    2 points
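    A minimal shell sketch of that reset, assuming the flash device is mounted at /boot as on a stock Unraid install; the backup destination is purely illustrative:

        # Back up the existing WireGuard settings from the flash drive, then remove them.
        cp -r /boot/config/wireguard /boot/config/wireguard.bak
        # Delete the stored configuration so the next boot starts without WireGuard.
        rm -r /boot/config/wireguard
        # Reboot the server for the change to take effect.
        reboot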
  2. I agree that it's not okay to complain about not hearing anything since B1. If for no other reason than that this is a crazy time (I've thought a few times, "man, I hope those guys aren't sick"). But I would say that it's a missed opportunity not to share at least something of what's happening behind the scenes. This is a pretty committed and passionate user base, and some level of sharing would only strengthen it. I work in big tech myself, so I get the struggle of figuring out how much to share with your customers. A monthly blog with some of the latest development details, near-term roadmap info, things to look forward to in Unraid, etc., would go a long way IMHO.
    2 points
  3. Hello! Now that it is possible to add your own images as case icons, I thought I'd start taking requests to make icons that match the style of the other icons. To make a request, please consider the following: 1. I do this in my own spare time, so I will probably not give an ETA. 2. I will need the case manufacturer and model name. 3. I will need a picture (preferably straight from the front). 4. If you have something custom I'll give it a shot, but I can't make any promises. For reference, these are the icons I've currently made (some of them will appear in a later update). Update: it may be getting a bit cumbersome to find all the icons I've added, so here is an updated overview: Cheers! Mex
    1 point
  4. Ah, I was writing a reply when you posted this. Glad it's working for you.
    1 point
  5. Ah, it's an easy mistake to make, I have done the same myself many times before. You can change the Grafana port or the Rocket.Chat port. If you change the Rocket.Chat port you will have to make a change in the reverse proxy config file too, as the config file expects the port that Rocket.Chat is using to be 3000. So it's this bit here you would need to change (a quick sketch for verifying the edit afterwards follows below):
     server {
         listen 443 ssl;
         server_name rocketchat.*;
         include /config/nginx/ssl.conf;
         client_max_body_size 0;
         location / {
             resolver 127.0.0.11 valid=30s;
             set $upstream_app Rocket.Chat;
             set $upstream_port 3000;    # <-- change this port here to match what is specified in the template
             set $upstream_proto http;
             proxy_pass $upstream_proto://$upstream_app:$upstream_port;
             proxy_http_version 1.1;
             proxy_set_header Upgrade $http_upgrade;
             proxy_set_header Connection "upgrade";
             proxy_set_header Host $http_host;
             proxy_set_header X-Real-IP $remote_addr;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header X-Forwarded-Proto http;
             proxy_set_header X-Nginx-Proxy true;
             proxy_redirect off;
         }
     }
     server {
         listen 80;
         server_name rocket.*;
         return 301 https://$host$request_uri;
     }
    1 point
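    A short, hedged sketch for checking the edited proxy config; it assumes the reverse proxy runs in a container named letsencrypt (the container name is an assumption), while the nginx flags themselves are standard:

        # Test the nginx configuration inside the reverse proxy container.
        docker exec letsencrypt nginx -t
        # If the test passes, reload nginx so the new upstream port takes effect.
        docker exec letsencrypt nginx -s reload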
  6. So you get an error like this when you click on start for the container? Most likely you have another container running with the same port in use. Is anything else using port 3000? Grafana also uses port 3000 by default (a quick way to check from the command line is sketched below).
    1 point
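    A generic way to see what is already bound to port 3000 from the Unraid console; these commands are not from the original post, just a common way to check:

        # List running containers together with the host ports they publish.
        docker ps --format '{{.Names}}\t{{.Ports}}'
        # Show any process currently listening on port 3000 on the host.
        ss -tlnp | grep ':3000'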
  7. Hi Ich777, First off, thanks for helping us gamers who want to serve on Unraid. Your work on these dockers is very much appreciated! Bear with me, as I'm pretty technical in the Windows arena and with client and server hardware, but am quite a neophyte with Unraid/Linux in general. Like many others who love the low, low price of free, I installed Ark Survival on Epic as opposed to Steam because it's hard to say no to free (Steam wanted $50 for it). I played it with a friend on a non-dedicated server and we decided to set up a dedicated server. Well, that turned out to be quite a chore in Windows, so I checked Community Apps for a docker and boom! I installed yours. I can see it under servers in Steam (under View | Servers | LAN), but can't when running Ark Survival in Epic (I chose unofficial servers and see password-protected servers). I have the ports needed to run the dedicated server forwarded on my router for my Unraid server, but I don't own the game on Steam (I left the Steam credentials blank when setting up the docker for the server). Do I need to own a client copy on Steam in order to join my hosted server via your super helpful Ark Survival server docker container on Unraid? Is there a way for me to host this game in your docker and see it with an Epic client install of Ark Survival? We can continue to play via a non-dedicated server in Windows through Epic on Windows 10, but I like the idea of a perpetual server running regardless of whether my gaming PC is on or not (plus Linux FTW!). I suppose I could create a Windows 10 VM and tinker with running a dedicated server that way on top of Unraid, but I like the low overhead and reliability of running it via docker if possible. Thanks again for your work on this thus far! Thinking on this further, if Steam only shows my docker server under LAN, does that mean the game isn't being seen outside of my network? I had the ports open on my Windows 10 gaming system and we managed to connect to one another remotely (we had Hamachi off) and it worked fine that way through Epic via a non-dedicated server. I moved those port forwarding rules to my Unraid server's IP to no avail. -Neil
    1 point
  8. Got it, need to use deluser <username>@meet.jitsi: prosodyctl --config /config/prosody.cfg.lua deluser <username>@meet.jitsi (a docker exec example is sketched below)
    1 point
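    A hedged example of running that inside the Jitsi prosody container from the Unraid console; the container name jitsi-prosody and the username are placeholders:

        # Run prosodyctl inside the prosody container (container name and user are placeholders).
        docker exec jitsi-prosody prosodyctl --config /config/prosody.cfg.lua deluser someuser@meet.jitsi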
  9. You need to update the permissions of the user shares. See Tools -> New Permissions and select the user shares that are not accessible.
    1 point
  10. Wouldn't be my cup of tea. As an option, maybe, but just because lots of services are moving to the cloud doesn't mean that every service has to. If I have a problem with my internet, I can't boot or access my storage? No thank you! Cloud is good and has benefits in a lot of cases. Local is still better for some things.
    1 point
  11. Hello, thanks for your feedback. I'll write you a short PM and then we can discuss it; we are open to all suggestions for improvement. EDIT: these screenshots are no longer up to date.
    1 point
  12. "Array Datenträger"? This mix of English and German is awful. Either keep "Array Disks" or go all the way to "Plattensubsystem" (see Wikipedia). Since "Array" is a fixed term within Unraid, I would keep the former. Even the disk management in Windows uses terms like "Volumes". Localized help pages and wikis that explain these terms are IMHO far more important than a few table headings in a technical tool. In that spirit I would leave core terms like Array, Disk, Share, Plugin, Container, VM, User, etc. untranslated. If you translate them, you inevitably run into the Denglish trap. Just my opinion.
    1 point
  13. I was just about to do that, thanks!
    1 point
  14. As always, when it is available in the unifi repo.
    1 point
  15. I think WireGuard always loads during the boot sequence once it has been activated, as it is a system service. It is only the GUI part of WireGuard that is a plugin. However, that does not rule out some stored WireGuard configuration information causing a problem (a quick way to check whether the module is loaded is sketched below).
    1 point
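    A generic way to confirm from the console whether the WireGuard kernel module is loaded and whether any interfaces are defined; these are standard commands, not something from the post:

        # Check whether the wireguard kernel module is currently loaded.
        lsmod | grep wireguard
        # List any active WireGuard interfaces and their peers (requires the wireguard tools).
        wg show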
  16. Yes, that's one of the plugins that I thought could be causing issues if misconfigured, though booting in safe mode should not load it.
    1 point
  17. Then I would recommend redoing the flash drive: back up the current one, re-do it with the USB tool, then restore only super.dat (disk assignments) and your key, both in the config folder. If that's OK, you can then start restoring the other files to find out which one is the problem, or just reconfigure the server. If redoing the flash doesn't work, you likely have a hardware problem. (A rough command-line sketch of the restore step follows below.)
    1 point
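    A rough sketch of the restore step, assuming the freshly written flash is mounted at /boot and the backup of the old flash was copied to /mnt/user/flash-backup; the backup location is a placeholder:

        # Copy back only the disk assignments and the licence key from the backup (paths are placeholders).
        cp /mnt/user/flash-backup/config/super.dat /boot/config/
        cp /mnt/user/flash-backup/config/*.key /boot/config/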
  18. OK, so here's the problem: I don't have a discrete graphics card and therefore it's impossible for me to debug any hardware transcoding issues. I can take a look at the image and see if I can spot anything obvious, but I'm really shooting in the dark here.
    1 point
  19. You need to go to the top of the container configuration page and toggle Basic View to Advanced View.
    1 point
  20. Not to my knowledge. The official docker container works OK, but I didn't find a reliable way to migrate the database, so I had to start from scratch, which is unfortunate as I lost all my user settings, watch statuses, and such. I would love an update from Binhex himself at least.
    1 point
  21. Update: there is a lot going on. Yes, there are always plans.
    1 point
  22. That's an unbelievably selfish view to have on this. We would all like a new release with new features, bugs fixed, etc., but it's not as simple as flicking a switch, and with the current world pandemic things are likely to take even longer. The fact that your only post on here is to bitch that it's been 3 months since a release speaks volumes about how little you know about development, and thus you aren't in a position to judge.
    1 point
  23. I just updated the first post with this, but I thought it might be useful to add it here as well: Updated overview of all the icons added so far:
    1 point
  24. I keep revisiting this question myself. I really miss the proper features of an enterprise or semi-pro virtualisation system like VMware / Proxmox (and even FreeNAS, which recognises it's not so great for VMs / docker but does have extensive configuration capability). I also miss proper file sharing support and domain controllers, which may sound harsh, but Unraid barely scrapes this into an acceptable solution. I miss proper user accounts, proper directory synchronisation, and proper file transfer speeds. I miss true bare metal, native ZFS support in the above respective cases, docker / VM dependency mapping, virtual machine backups, snapshotting, LXC, proper status graphs, and more integrated VM info via the qemu client toolset. I also miss a properly functioning GUI networking configuration tool and a reliable cache. It's quite frustrating when you know about this stuff and it's either not available or only nearly there. However, I do love Unraid's disk expandability, which, even though it kills performance, is more than adequate for a media store. I love Unraid's GPU passthrough and all other hardware passthrough, which is clearly in a class of its own, and its simplicity for pinning cores to VMs and docker containers or excluding them altogether. I love its flexibility with adding and removing disks, its community of add-on features that make things quick and easy via docker, the way Limetech diligently make it work for all sorts of hardware with barely a grumble, and I love its price. I mostly like its Docker GUI and its VM template setup. I love how extensible it is, with the community plugins and what people manage to do with it. I love how it has frequent updates and quickly passes on new kernel features. I also love that they've clearly spent time thinking about making support easy and that support is included in the price. So depending on the use case it's either an insanely good, incredibly hard to beat package for the price, or completely missing the necessary features. It's hard to think of another product that's so conflicted in its capabilities compared with the norm; you'd be forgiven for thinking that they skipped over all the important stuff to get to the fun stuff. But then you realise the gold in this. Somehow, Limetech have managed to take a true snapshot of their target market customer: the customer that doesn't want, care, or even know about those other things. And when you realise that, you realise just how good a job they've done. IT companies today are still struggling to get as closely aligned to their customers as Limetech have. I just wish they'd offer up a plus version that has the typical virtualisation options; I'd be buying that and be happy at the price.
    1 point
  25. That's about it. You can skip step 8 😄 You can skip steps 2, 3, 4 if you don't write any new data to your server while doing this. In fact, it's probably simpler if you don't exclude any disks globally; then there won't be any confusion if you rearrange things. And the unBALANCE plugin might be an easier way to accomplish step 5. I haven't looked at that video, but I know some configurations of Krusader won't let you work directly with the disks, which is what you need to do here. You should never work with user shares and disks at the same time or you could lose data. As for step 10 and your last question, yes, you can rearrange the DATA disks. The main thing you MUST NOT do is assign a data disk to the parity slot, since it would be overwritten. Some people are afraid New Config wipes out everything they have done. The only thing it will do is allow you to change your disk assignments, and it won't even write anything except (optionally) parity. And you do have to rebuild parity when removing disks. Just in case: if it offers to format anything at any point, DON'T.
    1 point
  26. Sorry to kick this topic as it is already marked solved, but I'm still a bit puzzled - because I'm new to Unraid, everything is "scary" when making changes like this. If I read and understand everything correctly, then this is what I should do to remove one (or more) disks at once (I copied this list partially from @Luca):
      1. Make a screenshot of your array (the disk locator plugin is very useful here).
      2. Stop the array.
      3. Exclude the disk(s) in the global share settings.
      4. Restart the array.
      5. Move all the data from the disk(s) you want to remove to other disks, preferably using Krusader (great vid by SpaceInvaderOne on YouTube! - a hedged command-line alternative is sketched below).
      6. Stop the array.
      7. Physically remove the disk(s) from the existing array.
      8. Start the array (not possible since there is/are missing disk(s), so go to 9).
      9. Run Tools / New Config (keeping the existing parity disk(s)).
      10. Assign the remaining disks (I assume the remaining disks need to be assigned in exactly the same places where they were - hence the screenshot).
      11. Start the array / run parity-sync.
      So euhm.. if I understand the procedure correctly, it does not matter if you yank out more than one disk at once, as long as the data from those disks has been moved to other disks. Parity needs to be rebuilt anyway, so.. and yes, the array is not protected by parity at this point. From my understanding this is only possible because Unraid is not a RAID but a JBOD with a normal filesystem on it.. Also.. can this procedure also be used to rearrange the physical disks (move them into other physical slots of the array), or does Unraid not care where the disks are as long as the order in the Unraid config stays the same? Could someone confirm this..
      edit 16feb2019: 12. Do not forget to re-set the global share settings to include all disks and exclude none if you want Unraid to use any new disks in those "slots". I added 3x 3TB from my old rig this morning and was wondering why I had a disk share named "disk2" and why disk 2 was not spinning up. Disk 2 was the one I removed a couple of days ago.
    1 point
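    A hedged sketch of the disk-to-disk move in step 5 from the command line instead of Krusader; the disk numbers are placeholders, and the key point (also made above) is to copy disk share to disk share, never mixing disk shares with user shares:

        # Copy everything from the disk being removed onto another data disk (disk numbers are placeholders).
        rsync -avX /mnt/disk3/ /mnt/disk1/
        # Once you have verified the copy, the source data can be deleted before pulling the disk.
        # rm -r /mnt/disk3/*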