Popular Content

Showing content with the highest reputation on 06/13/20 in all areas

  1. 2 points
  2. 2 points
    The WireGuard implementation itself is always loaded as part of the system kernel; the plugin provides only the GUI front-end and configuration. Removing the plugin will not remove any existing WireGuard configurations. In a future version of Unraid this plugin will be integrated. The reason we started with a plugin was to make the development of WireGuard support independent of the Unraid releases, but we are getting to a point where integration can be done (not yet in Unraid 6.9, though). The WireGuard configuration should have no effect on local connections, i.e. connecting from your LAN. The WireGuard configuration files are stored on the flash device in the folder /config/wireguard; deleting this folder and rebooting the system is like starting without WireGuard.
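    That last point can be sketched as a shell snippet. This is an illustration only: the /boot mount point (Unraid's default flash mount) and the folder layout are taken from the post above; verify paths on your own system before running anything like it.

```shell
# Sketch only: back up, then remove, the WireGuard config folder on the
# flash device mounted at "$1" (/boot on a stock Unraid install).
remove_wireguard_cfg() {
    local wg_dir="$1/config/wireguard"
    if [ -d "$wg_dir" ]; then
        cp -r "$wg_dir" "$wg_dir.bak-$(date +%Y%m%d)"   # keep a backup first
        rm -rf "$wg_dir"
    fi
}
# on a real server: remove_wireguard_cfg /boot   # then reboot
```

    After a reboot the system comes up as if WireGuard had never been configured; restoring the .bak folder and rebooting brings the old tunnels back.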
  3. 2 points
    I agree that it's not okay to complain about not hearing anything since B1, if for no other reason than this is a crazy time (I've thought a few times "man, I hope those guys aren't sick"). But I would say it's a missed opportunity not to share at least something of what's happening behind the scenes. This is a pretty committed and passionate user base, and some level of sharing would only strengthen it. I work in big tech myself, so I get the struggle of figuring out how much to share with your customers. A monthly blog with some latest development details, near-term roadmap info, things to look forward to with Unraid, etc., would go a long way IMHO.
  4. 1 point
    Hello! Now that it is possible to add your own images as case icons, I thought I'd start taking requests to make icons that match the style of the other icons. To make a request, please consider the following:
      1. I do this in my own spare time, so I will probably not give an ETA.
      2. I will need the case manufacturer and model name.
      3. I will need a picture (preferably straight from the front).
      4. If you have something custom I'll give it a shot, but I can't make any promises.
    For reference, these are the icons I've currently made (some of them will appear in a later update). Update: it may be getting a bit cumbersome to find all the icons I've added, so here is an updated overview: Cheers! Mex
  5. 1 point
    Ah, I was writing a reply when you posted this. Glad it's working for you.
  6. 1 point
    Ah, it's an easy mistake to make; I have done the same myself many times before. You can change the Grafana port or the Rocket.Chat port. If you change the Rocket.Chat port you will have to make a matching change in the reverse proxy config file too, as the config file expects Rocket.Chat to be using port 3000. So it's this bit here you would need to change:

    server {
        listen 443 ssl;
        server_name rocketchat.*;

        include /config/nginx/ssl.conf;
        client_max_body_size 0;

        location / {
            resolver valid=30s;
            set $upstream_app Rocket.Chat;
            set $upstream_port 3000;    # <-- change this port to match what's specified in the template
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto http;
            proxy_set_header X-Nginx-Proxy true;
            proxy_redirect off;
        }
    }

    server {
        listen 80;
        server_name rocket.*;
        return 301 https://$host$request_uri;
    }
  7. 1 point
    So you get an error like this when you click on start for the container? Most likely another container is already using the same port. Is anything else using port 3000? Grafana also uses port 3000 by default.
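    A quick way to check that kind of clash from a shell, sketched here with port 3000 (the exact `ss` output format can vary between distros):

```shell
# Sketch only: report whether anything already listens on a given TCP port.
port_in_use() {
    # -l listening sockets, -n numeric, -t TCP; match ":PORT " in the address column
    ss -lnt 2>/dev/null | grep -q ":$1 "
}

if port_in_use 3000; then
    echo "port 3000 is taken - check 'docker ps' for Grafana or another container"
else
    echo "port 3000 looks free"
fi
```

    If the port is taken, mapping the container to a different host port in the template (e.g. 3001 on the host, 3000 in the container) is the usual fix.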
  8. 1 point
    Hi Ich777, First off, thanks for helping us gamers who want to serve on Unraid. Your work on these dockers is very much appreciated! Bear with me, as I'm pretty technical in the Windows arena and with client and server hardware, but quite a neophyte with Unraid/Linux in general.

    Like many others who love the low, low price of free, I installed Ark Survival on Epic as opposed to Steam, because it's hard to say no to free (Steam wanted $50 for it). I played it with a friend on a non-dedicated server and we decided to set up a dedicated server. Well, that turned out to be quite a chore in Windows, so I checked Community Apps for a docker and boom! I installed yours.

    I can see it under servers in Steam (under View | Servers | LAN), but can't when running Ark Survival in Epic (choosing unofficial servers and showing password-protected servers). I have the ports needed to run the dedicated server forwarded on my router to my Unraid server, but I don't own the game on Steam (I left the Steam credentials blank when setting up the docker for the server). Do I need to own a client copy on Steam in order to join my hosted server via your super helpful Ark Survival server docker container on Unraid? Is there a way for me to host this game in your docker and see it with an Epic client install of Ark Survival?

    We can continue to play via a non-dedicated server through Epic on Windows 10, but I like the idea of a perpetual server running regardless of whether my gaming PC is on or not (plus, Linux FTW!). I suppose I could create a Windows 10 VM and tinker with running a dedicated server that way on top of Unraid, but I like the low overhead and reliability of running it via docker if possible. Thanks again for your work on this so far!

    Thinking on this further: if Steam only shows my docker server under LAN, does that mean the game isn't being seen outside my network? I had the ports open on my Windows 10 gaming system and we managed to connect to one another remotely (with Hamachi off), and it worked fine that way through Epic via a non-dedicated server. I moved those port-forwarding rules to my Unraid server's IP to no avail. -Neil
  9. 1 point
  10. 1 point
    Got it, need to use deluser <username>@meet.jitsi:
    prosodyctl --config /config/prosody.cfg.lua deluser username@meet.jitsi
  11. 1 point
    Ok, still poking things to see what happens. I am sure others have already done most of this testing, but I didn't have time to read the whole thread.

    While moving the docker image to an XFS-formatted array drive (I decided to use unBALANCE instead of mover so I could watch the progress), docker was obviously disabled. With docker disabled entirely, I noticed zero writes to btrfs-transacti or loop2 after 5 minutes, as you would expect. In fact I saw no writes of any kind, exactly as I would expect. When I tried to start docker after moving it to the XFS array drive, it would not start, so I restarted the server. It then started up okay.

    So now, watching the writes with docker on an XFS array drive: after 10 minutes I am at 50MB of writes to loop2 and 120MB of writes to btrfs-transacti, which netdata shows is indeed still being written to the cache. That is still over 5TB a year to the cache for no reason at all. Increasing the dirty writeback time to 3 minutes drops the writes in 10 minutes to 35MB for loop2 and 70MB for btrfs-transacti. Not nearly as much of a drop as when docker was on the cache, but still noticeable. This is confirmed by the writes increasing on the cache in the Main view. In 45 minutes of uptime I have about 7500 writes on both the array drive with the docker image and my cache pool. BTW, what does each "write" represent in the Main view?

    If I disable the docker service, then btrfs-transacti doesn't even show up. For anyone who has gone the route of mapping the docker image directly, without loop2: does btrfs-transacti still cause writes to the cache? I might have to go that route if the official fix is months away.

    I was checking the SMART status of my SSDs, and in just over a week since installing Unraid they all ticked down 1 in the lifetime SMART stats. It took me well over a year to do that when using Windows. Sure glad I caught it now.
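    For reference, the 3-minute writeback interval mentioned above maps to a sysctl value in centiseconds. A sketch of the conversion and the (root-only) command to apply it:

```shell
# Sketch only: vm.dirty_writeback_centisecs is in hundredths of a second,
# so 3 minutes = 3 * 60 * 100 = 18000 centiseconds.
CENTISECS=$((3 * 60 * 100))
echo "vm.dirty_writeback_centisecs=$CENTISECS"

# To apply on a live system (as root):
#   sysctl vm.dirty_writeback_centisecs=18000
# This does not survive a reboot; on Unraid it would typically be
# re-applied at boot, e.g. from the flash drive's "go" file.
```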
  12. 1 point
    You need to update the permissions on the user shares. See Tools -> New Permissions and select the user shares that are not accessible.
  13. 1 point
    Wouldn't be my cup of tea. As an option, maybe, but just because lots of services are moving to the cloud doesn't mean that every service has to. If I have a problem with my internet, I can't boot or access my storage? No thank you! Cloud is good and has benefits in a lot of cases. Local is still better for some things.
  14. 1 point
    Hello, thanks for your feedback. I'll write you a short PM and then we can discuss it; we are open to all suggestions for improvement. EDIT: these screenshots are no longer current.
  15. 1 point
    "Array Datenträger"? This mixture of English and German is terrible. Either keep "Array Disks" or go fully with "Plattensubsystem" (see Wikipedia). Since "array" is a fixed term within Unraid, I would keep the former. Windows Disk Management, for example, also has "volumes". Localized help pages and wikis that explain these terms are IMHO much more important than a few table headings in a technical tool. In that spirit I would leave core terms like array, disk, share, plugin, container, VM, user, etc. untranslated. If you translate them, you inevitably fall into the Denglish trap. My opinion.
  16. 1 point
  17. 1 point
    I was just about to do that, thanks!
  18. 1 point
    As always, when it is available in the unifi repo.
  19. 1 point
    I think WireGuard always loads during the boot sequence once it has been activated, as it is a system service. It is only the GUI part of WireGuard that is a plugin. However, that does not rule out some stored WireGuard configuration information causing a problem.
  20. 1 point
    Yes, that's one of the plugins that I thought could be causing issues if misconfigured, though booting in safe mode should not load it.
  21. 1 point
    Then I would recommend redoing the flash drive: back up the current one, re-do it with the USB tool, then restore only super.dat (disk assignments) and your key, both in the config folder. If that works, you can then start restoring the other files to find out which one is the problem, or just reconfigure the server. If redoing the flash doesn't help, you likely have a hardware problem.
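    The minimal-restore step can be sketched like this. The backup location is hypothetical; super.dat and the .key licence file live in the flash drive's config folder as noted above.

```shell
# Sketch only: copy just the disk assignments and licence key from a flash
# backup ("$1") onto a freshly re-created flash drive ("$2").
restore_minimum() {
    cp "$1/config/super.dat" "$2/config/"   # disk assignments
    cp "$1"/config/*.key "$2/config/"       # licence key file(s)
}
# on a real system: restore_minimum /path/to/flash-backup /boot
```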
  22. 1 point
    Ok, so here's the problem: I don't have a discrete graphics card, and therefore it's impossible for me to debug any hardware transcoding issues. I can take a look at the image and see if I can spot anything obvious, but I'm really shooting in the dark here.
  23. 1 point
    You need to go to the top of the container configuration page and toggle Basic View to Advanced View.
  24. 1 point
    Not to my knowledge. The official docker container works okay, but I didn't find a reliable way to migrate the database, so I had to start from scratch, which is unfortunate as I lost all my user settings, watch statuses and such. I would love an update from Binhex himself, at least.
  25. 1 point
    Update: there is a lot going on. Yes, there are always plans.
  26. 1 point
    That's an unbelievably selfish view to have on this. We would all like a new release with new features, bug fixes, etc., but it's not as simple as flicking a switch, and with the current world pandemic things are likely to take even longer. The fact that your only post on here is to bitch that it's been 3 months since a release speaks volumes about how little you know about development, and thus you aren't in a position to judge.
  27. 1 point
    I just updated the first post with this, but I thought it might be useful to add it here as well: Updated overview of all the icons added so far:
  28. 1 point
    I keep revisiting this question myself. I really miss the proper features of an enterprise or semi-pro virtualisation system like VMware/Proxmox (and even FreeNAS, which, recognising it's not so great for VMs/docker, does have extensive configuration capability). I also miss proper file sharing support and domain controllers, which may sound harsh, but Unraid barely scrapes by as an acceptable solution here. I miss proper user accounts, proper directory synchronisation and proper file transfer speeds. I miss true bare-metal, native ZFS support in the above respective cases, docker/VM dependency mapping, virtual machine backups, snapshotting, LXC, proper status graphs, and more integrated VM info via the qemu client toolset. I also miss a properly functioning GUI networking configuration tool and a reliable cache. It's quite frustrating when you know about this stuff and it's either not available or only nearly there.

    However, I do love Unraid's disk expandability, which, even though it kills performance, is more than adequate for a media store. I love Unraid's GPU passthrough and all other hardware passthrough, which is clearly in a class of its own, and its simplicity for pinning cores to VMs and docker containers or excluding them altogether. I love its flexibility with adding and removing disks, its community of add-on features that make things quick and easy via docker, the way Limetech diligently makes it work for all sorts of hardware with barely a grumble, and I love its price. I mostly like its Docker GUI and its VM template setup. I love how extensible it is, with the community plugins and what people manage to do with them. I love how it has frequent updates and quickly passes on new kernel features. I also love that they've clearly spent time thinking about making support easy, and that support is included in the price.

    So depending on the use case, it's either an incredibly hard-to-beat package for the price, or completely missing the necessary features. It's hard to think of another product that's so conflicted in its capabilities; compared with the norm, you'd be forgiven for thinking they skipped over all the important stuff to get to the fun stuff. But then you realise the gold in this: somehow, Limetech have managed to take a true snapshot of their target-market customer, the customer that doesn't want, care or even know about those other things. And when you realise that, you realise just how good a job they've done. IT companies today still struggle to get as closely aligned to their customers as Limetech have. I just wish they'd offer a plus version with the typical virtualisation options; I'd buy that and be happy at the price.
  29. 1 point
    That's about it. You can skip step 8 😄 You can skip steps 2, 3 and 4 if you don't write any new data to your server while doing this. In fact, it's probably simpler if you don't exclude any disks globally; then there won't be any confusion if you rearrange things. And the unBALANCE plugin might be an easier way to accomplish step 5.

    I haven't looked at that video, but I know some configurations of Krusader won't let you work directly with the disks, which is what you need to do here. You should never work with user shares and disks at the same time, or you could lose data.

    As for step 10 and your last question: yes, you can rearrange the DATA disks. The main thing you MUST NOT do is assign a data disk to the parity slot, since it would be overwritten. Some people are afraid New Config wipes out everything they have done; the only thing it does is allow you to change your disk assignments, and it won't even write anything except (optionally) parity. And you do have to rebuild parity when removing disks. Just in case: if it offers to format anything at any point, DON'T.
  30. 1 point
    Sorry to kick this topic, as it is already marked solved, but I'm still a bit puzzled; because I'm new to Unraid, everything is "scary" when making changes like this. If I read and understand everything correctly, then this is what I should do to remove one (or more) disks at once (I copied this list partially from @Luca):
      1. Make a screenshot of your array (the disk locator plugin is very useful here).
      2. Stop the array.
      3. Exclude the disk(s) in the global share settings.
      4. Restart the array.
      5. Move all the data from the disk(s) you want to remove to other disks, preferably using Krusader (great vid by SpaceInvaderOne on YouTube!).
      6. Stop the array.
      7. Physically remove the disk(s) from the existing array.
      8. Start the array (not possible since there are missing disk(s), so go to 9).
      9. Run Tools / New Config (keeping existing parity disk(s)).
      10. Assign the remaining disks (I assume the remaining disks need to be assigned in exactly the same places where they were, hence the screenshot).
      11. Start the array / run parity-sync.
    So, um... if I understand the procedure correctly, it does not matter if you yank out more than one disk at once, as long as the data from those disks has been moved to other disks. Parity needs to be rebuilt anyway, and yes, the array is not protected by parity at this point. From my understanding this is only possible because Unraid is not a RAID but a JBOD with a normal filesystem on it.

    Also: can this procedure also be used to rearrange the physical disks (move them into other physical slots of the array), or does Unraid not care where the disks are as long as the order in the Unraid config stays the same? Could someone confirm this?

    edit 16feb2019: 12. Do not forget to re-set the global share to include all disks and exclude none if you want Unraid to use any new disks in those "slots". I added 3x 3TB from my old rig this morning and was wondering why I had a disk share named "disk2" and why disk 2 was not spinning up. Disk 2 was the one I removed a couple of days ago.
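    Step 5 above, done directly disk-to-disk as the replies recommend, can be sketched like this. The disk numbers are examples only, and the key rule from the replies applies: copy between /mnt/diskN mounts, never mixing in /mnt/user paths.

```shell
# Sketch only: copy one array disk's contents straight onto another
# array disk. Always disk-to-disk, never via /mnt/user.
copy_disk() {   # $1 = source disk mount, $2 = destination disk mount
    # -a preserves permissions/times, -X extended attributes, -P shows progress
    rsync -aPX "$1/" "$2/"
}
# on a real server, e.g.: copy_disk /mnt/disk3 /mnt/disk1
```

    After the copy completes and is verified, the source disk can be removed and New Config run as in steps 6-11.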