Leaderboard

Popular Content

Showing content with the highest reputation on 01/10/21 in all areas

  1. I have updated the videos. Apologies for the error. I will say, the UUD is far better than whatever I had before
    3 points
  2. @smidley @madelectron and anyone else wanting to test Intel iGPU support: First install the Intel GPU Top plugin from @ich777. This is NOT the Docker version on Community Apps; it has to be the plugin version by ich777. Then uninstall my current plugin and manually install this one: https://raw.githubusercontent.com/b3rs3rk/gpustat-unraid/dev-intel-test/gpustat.plg Make sure you go to Settings and set it to Intel. If you have a valid iGPU, it should show up on the settings page. If not, message me the output of the following command run from the Unraid console: lspci | grep VGA. If your iGPU is detected but you're not getting any data on the dashboard, go to the Unraid console, type/copy-paste the following, and send me the result: cd /usr/local/emhttp/plugins/gpustat; php ./gpustatus.php. If you are getting some, but not all, of the data you'd like to see, review the Unraid console output of: intel_gpu_top. If a metric I display on the dashboard is not showing up for you in intel_gpu_top, your CPU/iGPU/chipset doesn't support monitoring it; I recommend disabling that particular metric in your plugin settings. If the metric shows up in intel_gpu_top but still doesn't display on the dashboard, send me the output of: timeout -k .500 .400 intel_gpu_top -J -s 250. To revert to my existing plugin, just uninstall the test version manually and reinstall from Community Apps. Enjoy!
    2 points
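The checks from the post above can be strung together into one sketch (a hypothetical helper script, not part of the plugin; paths and flags are taken from the post, and each step is skipped if its tool isn't present so it is safe to run anywhere):

```shell
#!/bin/sh
# Hypothetical wrapper collecting the iGPU checks from the post above.
# Each step runs only if its tool or path exists on this machine.

if command -v lspci >/dev/null 2>&1; then
  echo "== PCI VGA devices =="
  lspci | grep -i vga || echo "no VGA device detected"
fi

# The plugin's own status probe (path from the post; exists only on Unraid)
if [ -f /usr/local/emhttp/plugins/gpustat/gpustatus.php ]; then
  echo "== gpustatus.php output =="
  (cd /usr/local/emhttp/plugins/gpustat && php ./gpustatus.php)
fi

# Raw JSON sample the plugin parses (same invocation as in the post)
if command -v intel_gpu_top >/dev/null 2>&1; then
  echo "== intel_gpu_top JSON sample =="
  timeout -k .500 .400 intel_gpu_top -J -s 250 || true
fi

DIAG_DONE=1
echo "iGPU diagnostics finished"
```

Running it once and pasting the whole output would cover all the cases the author asks about.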
  3. @PlanetDyna The Docker solution would, in my opinion, also be the best option for you, since it makes you considerably more flexible than a VM. Yes, with unRAID 6.9.0rc2 you can create multiple cache pools, but only a single array. As @mgutt already said, the encryption you are using now basically only protects against theft of the system; you could just as well use the encryption built into Nextcloud. Of course you can also encrypt the pools, or just one pool, but you have to choose that right from the start for the pool in question. I'm not entirely sure anymore how exactly it works, but you have to format the pool you want to encrypt as BTRFS encrypted. EDIT: Found it, so roughly like this... Click
    2 points
  4. ...You don't need to reinstall anything... just move it to the cache. Like this: disable the Docker and VM services under Settings; install the cache disk and add it to the array as such; start the array; set the shares "mnt/user/appdata", "/mnt/user/system" and "/mnt/user/domains" to "Prefer" under "Use cache"; start the mover manually and let it run (maybe check the logs to see whether the mover "leaves files behind"; if so, you still have processes running... back to step 1); set the above shares to "Only" under "Use cache" and, if you want, run the mover once more; start the Docker and VM services again. Done...
    2 points
  5. @grtgbln Thanks man. Just watched. Great work. I appreciate you taking the time to do this for our community! Here are the links to the video series. Intro: Part 1: Part 2: Part 3:
    2 points
  6. Hi, maybe this helps with how to use the new --net=container:Container_Name function to use another Docker container's network: a nice feature for routing traffic through a VPN container when the client container can't use a proxy. Sample use case: I run a VPN container that provides a Privoxy HTTP proxy and a SOCKS proxy, but I have a container like xteve that has no option to route traffic through an HTTP or SOCKS proxy. So if I wanted to run it through the VPN, I previously had to either put the whole machine behind a VPN or build a container that bundles the VPN AND xteve. With this feature enabled, we can now route any container through the VPN container pretty easily. I'll describe two scenarios: 1. all containers on custom:br0 with their own IPs (a nice feature which has worked properly with host access since 6.8.2, as a note); 2. a VPN container such as binhex privoxy, ovpn privoxy, etc. on the host in bridge mode (port mappings needed).

Scenario 1. Basic situation before bridging to the VPN: ovpn_privoxy is my VPN container, connected to my VPN provider and providing, as mentioned, an HTTP and SOCKS proxy; xteve can't use those features. As mentioned, here each of my containers is on br0 with its own IP. Now I'll bridge xteve to use the VPN container. To do so, simply remove the network from xteve and add the following (in this use case) to Extra Parameters: --net=container:ovpn_privoxy. Now xteve uses the network stack from the VPN container. The xteve container no longer has its own IP and uses container:ovpn_privoxy as its network.

To reach the xteve web UI you now enter the IP of ovpn_privoxy and the port of the client app, http://192.168.1.80:34400/web in this use case. The external traffic from xteve now uses the VPN connection from ovpn_privoxy. That's it; thanks to Limetech. When adding another container you can do the same, but beware: since there is only one network stack left, it's not possible to run apps that use the same ports. For example, if I wanted a second instance of xteve running through the VPN container, both listening on 34400, that would NOT work, even though they live in their own containers; the network stack is unique to the ovpn container. So either the 2nd, 3rd, ... app can use different ports (xteve, for instance, can be switched to any port) or it's simply not possible, because ports are unique. Example with a second working app as above, with ovpn_privoxy as the container providing the network: to reach the clients now, http://192.168.1.80:34400/web <- xteve app; http://192.168.1.80:6555 <- emby app. Of course the HTTP proxy (port 8080) and SOCKS proxy (port 1080) are also still available; this has no influence on them... I hope this helps with how to use the --net... extra parameter.

Scenario 2 (VPN container running on the Unraid host in bridge mode). The only difference is that you have to add the port mappings to the VPN container; in this case I would add 34400:34400 and 6555:6555 to the VPN container. (My Unraid server has the IP 192.168.1.2.) That's the only difference when using the VPN container in bridge mode: your VPN and apps are all accessed via 192.168.1.2:..... In both use cases there is another nice feature Limetech added: as soon as the VPN container gets an update, the "client container(s)" need to update too, which in the end is just a restart to pick up the correct network stack.

You should see an update notification on all containers tied to the VPN container as soon as it receives an update or you change something on it. If so, please update or restart the container(s). It shouldn't happen too often (depending on the update frequency of your VPN container). If I can do anything better, let me know and I'll correct it.
    1 point
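For reference, the two pieces of the setup above can be sketched as plain docker commands (a non-runnable sketch: the image names are placeholders, and the container names are the post's examples; on Unraid you would put the --net flag in the template's Extra Parameters field instead):

```shell
# VPN container publishes its own proxy ports plus the client apps' ports
# (34400 for xteve, 6555 for emby), since the clients share its stack:
docker run -d --name ovpn_privoxy \
  -p 8080:8080 -p 1080:1080 -p 34400:34400 -p 6555:6555 \
  some/vpn-privoxy-image   # placeholder image name

# Client container gets NO network or port mappings of its own;
# it joins the VPN container's network stack:
docker run -d --name xteve --net=container:ovpn_privoxy some/xteve-image
```

Note the design consequence the post describes: published ports must be declared on the container that owns the network stack (ovpn_privoxy here), which is why only one client per port number can exist.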
  7. Hello Unraid Community, I made a beginners guide/tutorial to install/setup Unraid (It was using v6.6.6). I cover the following topics: USB Key - 18:00 BIOS - 3:42 Disk Array - 4:56 Parity Check - 10:30 Format Drives - 11:03 Single Share - 11:38 PSA - 21:11 Security - 22:11 Share per Media Type - 28:43 APC UPS - 40:36 10 Gigabits/Second Transfer Test - 43:11 Share Troubleshooting - 44:41 I hope it helps those looking for some initial direction to get started and be inspired to continue their Unraid journey. How to Install & Configure an Unraid NAS - Beginners Guide
    1 point
  8. I believe you answered your own question. Once they have access to the Unraid GUI, they have complete control. You must secure any access with a VPN tunnel or something similar, e.g. TeamViewer or other secure remote access through another machine on the LAN.
    1 point
  9. I actually feel the same way about Transcend. So I figure I'm well served with Samsung, and at this total price the few extra euros don't really matter anymore...
    1 point
  10. You are using the :alpine tag. Switch to latest.
    1 point
  11. OK, the fix is now in for spaces in share names. During my testing I also noted the default include extensions should be * not *.*, to ensure files with no extension are also locked (if no include extension is specified).
    1 point
  12. I’ve created a 4GB Ramdisk folder for Plex transcoding and it’s working perfectly. It fills up and then Plex clears it out seamlessly. Thanks again for the reply.
    1 point
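A RAM disk like the one described above can be created with a tmpfs mount (a sketch under assumptions: the 4 GB size matches the post, but the paths are illustrative, and the commands need root; in the Plex container you would then point the transcoder temp directory at the mapped path):

```shell
# Create a mount point and mount a 4 GB tmpfs on it (illustrative path)
mkdir -p /tmp/plex-transcode
mount -t tmpfs -o size=4g tmpfs /tmp/plex-transcode

# Then map it into the Plex container, e.g.:
#   Host path:      /tmp/plex-transcode
#   Container path: /transcode
# and set Plex's "Transcoder temporary directory" to /transcode.
```

Because tmpfs lives in RAM, anything Plex writes there vanishes on reboot, which is exactly what you want for transcode scratch space.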
  13. I'm not going to go to a pay model. I want this to stay a free solution, but in order to do that I need to change things. Take a pause and let me see what I come up with as an answer that will work for all of us.
    1 point
  14. Generally the speed is good at first and drops over time as the heads move from the outer edge of the platters toward the center. Looking at my example, I start at 190ish MB/s and end up around 90ish, for an average of 145 MB/s.
    1 point
  15. Forget it, my fault, I didn't save the settings (vendor: Intel)... sorry
    1 point
  16. I studied the B365 manuals from Asus and for the Gigabyte B365 M AORUS ELITE, and none of them list any restrictions (apart from M.2 SATA, but that's already known): so the ASRock won't have any either.
    1 point
  17. I recently replaced 2 data drives at once when my last parity check was 7 or more days old. It worked fine. It should be noted that i have 2 parity drives.
    1 point
  18. Then the manual from ASRock would be wrong. Test it first. I can't imagine that.
    1 point
  19. ...Why the gaming variant?... The "normal" one can also do 2x NVMe-PCIe https://geizhals.de/asrock-b365m-pro4-90-mxb9t0-a0uayz-a1963468.html?hloc=at&hloc=de ...I also think I read something about it in the hardwareluxx forum... I just can't find it right now.
    1 point
  20. When you have unraid booted you should be able to mount the NTFS drives using the Unassigned Devices plugin and access the files directly. Then if you wish you can copy the files you want to the Unraid drive using mc at the command line. What you are attempting to do is not a normal setup by any means, and for several reasons you would be better off assigning some random USB stick (not the Unraid boot drive) as disk1 for Unraid, and using the 480GB as a cache disk. The label "cache" is misleading, it's more of a fast access general apps drive, which fits your layout much better. All the tutorials and examples assume you are accessing Unraid from a different machine, so in order to do it all from one tower you are going to have to adapt and overcome.
    1 point
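The mount-and-copy route described above looks roughly like this from the console (a sketch only: sdX1 is a placeholder for the actual NTFS partition, and the share paths are examples; the Unassigned Devices plugin does the mounting for you from the GUI):

```shell
# Mount the NTFS drive read-only at a temporary mount point
mkdir -p /mnt/ntfs
mount -t ntfs -o ro /dev/sdX1 /mnt/ntfs

# Browse and copy with Midnight Commander...
mc /mnt/ntfs /mnt/user

# ...or copy directly, preserving attributes (example paths)
cp -a /mnt/ntfs/somefolder /mnt/user/someshare/
```

Mounting read-only is a small safety net while the files still only exist on the NTFS drive.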
  21. Cases like this are available used from Chenbro themselves as the manufacturer, or e.g. from server vendors like Supermicro, Terra Server or Thomas Krenn, iirc. With a bit of luck you'll even catch one that already comes with a rack-mount kit. Clear advantage: ATX power supplies, which are readily available in current technology and cheap. Interesting, not bad either. The Chenbro 105 series in particular is quite nice in my opinion, and the SR10566 specifically can also be mounted in a cabinet with a rack-mount kit. The slide rails offered by Chenbro should fit as well. Edit: see the Accessory button on the linked Chenbro page. Regards, Martin
    1 point
  22. Yes, but it is a tad complex. You have to go into the JSON code of the entire dash directly, find the geolocation code, and change the default GPS coordinates. In version 1.6, I am going to try to make this a real panel where this can be done more easily. When I get some time, I'll provide the line numbers you need to change.
    1 point
  23. Sorry for the delay. This worked on mine using GUS just fine.
    1 point
  24. I could have kept searching for another three hours. During the recent renovation, apparently the left port of the LAN outlet was wired up. It used to be the right one, and I hadn't needed the connection since then. Nobody expects something that silly. Now everything is running, including IOBroker. Next Tuesday the cache SSD arrives; then IOBroker will be installed and set up there again. Until then I can already do a bit of testing. Best regards, Marlon
    1 point
  25. Uhhh... no network. 🙄 OK, I assume you already have a network cable plugged in and didn't disable the thing in the BIOS, hmmm? -> In the BIOS, default/standard values are fine for everything at first! ...only fiddle with it until the USB stick is accepted as the boot drive. Which motherboard exactly?... This one, which @mgutt recommended above? https://geizhals.de/asrock-b365m-pro4-90-mxb9t0-a0uayz-a1963468.html It does have an I-219V, but that revision should work with unRAID 6.8.3. Unless a newer batch has a different, newer revision of the NIC built in 😬 -> Log in as "root" and then -> what does "lspci" say?? If you see an "Ethernet controller [xxxx]: Intel Corporation Ethernet Connection (11) I219-V" there... with a number in the round brackets >9... you're out of luck. 🤣 -> In that case, try the unRAID 6.9 RC... you can select it in the USB Creator.... If you have a spare PCIe network card, you can also try that first. See also:
    1 point
  26. I see a lot of connections to ProFTP from all over the world. Is that expected? (Seychelles, USA, France, Belgium, UK, Japan, ...)
    1 point
  27. One of the complications is that the funding campaign will have to be significant enough to also cover all the users outside the Unraid community. I do not expect Unraid users to bear the brunt of the funding and fully fund the effort. With so many users out there, it shouldn't be a problem. I'm working on it.
    1 point
  28. Quick note. I have the Fujitsu board at home and ran it for a year with the Pentium Gold 5400 (now a Xeon). It works flawlessly with ECC modules (2x16 GB; 1x Samsung, 1x Kingston). So nothing stands in the way there. If you buy an i3 9xxx CPU, make sure it has an iGPU; there are versions without one, which are cheaper. Unlike the Pentium, the i3 has 4 real cores, which might be better for VM use. I had problems with the Asus 10 Gbit NIC in my Unraid server: recurring network dropouts. I then moved it into my Hackintosh and put a 10 Gbit Intel card with a T550 T1 chipset into Unraid instead. It works much better and runs out of the box with Unraid. Regards
    1 point
  29. Have you installed the GPU Dashboard? Try uninstalling that plugin first. nvidia-smi isn't active if you install the plugin and shouldn't use any CPU power; something else must be using it.
    1 point
  30. Thanks for sending those. I'd say try these changes and see if you have success: Change your download folder directory. Right now, your completed downloads are 'upstream' of your downloads in progress. I'm not sure if you HAVE to do this, but it might be worth a try. Create a /Downloads/complete directory and keep your current /Downloads/incomplete. Check the box for 'Share Ratio', then keep the bubble checked for 'Pause torrent.' Make the share ratio as low or high as you'd like. Maybe turn it down as low as you can while testing. Also, your downloads may not have been removed because they are set to seed for high share/time ratios. Decrease all the numbers for 'Seeding Rotation' while you are testing, then bump them back up to whatever you'd like.
    1 point
  31. 1 point
  32. So for my main box I got around it by using the community kernel direct download here to get it going, in case anyone else gets stuck, until this plugin is fixed or whatever is happening. Note the direct download is 6.8.3 though, with ZFS 2.0.0.
    1 point
  33. You need to go into your camera config and change how it records; read the ZM documentation for more details on what does what. Yeah, there's a lot going on, and I don't pretend to have found it all. mlapi isn't designed to run completely on a remote device just yet. Reading between the lines, it seems one reason for making the API was so that the ML models could stay in memory and not have to be loaded on each 'image check', which apparently they otherwise do for one reason or another. I'm not a code monkey who catches everything; I only know enough to be dangerous. Never doubted it was an octopus crossed with an ivy trestle of a project, probably never really capable of nor intended to run this way. Put it on a resume for those who can begin to understand everything the zm ml docker has to do. I'll be honest, I've always played with ML each time I refactored my 'server solutions', as I would get bored/annoyed with the current solution, but would inevitably roll back to no ML and just ZM recording 24/7: sometimes with this docker, sometimes a VM, sometimes the FreeNAS/TrueNAS plugin (don't start me on that one). Let's be honest, a CCTV system is 'only good' if it A) records everything, B) never misses anything, and C) lets you go back and find something in a pinch, in case of a break-in/theft etc. Everything else is sex appeal and gravy, as fantastic and valuable as it can be, and I can't fault anyone buying/building a race car when you only need a Honda/Toyota to get around. This last 'install' was probably the first time I had the confluence of knowledge, understanding, documentation, examples, code state, luck, and so on to get it all working in a way that wouldn't be an annoyance with zmNinja etc. And then it shutters (which I do not fault). But I probably owe you a few pints for the entertainment value; you're right.

Again, the only real annoyance for me in being told 'you're on your own' in the end is that Docker, right now, is probably the 'easiest' way to share GPU power (caveat emptor) among home server tasks (Plex, ML, tinkering, etc.), but not as easy as spooling up a VM from the development/deployment standpoint. I'll have to investigate this. I remember my early days with Docker: not understanding that containers are designed to be immutable, messing around inside one, updating/upgrading it, and then wondering 'where did all the stuff I did go?' If this ends up being the solution, one will have to be VERY careful not to accidentally kill this container with an update/upgrade etc.
    1 point
  34. Key in the config folder is linked to the flash ID embedded in hardware, so as long as you use the same physical USB stick, the license key will work. Doesn't matter if you reformat.
    1 point
  35. Thanks! 1. Tried just replacing "bzroot-gui", but that didn't help; same error. 2. Formatted the USB stick. 3. Downloaded the latest version of Unraid. 4. Copied it to the USB stick. 5. Replaced the config folder. 6. Ran "make_bootable.bat". 7. Server up and running again! 😃 Didn't need to do anything with the key(?)
    1 point
  36. Yes.... I guess that's the answer to your question. I got an APC UPS recently, plugged in the USB, and boom: connected and working. I just had to change the shutdown times etc., but yes, it's that easy if it's APC and the drivers support it.
    1 point
  37. I really didn't expect drama like this coming out of UnRAID, I have to say. Sympathize with both sides of this particular argument, but I do hope the impacted developers see the olive branch being offered as genuine and come back to the fold. We'll all be stronger for it.
    1 point
  38. I think it is that easy. I think I am using that UPS; it just works. I set it up years ago.
    1 point
  39. You can always review the support page to see what issues people are having with a particular firmware. Here is the link. https://community.ui.com/releases
    1 point
  40. Most likely. Try backing up the config folder and then re-creating the flash; if that fails, transfer the key.
    1 point
  41. [Possible fix for Big Sur and samba (smb) transfer hangs: kernel panic and/or error -8084] If you are experiencing issues with Big Sur and samba (smb) transfers, here is a possible fix that is working for me. Change the size of the buffers the kernel allocates to hold the data with SO_RCVBUF and SO_SNDBUF to 65536 bytes: add this line to the [global] configuration of your samba server: socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536 This fixed the file-transfer hangs for me. I still have some issues running a virtual machine with VMware over smb; the whole macOS crashes (it didn't happen with Catalina). NFS is still good for this. UPDATE: bjornatic seems to have found that the solution is switching to the virtio-net network model (I confirm it's working).
    1 point
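For anyone applying the fix above, the relevant part of smb.conf ends up looking like this (a config fragment; only the socket options line is the actual change from the post, the rest of your [global] section stays as it is):

```
[global]
    # Fixed 64 KiB socket buffers: works around the Big Sur smb transfer hangs
    socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536
```

Restart the samba service after editing so the new socket options take effect.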
  42. Quote from the GitHub/docker hub readme-
    1 point
  43. @Schmeckles23 Unraid 6.9 will support the sensors for the newer Ryzen chips. You can already use one of the BETA builds, wait for the stable version, or fiddle around with a custom kernel. It's up to you 😄
    1 point
  44. On the Docker config click advanced, then change the WebUI line http://[IP]:[PORT:8888]/
    1 point
  45. ** I think I found a solution to fix this @Kewjoe ** I was having this exact same issue: Set up the official OnlyOfficeDocumentServer docker container with a reverse proxy to port 4430 (also with a secret key variable set as JWT_SECRET). If I went to https://documentserver.mydomain.com I would get the green checkmark and welcome page. If I added https://documentserver.mydomain.com and the secret key I created to the OnlyOffice settings in Nextcloud these settings were accepted and saved. Then finally, if I tried to open an EXISTING .doc or .docx document I would just get a blank white screen under the Nextcloud header bar. The same thing would happen if I created a new document. My assumption was that perhaps some setting in my general NGINX ssl.conf was preventing the frame from opening OnlyOffice Documentserver. So, I went to the Github page for the Linuxserver LetsEncrypt container and pulled the most recent code from the ssl.conf (located here: https://github.com/linuxserver/docker-letsencrypt/blob/master/root/defaults/ssl.conf) and updated the ssl.conf in my Letsencrypt appdata folder. After restarting Letsencrypt and Nextcloud everything is now working! Hopefully that helps you as well. My only concern is that I now get an A at SSLLabs, rather than an A+ like I used to get with my previous, more complicated ssl.conf...
    1 point
  46. No problem. Sometimes when you're so into how unRAID all works, it's easy to forget that a "simple" thing isn't so simple to someone just starting out. Keep at it. It's so worth it.
    1 point
  47. Go to the Docker tab. Find the Binhex-Krusader in the Dockers list. Click on it (see picture below) and select "Edit". The Binhex-Krusader Docker edit screen pops up. Go down the list until you see the /mnt/disks/ for the container path for unassigned. Click the "Edit" button to the right of it. The details pop-up for the unassigned devices paths. Go down the list to "Access Mode" and change that to "RW/Slave" and then save.
    1 point
  48. After that, upload it to github, make an announcement thread and let me know to add it to the repositories thread and to CA
    1 point