Leaderboard

Popular Content

Showing content with the highest reputation on 05/06/21 in all areas

  1. This is resolved. It was an issue with RAM. The new memory is in and the server is running smooth and happy again.
    2 points
  2. The license is only validated at array start, so as long as you don't stop the array you should be fine waiting until normal US west coast business hours. @SpencerJ
    2 points
  3. You should be able to just put the RSS URL in your podcast app. Works for me. https://feeds.buzzsprout.com/1746902.rss
    2 points
  4. I believe the issue is that in 6.9.2 and earlier the mover only operates between a pool and the main array. So, to move from one pool to another, you will need to set cache:yes on the share, make sure there are no open files, and run the mover. Then, once all the files are on the main array, you can set cache:prefer to the pool you want to use and run the mover again. Alternatively, you could move the files manually using mc or something like that.
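     A minimal sketch of the manual route from the unRAID shell (the pool names "cache" and "fastpool" and the share name are examples, not from the original post):

       # move a share's files from one pool to another by hand
       rsync -avh --remove-source-files /mnt/cache/myshare/ /mnt/fastpool/myshare/
       # clean up the empty directory tree left behind on the source pool
       find /mnt/cache/myshare -type d -empty -delete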
    1 point
  5. Yep, on downgrading the firmware to 20.00.07.00 the disks are now showing up correctly in 6.9.2. Super weird, the .11 firmware is what it shipped from the reseller with. Regardless, I really appreciate your help, and I'll go ahead and mark resolved.
    1 point
  6. Ah yes, you have found one of the classic stumbling blocks for new Docker developers. This isn't an unRAID issue, rather a non-intuitive specific of the Docker volume/bind-mount system. You understood correctly; however, there are caveats that are not obvious.
     The VOLUME directive in a Dockerfile tells Docker that a volume needs to be mounted at the provided path. When the container is run and no volume flag (-v) is specified, Docker will automatically attach an anonymous volume to that path; alternatively, you can use the -v flag to attach a named volume to your container. These volumes are managed by Docker, stored somewhere in Docker's system files (which on unRAID is in the docker image), and aren't easily accessible from outside. As the documentation suggests, you can copy files to those volumes in your Dockerfile and they will be placed into the volume at runtime.
     The caveat comes when we start talking about bind mounts (which we typically use in unRAID). As an alternative to Docker volumes, you can use the -v flag to tell Docker to mount a host directory in place of a volume (or really in place of any directory in a container). This is what you are doing when you use the path option in unRAID's Docker template. The issue you are seeing is that the files you added to the volume at build time are not made available when a folder is bind-mounted into the container over that directory; this is just how Docker does things. If you were to remove the path mapping in your unRAID Docker template and then try your test, you would be able to see the yaml file, which would be saved in an anonymous volume.
     As to how to achieve your original goal of pre-populating the config folder for your app, your best bet is to add those files to a different folder in your Dockerfile and have a startup script in your container copy them over to your config volume on init if they are not already there.
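     A minimal sketch of that startup-script approach (the /defaults path, file layout, and entrypoint name are illustrative assumptions, not from the original post):

       #!/bin/bash
       # entrypoint.sh -- paired with a Dockerfile that ships the defaults
       # outside the volume path and names this script as the entrypoint:
       #   COPY config-defaults/ /defaults/
       #   ENTRYPOINT ["/entrypoint.sh"]
       # copy any bundled default that does not already exist in /config
       cp -rn /defaults/. /config/
       # then hand off to the app's normal command
       exec "$@"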
    1 point
  7. No, that doesn't fix the problem, but pinning affects all systems, no matter how many cores. Depending on the VM's use case, though, some people may simply not notice it.
    1 point
  8. @Reapox With 6.9 the plugin is no longer needed / no longer works:
    1 point
  9. So, unRaid parity exists only for the array, and there is only one array per unRaid server. The cache is not a disk but a pool, and it is linked not to the array or the disks it contains but to shares. Since 6.9 you can have more than one pool existing alongside the array. These pools use BTRFS, as with the cache, and you can get redundancy per pool, e.g. via a RAID1 or RAID5 configuration. Each pool can act as an individual cache pool for a different share. See: https://wiki.unraid.net/Manual/Storage_Management#Multiple_Pools Edit: As for the "how": you create an array from your HDDs, including parity. Then you create the (SMB) shares, but assign each share only a subset of the HDDs from the array. Then you create as many named (cache) pools from your NVMe/SSDs as you need and assign them to the desired shares. That should let you "build" what you want.
    1 point
  10. Will do, currently in holiday home and not at home.
    1 point
  11. I would like unraid to ask for a confirmation, before stopping a VM. I just accidentally clicked on "stop" instead of "VNC Remote" and killed a process that was about halfway done and takes two weeks to complete.
    1 point
  12. Well, I appreciate your efforts; let's see the outcome in the next release! Thank you for stepping in and trying to help.
    1 point
  13. Even if you use cache: yes, the files will be moved to the array. What exactly is written to your /data mount that you think should go in /config? The more info you provide, the easier it is to help. Also, please post the docker run command.
    1 point
  14. It's how the system works. If you pull an app named "Plex", the template is called "my-plex". If you pull another app named "Plex", it overwrites the existing template. If it were Plex vs PlexMediaServer, there would be no problem. But the important stuff is all within the appdata share, so just install the app you want, set it up again with the appropriate paths etc., and you should be good to go.
    1 point
  15. Hi, thanks for your feedback. Yes, I did install the Unraid Nvidia plugin and it is functional with Plex, for instance. Please let me know what information I can provide to help.
    1 point
  16. No, but according to a few Syno users my script should work there too, just via the Task Scheduler and with the Syno's paths. As far as I know, you can mount the Unraid SMB share via the Finder. Just try it out and let me know where it gets stuck.
    1 point
  17. All of that works. The question is whether you want to pass the M.2 SSD through to the VM as such, or create a virtual disk image on the M.2. The external HDD, in turn, is no problem at all: simply pass it through as a USB device, or access it via a network share. I think this video might interest you:
    1 point
  18. I use docspell: https://docspell.org/ and https://github.com/eikek/docspell Since there are no Unraid templates for it, I created some (for myself), including installation instructions: https://github.com/vakilando/unraid-docker-templates There is a (small) docspell thread in the forum: ...and if you feel something is missing in the docspell thread above... here is how it started: I wouldn't want to be without docspell anymore. The number of supported file types (pdf, doc/x, xls/x, ods, odt, eml, rtf, html) especially won me over.
    1 point
  19. Here you go! https://easyupload.io/zyte8s
    1 point
  20. Hi there, welcome to the forum. First thing to mention is -- good coverage of information, background, issue description, and diagnostics. That's a fantastic start. I haven't dug through every line -- I inspected only the things that I think could be relevant -- but I don't see any hard failures anywhere. That's another good thing. Unfortunately in this case, it also seems to mean that the diagnostics don't contain any log data that could be specifically helpful in tracking this one down. In your information, the only omitted thing that might help chase down a potential source would be share-related. I feel it's safe to assume you're probably using Samba (SMB / Windows client) shares, but if you're not please clarify. With any share type, we would probably need to see log files from the samba (or whichever share service you use) service -- which does expose filenames very broadly to anyone on the forum, which complicates matters if there may be sensitive contents. Perhaps a self-scan could come up with some error messages you could paste selectively, if nothing else. My first impulse is to check the forum specifically for the Permissions plugin you're using, likely the Fix Common Problems plugin, as well as any logs provided by that plugin itself. If it has issues setting permissions, it may help to see where and why.
    1 point
  21. Yes. I can't remember why I had the array stopped (I was doing something) and had to reboot. I made sure the array was set to not start automatically, and when that happened I noticed I had 2 cores at 100%. Currently it's 1 core, and if I reboot, it picks a different core each time.
    1 point
  22. Good morning, I had exactly the same problem. I now use paperless-ng. Have a look here in the Community Applications: https://github.com/jonaswinkler/paperless-ng Everything that comes in now is scanned, automatically captured by paperless, and immediately findable via the search function. Email integration and more is all possible. No more hunting for anything 👍 Best regards and good luck, Averall
    1 point
  23. It will be available on Apple/Overcast soon. Currently available on Spotify/Stitcher/Amazon.
    1 point
  24. Just would like to be able to listen on Downcast or anything but Spotify
    1 point
  25. Yes. The 2 parity drives have no connection; totally separate math equations are used. The only rule is that no data drive can be larger than either parity drive. The data remains available throughout the rebuild process; you don't have to wait during a rebuild. The array is fully available during parity checks, too, but some systems don't have enough resources to keep playback seamless during checks. Any access is shared, so using the array while things are processing will slow everything down.
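     For the curious, a sketch of the standard RAID-6-style equations that dual parity follows (my summary, assuming the usual P/Q scheme: the D_i are the data disks, g is a generator of GF(2^8), and addition is XOR):

       P = D_1 \oplus D_2 \oplus \cdots \oplus D_n
       Q = g^0 D_1 \oplus g^1 D_2 \oplus \cdots \oplus g^{n-1} D_n

     Losing any two disks leaves a solvable pair of equations, which is why the two parity drives need no connection to each other.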
    1 point
  26. The moment you do New Config you invalidate parity and thus have to rebuild parity from the remaining disks. On that basis you might as well immediately after the New Config put all the disks into the configuration you want to end up with and then build parity based on that configuration.
    1 point
  27. Shortly after I came to unRaid, I became aware of TGF's channel, probably thanks to the YouTube algorithm. 😉 In my opinion the videos make people aware of, and curious about, unRaid and its features. I then got the in-depth info elsewhere, but I think that is exactly the intent. The videos are meant to entertain. And that they do.
    1 point
  28. No, that's not what I mean. I mean this: you reach that page by clicking the gear icon next to the CPU on the Dashboard. In my example the last 12c/24t are isolated from Unraid, i.e. Unraid is not allowed to use them. This prevents the VMs from briefly freezing or stuttering and the like, and can also boost VM performance considerably, though it varies... On the same page, at the top, you can see the VMs' pinning. I generally recommend not doing the pinning for the VMs on this page, since VM settings can get lost; edit via the XML instead. Here is my example:

       <vcpu placement='static'>16</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='8'/>
         <vcpupin vcpu='1' cpuset='24'/>
         <vcpupin vcpu='2' cpuset='9'/>
         <vcpupin vcpu='3' cpuset='25'/>
         <vcpupin vcpu='4' cpuset='10'/>
         <vcpupin vcpu='5' cpuset='26'/>
         <vcpupin vcpu='6' cpuset='11'/>
         <vcpupin vcpu='7' cpuset='27'/>
         <vcpupin vcpu='8' cpuset='12'/>
         <vcpupin vcpu='9' cpuset='28'/>
         <vcpupin vcpu='10' cpuset='13'/>
         <vcpupin vcpu='11' cpuset='29'/>
         <vcpupin vcpu='12' cpuset='14'/>
         <vcpupin vcpu='13' cpuset='30'/>
         <vcpupin vcpu='14' cpuset='15'/>
         <vcpupin vcpu='15' cpuset='31'/>
       </cputune>
    1 point
  29. Personally, and I do mean personally, I think podcasts should be a mixed bag of topics: possibly reviews of plugins, Dockers, future content, and so on. I think if you stick with one particular subject you might entertain some and bore others.
    1 point
  30. No, they are automatically installed at boot from this dir.
    1 point
  31. OK, I got it working. If anybody else has this issue, this is my config. I took out my server's IP, but just put in your server's local IP and this should work if you're using the SWAG docker to access this Remotely server:

       server {
           listen 443;
           server_name remotely.*;

           location / {
               proxy_pass http://xxx.xxx.xxx.xxx:9280;
               proxy_http_version 1.1;
               proxy_set_header Upgrade $http_upgrade;
               proxy_set_header Connection keep-alive;
               proxy_set_header Host $host;
               proxy_cache_bypass $http_upgrade;
               proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
               proxy_set_header X-Forwarded-Proto $scheme;
           }

           location /_blazor {
               proxy_pass http://xxx.xxx.xxx.xxx:9280;
               proxy_http_version 1.1;
               proxy_set_header Upgrade $http_upgrade;
               proxy_set_header Connection "upgrade";
               proxy_set_header Host $host;
               proxy_cache_bypass $http_upgrade;
               proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
               proxy_set_header X-Forwarded-Proto $scheme;
           }

           location /AgentHub {
               proxy_pass http://xxx.xxx.xxx.xxx:9280;
               proxy_http_version 1.1;
               proxy_set_header Upgrade $http_upgrade;
               proxy_set_header Connection "upgrade";
               proxy_set_header Host $host;
               proxy_cache_bypass $http_upgrade;
               proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
               proxy_set_header X-Forwarded-Proto $scheme;
           }

           location /ViewerHub {
               proxy_pass http://xxx.xxx.xxx.xxx:9280;
               proxy_http_version 1.1;
               proxy_set_header Upgrade $http_upgrade;
               proxy_set_header Connection "upgrade";
               proxy_set_header Host $host;
               proxy_cache_bypass $http_upgrade;
               proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
               proxy_set_header X-Forwarded-Proto $scheme;
           }

           location /CasterHub {
               proxy_pass http://xxx.xxx.xxx.xxx:9280;
               proxy_http_version 1.1;
               proxy_set_header Upgrade $http_upgrade;
               proxy_set_header Connection "upgrade";
               proxy_set_header Host $host;
               proxy_cache_bypass $http_upgrade;
               proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
               proxy_set_header X-Forwarded-Proto $scheme;
           }
       }
    1 point
  32. I went the same route and had some problems. Did you configure these ports in the binhex-delugevpn container config? I missed this and was struggling until I had read and fully understood Q24-27 in binhex's excellent guide: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
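     For reference, a fragment of what those FAQ entries boil down to (a sketch only; port 8080 is an example, and a real run command needs the usual remaining template options):

       # VPN_INPUT_PORTS: LAN-facing ports of other containers routed through this one
       # VPN_OUTPUT_PORTS: ports those containers need for outbound traffic
       docker run -d --name binhex-delugevpn \
         -e VPN_INPUT_PORTS=8080 \
         -e VPN_OUTPUT_PORTS=8080 \
         binhex/arch-delugevpn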
    1 point
  33. As always, I (Alex) give my best. Tastes differ, and I'm aware that plenty of people can't stand my style. Still, I appreciate the feedback! Maybe there's something in there for one person or another after all; if not, luckily there's still SpaceInvaderOne, so nobody has to sit through my antics. 😀
    1 point
  34. You just want the 9211 to be flashed to IT mode.
    1 point
  35. We will be making a bigger announcement soon, but for now 🔊
    1 point
  36. OK, just wanted to make sure, since some users expect replacing cables/controllers to fix a disabled disk. The first diags show a successful rebuild, the second ones show a rebuild in progress. There are some ATA errors, though. You say it's always the same disk that gets disabled, even if you connect it to a different controller?
    1 point
  37. Some people supplement PiHole with Unbound to resolve domains directly via the registrar. That may well be secure, but the performance hit of 100 to 500 ms per domain, and that every 24 hours, would be unacceptable to me. I'm now wondering whether there is some kind of "double DNS check": my local DNS server would randomly pick two servers from a list of DNS servers I maintain, have both resolve the domain, and only query a third if the results differ. That way my privacy would improve, since only a small share of my queries lands at any one DNS server, and with multi-threading there wouldn't even be a speed penalty. I would also like the DNS server to automatically re-query, when the TTL expires, all domains that were accessed in the last week. That way the timing of my access would be decoupled from the timing of the resolution, and even more resolutions would come straight from the local cache. Has anyone heard of such a concept?
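     A rough sketch of what that could look like with dig (purely illustrative; the resolver list and the tie-break logic are my assumptions, not an existing tool):

       #!/bin/bash
       # "double DNS check": resolve via two randomly chosen servers and
       # only consult a third if their answers disagree
       RESOLVERS=(9.9.9.9 1.1.1.1 8.8.8.8 208.67.222.222)
       domain="$1"
       mapfile -t pick < <(shuf -e "${RESOLVERS[@]}" | head -n 3)
       a=$(dig +short @"${pick[0]}" "$domain" | sort)
       b=$(dig +short @"${pick[1]}" "$domain" | sort)
       if [ "$a" = "$b" ]; then
           echo "$a"
       else
           # mismatch: the third server acts as tie-breaker
           dig +short @"${pick[2]}" "$domain" | sort
       fi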
    1 point
  38. Truth. I have yet to see a server-type system "feel" faster with that sort of overclocking. Synthetic benchmarks may show small improvements, but nothing that actually affects real-world loads significantly. I HAVE seen timing issues with XMP cause micro-stutters and brief freezes even when it didn't outright crash.
    1 point
  39. What does your System Devices page on the Tools tab look like? The GPU should be ticked; this is my K4000.
    1 point
  40. EDIT: There is a workaround I found on the Git repository. Basically the author of the speedtest docker needs to rebuild it. Until then, you can rebuild it yourself and point it to your own local repository. See this link for instructions. Same issue as the others have reported regarding the SpeedTest docker. It seems one of the parameters no longer allows NULL values? Debug logging output:

       Loading Configuration File config.ini
       Configuration Successfully Loaded
       2021-04-19 16:00:09,787 - DEBUG: Testing connection to InfluxDb using provided credentials
       2021-04-19 16:00:09,789 - DEBUG: Successful connection to InfluxDb
       2021-04-19 16:00:09,789 - INFO: Starting Speed Test For Server None
       2021-04-19 16:00:09,797 - DEBUG: Setting up SpeedTest.net client
       Traceback (most recent call last):
         File "/src/influxspeedtest.py", line 8, in <module>
           collector.run()
         File "/src/influxspeedtest/InfluxdbSpeedtest.py", line 171, in run
           self.run_speed_test()
         File "/src/influxspeedtest/InfluxdbSpeedtest.py", line 119, in run_speed_test
           self.setup_speedtest(server)
         File "/src/influxspeedtest/InfluxdbSpeedtest.py", line 71, in setup_speedtest
           self.speedtest = speedtest.Speedtest()
         File "/usr/local/lib/python3.7/site-packages/speedtest.py", line 1091, in __init__
           self.get_config()
         File "/usr/local/lib/python3.7/site-packages/speedtest.py", line 1174, in get_config
           map(int, server_config['ignoreids'].split(','))
       ValueError: invalid literal for int() with base 10: ''
    1 point
  41. 1 point
  42. I have had the same issue for a few days:

       Loading Configuration File config.ini
       Configuration Successfully Loaded
       2021-04-10 08:57:21,453 - INFO: Starting Speed Test For Server None
       Traceback (most recent call last):
         File "/src/influxspeedtest.py", line 8, in <module>
           collector.run()
         File "/src/influxspeedtest/InfluxdbSpeedtest.py", line 171, in run
           self.run_speed_test()
         File "/src/influxspeedtest/InfluxdbSpeedtest.py", line 119, in run_speed_test
           self.setup_speedtest(server)
         File "/src/influxspeedtest/InfluxdbSpeedtest.py", line 71, in setup_speedtest
           self.speedtest = speedtest.Speedtest()
         File "/usr/local/lib/python3.7/site-packages/speedtest.py", line 1091, in __init__
           self.get_config()
         File "/usr/local/lib/python3.7/site-packages/speedtest.py", line 1174, in get_config
           map(int, server_config['ignoreids'].split(','))
       ValueError: invalid literal for int() with base 10: ''
    1 point
  43. Does anyone know where to find instructions on how to get this working with ASRock motherboards? I noticed that it says there needs to be a kernel patch, and that is above my knowledge 😄 so if anyone has an easy idiot's guide to get this working, please let me know. I'd like to get the RGBs on my memory modules and CPU fan working right.
    1 point
  44. From github https://github.com/shirosaidev/diskover
     Requirements: Elasticsearch 5.6.x (local or AWS ES Service); Elasticsearch 6 not supported, ES 7 supported in Enterprise version. Redis 4.x.
     Working steps (if you do anything wrong, remove the docker and remove the docker's config folder in appdata; you can keep the docker image to avoid downloading it again):
     0. Install redis from Apps (jj9987's Repository); no config needed.
     1. Install the CA User Scripts plugin. Create a new script named vm.max_map_count, navigate to \flash\config\plugins\user.scripts\scripts\vm.max_map_count, open the 'description' file and write a readable description of what this script does, then open the 'script' file; the contents of the script are as follows:

       #!/bin/bash
       sysctl -w vm.max_map_count=262144

     Set the script schedule to At Startup of Array and run the script once. Navigate to the "Docker" tab and then the "Docker Repositories" sub-tab in the unRAID webui, enter a URL of https://github.com/OFark/docker-templates in the "Template repositories" field, and click on the "Save" button. Click back to the "Docker" tab, click on the "Add Container" button, click on the "Template" dropdown menu, and select the Elasticsearch5 image. Use the pre-config; no change needed.
     2. Go to Apps, find diskover, and click install. Put in the IPs of the redis and elastic servers, which should be your unraid IP, not 127.0.0.1 or localhost. ES_USER: elastic, ES_PASS: changeme. Change the appdata path to /mnt/cache/appdata/diskover/. For the data path I use /mnt/user/, which is going to index everything from the user shares. The webgui port I changed to 8081 because I have qBittorrent on 8080. Add a new variable, DISKOVER_AUTH_TOKEN; the value is from https://github.com/shirosaidev/diskover/wiki/Auth-token
     Click start, and you should be good to go with the webui of diskover; select the 1st indice and happy searching. It might take half a minute for the 1st indice to appear. For the whole process, you do not seem to need to change any folder/file permissions.
     One problem I had: while the file index got to 94.5% it was stuck there for hours, so I had to delete the 3 dockers and do it again; this time it reached 100% and seems to be OK. But this also means this setup can have problems like stuck indexing sometimes. OFark's docker template uses Elasticsearch 5, which might be a bit old for the current version of diskover, or running from docker caused this. OFark's docker image is a preconfigured working one. If anyone has time, maybe try to build a version 6 or 7 docker image to work with the current version of diskover.
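     As a quick sanity check that Elasticsearch is reachable before starting diskover, something like this should work (assuming the template leaves ES on its default port 9200; the credentials are the ES_USER/ES_PASS above):

       # replace UNRAID_IP with your server's address
       curl -u elastic:changeme http://UNRAID_IP:9200/_cluster/health?pretty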
    1 point
  45. Well, after a lot of googling I can answer my own question. For me, this did the trick. To my domain tag, add xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0' so it looks like this:

       <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

     And at the end of the domain tag (at the bottom of the xml), add this:

       <qemu:commandline>
         <qemu:arg value='-cpu'/>
         <qemu:arg value='host,kvm=off,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_vendor_id=Microsoft'/>
       </qemu:commandline>

     After adding this I get the following results in AS SSD: very close to bare-metal performance!! And my VM is blazing fast! I can't tell whether I'm running in a VM or on bare metal! I get a warning in the logs, "This family of AMD CPU does not support hyperthreading(2)", but it does not seem to have an effect. I have also tweaked the following tag:

       <clock offset='localtime'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='yes'/>
       </clock>

     where I changed hpet to yes to bring the CPU down a bit at idle. Hope this can help anyone else who has performance issues in a VM where the VM feels slow and sluggish. It's like night and day here now.
    1 point
  46. Yeah, the secret sauce here is that not only do you need to change the host-side port, you also have to change the container-side port to match. This is a bit more tricky with the unRAID web UI, as unRAID rightly stops users from doing this: you normally don't want to change the container-side port, but for this particular container we have to. So in order to get around this, edit the container and use the link down the bottom shown as "Add another Path, Port, Variable, Label or Device", select "Config Type" as "Port", put in the port number you want for both the "Container Port" and "Host Port", and click "Add". Now that you have added the additional port, you can remove the old port, as it would otherwise clash on the host port if you have already altered it. Remember: make sure to set the webui_port to the same value, then click on the "Apply" button to apply the changes. Finally, open up your favourite browser and point it at http://<host ip address>:<port of your choice>
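     The docker-CLI equivalent of that template change looks something like this sketch (the image name and port 8118 are examples; WEBUI_PORT is the variable binhex-style images use for the webui_port setting mentioned above):

       # host port and container port must match, and the webui must be
       # told to listen on that same port (8118 is only an example)
       docker run -d --name qbittorrent \
         -p 8118:8118 \
         -e WEBUI_PORT=8118 \
         binhex/arch-qbittorrentvpn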
    1 point
  47. Might be to you, but I use them quite a bit. So, not useless. Ditto ... these are definitely useful.
    1 point
  48. Might be to you, but I use them quite a bit. So, not useless.
    1 point