Leaderboard

Popular Content

Showing content with the highest reputation on 05/06/21 in all areas

  1. This is resolved. It was an issue with RAM. The new memory is in and the server is running smoothly and happily again.
    2 points
  2. License is only validated at array start, so as long as you don't stop the array you should be fine to wait until normal USA west coast business hours. @SpencerJ
    2 points
  3. You should be able to just put the RSS URL in your podcast app. Works for me. https://feeds.buzzsprout.com/1746902.rss
    2 points
  4. Did you make sure the Docker and VM services were disabled while you were waiting for the System share to be moved? The services hold files open, which stops mover from taking action on them.
    1 point
  5. I believe the issue is that, in 6.9.2 and earlier, the mover only operates between a pool and the main array. So, to move from one pool to another, you will need to set cache:yes on the share, make sure there are no open files, and run the mover. Then, once all the files are on the main array, you can set cache:prefer to the pool you want to use and run the mover again. Alternatively, you could move the files manually using mc or something like that.
    1 point
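The manual-move alternative mentioned above can be sketched as shell commands. This is only an illustration: the pool names (cache, newpool) and the share name are invented examples, and the post itself suggests mc rather than rsync.

```shell
# Move a share's files from one pool to another by hand, bypassing mover.
# Stop the Docker and VM services first so nothing holds the files open.
rsync -avh --remove-source-files /mnt/cache/myshare/ /mnt/newpool/myshare/

# rsync leaves the now-empty directory tree behind on the source; clean it up:
find /mnt/cache/myshare -type d -empty -delete
```

Because unRAID presents every pool under /mnt, a plain file-level copy like this is equivalent to what mover would do, just without the cache:yes/cache:prefer round trip through the main array.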
  6. Ah yes, you have found one of the classic stumbling blocks for new Docker developers. This isn't an unRAID issue but rather a non-intuitive specific of the Docker volume/bind-mount system. You understood correctly; however, there are caveats that are not obvious. The VOLUME directive in a Dockerfile tells Docker that a volume needs to be mounted at the provided path. When the container is run, if no volume flag (-v) is specified, Docker will automatically attach an anonymous volume to that path; alternatively, you can use the -v flag to attach a named volume to your container. These volumes are m
    1 point
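The VOLUME behaviour described above can be sketched like this; the image and path names are invented for illustration, not taken from the post:

```shell
# Suppose a hypothetical image "myimage" was built from a Dockerfile containing:
#   FROM alpine
#   VOLUME /data

# No -v flag: Docker silently attaches an *anonymous* volume at /data.
docker run -d --name demo1 myimage

# With a named volume, the data is easy to find again and reuse:
docker run -d --name demo2 -v mydata:/data myimage

# Or bind-mount a host path, the usual pattern on unRAID:
docker run -d --name demo3 -v /mnt/user/appdata/demo:/data myimage

# Inspect which mount actually got attached to a container:
docker inspect -f '{{ .Mounts }}' demo1
```

The surprise is the first case: data written to /data lands in an unnamed volume under Docker's own storage, not in the container layer and not anywhere you chose.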
  7. No, that doesn't fix the problem, but pinning affects all systems, no matter how many cores. Depending on the VM's use case, though, one person or another may simply not notice it.
    1 point
  8. Having the same issue getting mt to work. I have it in my JSON, and I tried passing through an mt variable on the Docker template. Using a P2000.
    1 point
  9. @Reapox With 6.9 the plugin is no longer needed / no longer works:
    1 point
  10. Also, the unRaid parity exists only for the array, and there is only one array per unRaid server. The cache is not a disk but a pool, and it is tied to shares, not to the array or the disks it contains. Since 6.9 you can have more than one pool existing alongside the array. These pools, like the cache, use BTRFS, and you can get redundancy per pool, e.g. with a RAID1 or RAID5 configuration. Each pool can act as an individual cache pool for a different share. See: https://wiki.unraid.net/Manual/Storage_Manage
    1 point
  11. I would like Unraid to ask for confirmation before stopping a VM. I just accidentally clicked "Stop" instead of "VNC Remote" and killed a process that was about halfway done and takes two weeks to complete.
    1 point
  12. Well, I appreciate your efforts; let's see the outcome in the next release! Thank you for stepping in and trying to help.
    1 point
  13. Even if you use cache: yes, the files will be moved to the array. What exactly is it that is written in your /data mount that you mean should go in /config? The more info you provide, the easier it is to help. And also post the docker run command.
    1 point
  14. It's how the system works. If you pull an app named "Plex", the template is called "my-plex". If you pull another app named "Plex", it overwrites the existing template. If it were Plex vs PlexMediaServer, there would be no problem. But the important stuff is all within the appdata share, so just install the app you want, set it up again with the appropriate paths etc., and you should be good to go.
    1 point
  15. Bingo! Try this from the mariadb console: "mysqld --tc-heuristic-recover rollback"
    1 point
  16. Hi, thanks for your feedback. Yes, I did install the Unraid Nvidia plugin and it is functional with Plex, for instance. Please let me know about any information I could help you with.
    1 point
  17. It all works. The question is whether you want to pass the M.2 SSD through to the VM as such, or create a virtual disk image on the M.2. The external HDD, in turn, is no problem at all: simply pass it through as a USB device, or access it via a network share. I think this video might interest you:
    1 point
  18. Here you go! https://easyupload.io/zyte8s
    1 point
  19. Hi there, welcome to the forum. First thing to mention is -- good coverage of information, background, issue description, and diagnostics. That's a fantastic start. I haven't dug through every line -- I inspected only the things that I think could be relevant -- but I don't see any hard failures anywhere. That's another good thing. Unfortunately in this case, it also seems to mean that the diagnostics don't contain any log data that could be specifically helpful in tracking this one down. In your information, the only omitted thing that might help chase down
    1 point
  20. Yes. I can't remember why I had the array stopped (I was doing something) and had to reboot. I made sure I set the array not to start automatically, and when that happened I noticed I had 2 cores at 100%. Currently it's 1 core, and if I do reboot, it chooses a different core on reboot.
    1 point
  21. Good morning, I had exactly the same problem. I now use paperless-ng. Have a look here in the Community Applications: https://github.com/jonaswinkler/paperless-ng Everything that comes in now gets scanned, is automatically captured by paperless, and is immediately findable via the search function. Email integration and more is possible. No more hunting for anything 👍 Best regards and good luck, Averall
    1 point
  22. Yeah, except the line HAS to be in share.cfg for mover to work. Hence my point that any change in Global Share Settings should also fix it up. It's probably because v4 was the starting point for this flash device, and the setting never got enabled over the years.
    1 point
  23. It will be available on Apple/Overcast soon. Currently available on Spotify/Stitcher/Amazon.
    1 point
  24. Yes. The 2 parity drives have no connection, totally separate math equations are used. The only rule is no data drive can be larger than either parity slot. The data is continuously available through the rebuild process, you don't have to wait during a rebuild. The array is fully available during parity checks, but some systems don't have enough resources to keep playback seamless during checks. Any access is shared, so using the array while things are processing will slow down all the things.
    1 point
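The "totally separate math equations" point above can be made concrete with a toy shell sketch. First parity (P) is a plain XOR, so any single lost value can be rebuilt by XOR-ing P with the survivors; the second parity (Q) uses different, Galois-field math that is beyond a shell one-liner. The byte values here are made up for illustration.

```shell
# One byte from each of three data drives (invented values):
d0=23; d1=142; d2=77

# P parity byte: XOR across all data drives.
p=$(( d0 ^ d1 ^ d2 ))

# Drive 1 dies; rebuild its byte from P and the surviving drives:
rebuilt=$(( p ^ d0 ^ d2 ))
echo "$rebuilt"    # prints 142, the lost value
```

Because XOR is its own inverse, the rebuild works for any single missing drive, which is also why every data byte must have a corresponding parity byte: no data drive can be larger than the parity slot.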
  25. The moment you do New Config you invalidate parity and thus have to rebuild parity from the remaining disks. On that basis you might as well immediately after the New Config put all the disks into the configuration you want to end up with and then build parity based on that configuration.
    1 point
  26. Shortly after I came to unRaid, I became aware of TGF's channel, probably thanks to the YouTube algorithm. 😉 In my opinion, the videos draw attention to unRaid and its features and make people curious. I then got the in-depth information elsewhere. But I think that is exactly what is intended: the videos are meant to entertain. And they do.
    1 point
  27. No, that's not what I mean. I mean this: you reach that page by clicking the gear icon next to the CPU on the Dashboard. In my example, the last 12c/24t are isolated from Unraid, i.e. Unraid is not allowed to use them. That way you avoid the VMs briefly freezing or stuttering and the like. It can also improve VM performance, sometimes considerably, though it varies... On the same page, at the top, you can see the pinning of the VMs: as a rule, I do not recommend doing the pinning for the VMs here on this page
    1 point
  28. Personally, and I do mean personally, I think podcasts should be a mixed bag of topics: possibly reviews of plugins, Dockers, future content, and so on. I think if you stick with one particular subject you might entertain some and bore others.
    1 point
  29. No they are automatically installed at boot from this dir.
    1 point
  30. 1 point
  31. OK, I got it working. If anybody else has this issue, this is my config. I took out my server's IP, but just input your server's local IP and this should work if you are using the swag docker to access this remote server:
      {
          listen 443;
          server_name remotely.*;
          location / {
              proxy_pass http://xxx.xxx.xxx.xxx:9280;
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection keep-alive;
              proxy_set_header Host $host;
              proxy_cache_bypass $http_upgrade;
              p
    1 point
  32. Do you have any non-HP PCIe cards installed, e.g. HBA? I had this same problem with a DL380 G6 (essentially the same server); worked fine with HP cards (although fans would understandably ramp up as more cards were installed) but if you put any non-HP card (Dell H310 in my case), the fans went to full speed - even if that was the only card installed.
    1 point
  33. I have been going the same route and had some problems. Did you configure these ports in the binhex-delugevpn container config? I missed this and was struggling until I read and fully understood Q24-27 in binhex's excellent guide: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
    1 point
  34. You just want the 9211 to be flashed to IT mode.
    1 point
  35. We will be making a bigger announcement soon, but for now 🔊
    1 point
  36. Some people add Unbound to PiHole in order to resolve domains directly via the registrar. That may well be secure, but the performance of 100 to 500 ms depending on the domain, and that every 24 hours, would be unacceptable for me. I now wonder whether there is also a kind of "double DNS check": my local DNS server would randomly pick two servers from a list of domain servers I have put together, have both of them resolve the domain, and only query a third one if the answers differ. That way my privacy would be improved, since only a smal
    1 point
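The "double DNS check" idea above can be sketched as a small shell function. The resolvers here are stub functions that just echo fixed answers; a real version would call something like `dig @server domain +short` per upstream. All names and IPs are invented for the sketch.

```shell
# Stub resolvers standing in for real upstream DNS servers:
honest_a() { echo "93.184.216.34"; }
honest_b() { echo "93.184.216.34"; }
liar()     { echo "0.0.0.0"; }

consensus_lookup() {
    # Pick two distinct resolvers at random from the list of three.
    resolvers=(honest_a honest_b liar)
    i=$(( RANDOM % 3 ))
    j=$(( (i + 1 + RANDOM % 2) % 3 ))
    a=$(${resolvers[$i]} "$1")
    b=$(${resolvers[$j]} "$1")
    if [ "$a" = "$b" ]; then
        echo "$a"
        return
    fi
    # Disagreement: ask the remaining third resolver to break the tie.
    k=$(( 3 - i - j ))
    c=$(${resolvers[$k]} "$1")
    if [ "$c" = "$a" ] || [ "$c" = "$b" ]; then
        echo "$c"
    else
        echo "NO-CONSENSUS"
    fi
}

consensus_lookup example.com
```

With two of the three stubs honest, the majority answer always wins, which is the privacy/safety trade the post is after: most queries cost two lookups, and a third is paid only on disagreement.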
  37. Truth. I have yet to see a server-type system "feel" faster with that sort of overclocking. Synthetic benchmarks may show small improvements, but nothing that actually affects real-world loads significantly. I HAVE seen timing issues with XMP cause micro-stutters and brief freezing even if it didn't outright crash.
    1 point
  38. What does your system devices on tools tab look like? GPU should be ticked, this is my K4000.
    1 point
  39. Because of a weird speed issue where I now need to spin up my array before running parity checks etc., and my personal opinions about php, I had to take up the bash challenge (this might even be sh compatible). Only did the spin-them-all-up one though.
      #!/bin/bash
      . /usr/local/emhttp/state/var.ini
      curl --unix-socket /var/run/emhttpd.socket \
          --data-urlencode cmdSpinupAll=apply \
          --data-urlencode startState=$mdState \
          --data-urlencode csrf_token=$csrf_token \
          http://127.0.0.1/update
    1 point
  40. EDIT: There is a workaround I found on the Git repository. Basically the author of the speedtest docker needs to rebuild it. Until then, you can rebuild it yourself and point it to your own local repository. See this link for instructions. Same issue as the others have reported regarding the SpeedTest docker. Seems one of the parameters no longer allows NULL values? Debug Logging Output:
      Loading Configuration File config.ini
      Configuration Successfully Loaded
      2021-04-19 16:00:09,787 - DEBUG: Testing connection to InfluxDb using provided crede
    1 point
  41. 1 point
  42. I have had the same issue for a few days:
      Loading Configuration File config.ini
      Configuration Successfully Loaded
      2021-04-10 08:57:21,453 - INFO: Starting Speed Test For Server None
      Traceback (most recent call last):
        File "/src/influxspeedtest.py", line 8, in <module>
          collector.run()
        File "/src/influxspeedtest/InfluxdbSpeedtest.py", line 171, in run
          self.run_speed_test()
        File "/src/influxspeedtest/InfluxdbSpeedtest.py", line 119, in run_speed_test
          self.setup_speedtest(server)
        File "/src/influxspeedtest/InfluxdbSpeedtest.py", line 71, in setup_speedtest
          self.speedtest = speedtes
    1 point
  43. Anyone know where to find instructions on how to get this working with the ASRock motherboards? I noticed that it says there needs to be a kernel patch and that is above my knowledge 😄 so if anyone has an easy idiots guide to get this working please let me know. I'd like to get the RGBs on my memory modules, and CPU fan working right.
    1 point
  44. From github: https://github.com/shirosaidev/diskover
      Requirements: Elasticsearch 5.6.x (local or AWS ES Service); Elasticsearch 6 is not supported, ES 7 is supported in the Enterprise version; Redis 4.x.
      Working steps (if you do anything wrong, remove the docker and remove the docker's config folder in appdata, but you can keep the docker image to avoid downloading it again):
      0. Install redis from Apps (jj9987's Repository); no config needed.
      1. Install the CA User Scripts plugin. Create a new script named vm.max_map_count, navigate to \flash\config\plugins\user
    1 point
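The vm.max_map_count script in step 1 above typically just raises the kernel limit Elasticsearch requires at startup. The value below is the one commonly recommended in Elasticsearch's documentation, not something stated in the post:

```shell
#!/bin/bash
# Raise the max number of memory map areas per process.
# Elasticsearch refuses to start when this limit is at the default 65530.
sysctl -w vm.max_map_count=262144
```

Set the User Scripts schedule to "At First Array Start Only" (or similar) so the setting is re-applied after every reboot, since sysctl changes do not persist on unRAID's RAM-based root.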
  45. Well, after a lot of googling I can answer my own question. For me, this did the trick: to my domain tag, add xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0' so it looks like this:
      <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      And at the end of the domain tag (at the bottom of the XML), add this:
      <qemu:commandline>
        <qemu:arg value='-cpu'/>
        <qemu:arg value='host,kvm=off,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_vendor_id=Microsoft'/>
      </qemu:commandline>
      After adding this I get
    1 point
  46. Yeah, the secret sauce here is that not only do you need to change the host-side port, you also have to change the container-side port to match. This is a bit more tricky with the unRAID web UI, as unRAID rightly stops users from doing this, since you normally don't want to change the container-side port, but for this particular container we have to. To get around this, edit the container, then use the link at the bottom shown as "Add another Path, Port, Variable, Label or Device", select "Config Type" as "Port", then put in the port number you want for bo
    1 point
  47. Might be to you, but I use them quite a bit. So, not useless. Ditto ... these are definitely useful.
    1 point
  48. Might be to you, but I use them quite a bit. So, not useless.
    1 point