Leaderboard

Popular Content

Showing content with the highest reputation on 11/29/22 in all areas

  1. To slightly clarify wgstarks' point: you're not going to restore from backup. Your files, settings and so on will stay exactly where they are and a fresh copy of the operating system will be installed around them. If you wish, you can choose to wipe your whole disk (and thus need to restore from backups), but you'd need to actively select that option.
    2 points
  2. Hey folks, you'll want to head over to the Plugins tab and check for updates: My Servers plugin version 2022.11.29.0742 is now available. We've made another round of stability improvements to the My Servers Client as well as to Flash Backup. Speaking of flash backup, we are running a regular process to delete old cloud backups from our server. So far we are being cautious so as not to delete something that you might still want, but the Settings -> My Servers page will now show you if you have files on our server and give you the option to remove them yourself (assuming you have disabled flash backup) before our automated systems do.

Troubleshooting flash backup:
* We tweaked our firewall detection routine, so there should be no more false positives with "Unable to connect to backup.unraid.net:22". If you still see that message, you need to investigate which firewall is preventing your server from making an SSH connection to backup.unraid.net on port 22. For instance, Ubiquiti flags this communication as an "Attempted Information Leak", and you'll want to ignore that.
* If you see "DNS is unable to resolve backup.unraid.net", go to Settings -> Network Settings and set your DNS server to a non-blocking DNS provider like 8.8.8.8 or 1.1.1.1.
* If you see a "failed to sync" error message, or anything saying "error" or "fatal", choose the "Deactivate" option (and delete from remote), then "Activate" to start fresh.

## 2022.11.29.0742

### This version resolves:
- several corner cases that could have prevented a successful install
- no array size shown on dashboard after a reboot
- multiple Unraid API instances running at once

### This version adds:
- ability to delete your flash backup files from the cloud
- do not suppress the underlying problem when flash backup is rate limited
- reset flash backup errors when plugin is updated
- remove flash backup "reinitialize" option; it is replaced by deactivate/activate
- new method of detecting flash backup firewall issues
    2 points
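The two troubleshooting checks above can be run by hand from the Unraid terminal. A hedged sketch: the hostname and port come from the error messages quoted in the post; getent and nc are standard tools, but this is a plain diagnostic, not an official Unraid procedure.

```shell
# Check DNS resolution and then SSH reachability for the flash
# backup server. Hostname/port are from the error messages above.
host=backup.unraid.net
if getent hosts "$host" >/dev/null 2>&1; then
  echo "DNS OK for $host"
else
  echo "DNS is unable to resolve $host -> try 8.8.8.8 or 1.1.1.1 in Settings -> Network Settings"
fi
if nc -z -w 5 "$host" 22 2>/dev/null; then
  echo "ssh port 22 reachable"
else
  echo "port 22 blocked -> look for a firewall/IPS rule (e.g. Ubiquiti's 'Attempted Information Leak')"
fi
```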
  3. Well, we've discussed this a couple of times already in this topic, and it seems there is no single fix for everyone. What I've done is add to my mount script: --uid 99 --gid 100. For --umask I use 002 (I think DZMM uses 000, which allows read and write to everyone and which I find too insecure), but that's your own decision. I rebooted my server without the mount script active, so just a plain boot without mounting. Then I ran the fix permissions on both my mount_rclone and local folders. Then you can check whether the permissions of these folders are properly set. If that is the case, you can run the mount script, and then check again. After I did this once, I never had the issue again.
    2 points
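A minimal sketch of a mount command with those flags, assuming an rclone mount like the ones discussed in this thread. The remote name "gdrive:" and the mount point are placeholders; --uid 99 / --gid 100 map to Unraid's nobody:users, and --umask 002 keeps group write while denying it to "other" (unlike 000, which is world-writable).

```shell
# Illustrative only: guarded so it is a no-op where rclone is absent.
command -v rclone >/dev/null \
  && rclone mount gdrive: /mnt/user/mount_rclone/gdrive \
       --uid 99 --gid 100 --umask 002 --allow-other --daemon \
  || echo "rclone not installed - command shown for illustration"

# umask 002 yields 775 directories and 664 files:
printf 'dirs=%o files=%o\n' $((0777 & ~002)) $((0666 & ~002))
```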
  4. The negatives of the small case are:
- restrictive air flow that will inevitably affect HD temps
- very limited expansion possibilities
- the requirement for the usually more expensive mini-ITX motherboards, which normally come with no more than 4 SATA ports (and those that do come with 6 are impossible to find)

A better choice would be a spacious, NAS-friendly case, e.g. the Fractal Node 804, which features ample cooling, accommodates cheaper mATX motherboards (many of which have 6 or even more SATA ports), and is designed to hold at least 8 HDs. It would also be better to start with just 2 HDs (parity and data) and get the biggest ones you can afford. Multiple smallish HDs are not desirable: they consume extra electricity, use up limited SATA ports, and increase the number of potential points of failure. An HBA card would add another device using a non-trivial amount of electricity. If from the get-go you structure your system with bigger drives and at least 6 SATA ports, you probably won't need to consider an HBA for a long while. Any modern Intel i3 chip with an iGPU will serve your needs just fine.
    2 points
  5. Hi everyone, sorry for my absence here. I appreciate that it has not been working since 6.11. I will produce a final update to fix the issue at some point, but I don't have much time at the moment with other projects and my job. Then I will deprecate it, as I no longer have the time to work on this. I won't remove it from the app store, as there are plenty of people who still need its functionality, and although I haven't been active here I have still been somewhat active on the Discord. I may leave it active until Unraid makes their official API available. Plus, it still works on older Unraid versions. The application is open source, so others are welcome to make changes as well!
    1 point
  6. It's not the filesystem that is encrypted; rather, the disk itself is encrypted with LUKS, and any filesystem you like sits inside it. So the choice of filesystem is irrelevant to the encryption. XFS is considered very stable through crashes and power failures. The advantage of BTRFS is that after data loss even file names can be recovered; however, BTRFS is copy-on-write and therefore fragments heavily. For pools, there is no alternative to BTRFS.
    1 point
  7. I was able to solve this using the thread https://forums.unraid.net/topic/102194-priority-of-active-backup-bond/. Linking here so others may benefit.
    1 point
  8. Just wanted to close the loop on this thread in case anyone else comes to it. I had the same question/problem, and did exactly as @jonp suggested and it worked perfectly for me. Thank you.
    1 point
  9. What did I say: one person complains and everyone else is left empty-handed. Incidentally, ASPM has been part of the PCIe standard since 2.0, and devices that don't support it merely have to report that to the BIOS/UEFI. Setting it to L0 by default is fine, but removing the option from the UEFI entirely is a joke. Kontron sells to professionals, not ready-made PCs where the end customer isn't supposed to configure anything.
    1 point
  10. With the new version, the way you add servers has changed. You should read the documentation, it's pretty simple: https://discordgsm.com/guide/commands
You add servers directly in Discord via chat commands.
    1 point
  11. Those settings aren't affected. You shouldn't notice any change at all.
    1 point
  12. I had this a few months ago too. Two or three containers were supposedly "not available". I then switched from the Basic to the Advanced view and did a "force update", after which they were "available" again. What exactly the problem was, I can't say. It hasn't happened again so far...
    1 point
  13. I just wanted to chime in here and thank ich777 for his work on this AMD Vendor reset bug. I recently got a second hand 5600XT for my build (already have a 3080) and got hit with this reset bug. After some research, I installed this plugin and it did not help for the first few tries. What ended up working was not passing the Sound card along with the GPU to my Linux VM and then I could start / stop the VM without getting the reset bug. So thank you for this ich777, I really appreciate it.
    1 point
  14. You can see the specs for my system below, in my siggy. It currently runs 19 dockers and 1 VM (Home Assistant), all on a very modest 1st gen Ryzen 1500X and 16GB DRAM in a mid-tower case. I've done some torture testing, running a parity check while streaming/transcoding a couple of movies and downloading files, and it didn't miss a beat. So unless you have some known scenario where you are really going to ask for more than your system can handle, one system should suffice. As Vr2lo wrote, I would be more concerned that the network would end up being the bottleneck, especially when serving 4K content.
    1 point
  15. Please create your filters and then exit the application once from within Thunderbird ("File -> Quit/Exit"). If you just restart the container from the Docker page within the Unraid WebGUI, the filters won't be saved.
    1 point
  16. Ah, I forgot: the mount_mergerfs or mount_unionfs folder should also have its permissions fixed. I don't know whether the problem lies with DZMM's script. I think the script creates the union/merger folders as root, which causes the problem. So I just kept my union/merger folders and fixed those permissions as well, but maybe letting the script recreate them will fix it. You can test it with a simple mount script to see the difference, of course. It's sometimes difficult for me to advise, because I'm not using DZMM's mount script but my own, so it's easier for me to troubleshoot my own system.
    1 point
  17. This is now fixed. Please pull down the latest image; you can then use spaces in your WEBPAGE_TITLE value again if you so desire.
    1 point
  18. Why? Please try it, I'm really curious; I've never heard of an issue like the one you are experiencing. Also, that is not the right terminology: the older driver is still from the production branch, and the fact that a new driver has been released doesn't automatically mean that 515.86.01 is deprecated... Yes, this is the new driver. I compiled it and uploaded the package to GitHub so that it is available in the plugin for users to download and install on Unraid.
    1 point
  19.
    1 point
  20. That was the solution. Thank you SlrG!
    1 point
  21. No need to trim any btrfs pool since it uses the discard=async mount option
    1 point
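If you want to confirm this on your own pool, the mount options are easy to inspect. A small sketch; /mnt/cache is an assumption, so substitute your pool's mount point.

```shell
# Show the mount options for a btrfs pool and pick out any discard
# setting; on recent Unraid releases this should report discard=async.
findmnt -no OPTIONS /mnt/cache 2>/dev/null | tr ',' '\n' | grep discard \
  || echo "no discard option reported (path may not be mounted on this machine)"
```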
  22. Thanks, worked for me! @advplyr, you might want to update your readme with this conf, it's much more Swag conformant
    1 point
  23. Upgraded to 525.60.11 tonight and was able to successfully transcode all videos. So, I'm not sure if r515 is a bad set of drivers for my card or if there was another underlying issue that prevented the previous driver installations from running. Either way, with successful r525 drivers, I'm not attempting to install r515 and risk it again. Considering this closed. Appreciate your patience @ich777
    1 point
  24. You're making it too complicated. Run the newperms script with the directory you want it to start in as its parameter: /usr/local/sbin/newperms /mnt/user/backups-gdrive
    1 point
  25. Ok, I loaded up a newer USB drive and this one boots, so it has something to do with the old USB drive I had been using. I will now look into how to move my install onto this new drive.
    1 point
  26. Edit your actual flippinturt template (left-click the container and choose Edit), then replace the image as shown below, and the Docker Hub URL as well: https://hub.docker.com/r/devzwf/pihole-dot-doh
    1 point
  27. Here is the solution from my friend @halfelite <-- BIG thanks! The file /etc/default/nfs needs to be modified to add the following line:
RPC_NFSD_OPTS="-u"
This enables NFS over UDP again. To make it survive a reboot, add the following line to the go file:
echo "RPC_NFSD_OPTS=\"-u\"" >> /etc/default/nfs
    1 point
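Collected as a go-file fragment, as a sketch based on the post above. Assumptions: /boot/config/go is Unraid's startup script, and a reboot (or an nfsd restart) is needed before the option takes effect.

```shell
# /boot/config/go -- re-enable NFS over UDP at every boot.
# The -u flag for nfsd comes from the post; afterwards,
# `rpcinfo -p | grep nfs` should list udp entries as well.
echo "RPC_NFSD_OPTS=\"-u\"" >> /etc/default/nfs
```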
  28. In my view, these entry-level models from Schneider have one huge problem (apart from the PLV): the battery is not designed to be replaced by the user. Of course it can still be done if you're not completely clumsy, but I found it remarkable. And as far as I can see, the current BX models likewise make no provision for the user replacing the battery themselves... 😶
    1 point
  29. Please see @Squid's answer over on Reddit: Click
    1 point
  30. Awesome! Glad I found a temporary workaround and a clue to the resolution! @binhex, thanks for everything you do.
    1 point
  31. Good day, the plugin also stopped working for me.
root@Unraid:/usr/local/emhttp/plugins/gpustat# cd /usr/local/emhttp/plugins/gpustat/ && php ./gpustatus.php
Fatal error: Uncaught TypeError: Argument 1 passed to gpustat\lib\Main::getParentCommand() must be of the type int, string given, called in /usr/local/emhttp/plugins/gpustat/lib/Nvidia.php on line 90 and defined in /usr/local/emhttp/plugins/gpustat/lib/Main.php:161
Stack trace:
#0 /usr/local/emhttp/plugins/gpustat/lib/Nvidia.php(90): gpustat\lib\Main->getParentCommand('ffmpeg\x00-hide_ba...')
#1 /usr/local/emhttp/plugins/gpustat/lib/Nvidia.php(355): gpustat\lib\Nvidia->detectApplication(Object(SimpleXMLElement))
#2 /usr/local/emhttp/plugins/gpustat/lib/Nvidia.php(250): gpustat\lib\Nvidia->parseStatistics()
#3 /usr/local/emhttp/plugins/gpustat/gpustatus.php(63): gpustat\lib\Nvidia->getStatistics()
#4 {main}
  thrown in /usr/local/emhttp/plugins/gpustat/lib/Main.php on line 161
While the issue is being resolved, to revert to the previous release just delete the plugin and reinstall manually from:
https://raw.githubusercontent.com/b3rs3rk/gpustat-unraid/6cf1b1e96bc8cd5c1cf7ac8fefea1271d8891e26/gpustat.plg
    1 point
  32. I'm having the N/A issue. Running 'cd /usr/local/emhttp/plugins/gpustat/ && php ./gpustatus.php' gave a fatal error. Running 6.11.5 with the latest Nvidia and GPU Stat plugins; the plugin had been working fine since it came out. While running the attached commands, an Emby transcode was in progress. gpustat.txt gpustatus.txt nvidia-smi.xml
    1 point
  33. I appreciate you looking into this so quickly. Attached file contents, as requested. Going to be busy most of the day, but will check back periodically. gpustat.cfg
    1 point
  34. I am also experiencing this exact same issue. It started sometime on the 6.11.x branch, I think at .3. I just upgraded to 6.11.5, rebooted, did the check for updates, waited, then clicked update all. It does the same thing as above, goes through and successfully updates all of the dockers with updates, then instead of saying "done", wraps around and begins again. It'll keep doing this until I refresh the page. tower-diagnostics-20221124-2232.zip
    1 point
  35. Just had this occur to me as well. Same symptoms as others have reported: endless "Updating all Containers" loop.
* Just upgraded to 6.11.5 this morning, no issues with the upgrade
* My Docker install is set to use directories, not a single docker img
* Went to Docker, clicked [check for updates]; 6 containers with updates
* Clicked [update all]; the update window/log appeared
* Initially it pulled data for the updated containers and successfully restarted them, then went into a loop starting at the first container again: it attempts to pull an update, finds no changes, but still restarts the container and moves on to the next one.
Diagnostics attached; they were created from another tab while the system was still looping on the container updates. I also saw one container that wasn't updated (unfortunately I don't know if it was part of the 6 that had updates earlier, as I didn't catch it fast enough). Clicking "apply update" next to that container updated only that one container and then showed the [done] button. media-1-diagnostics-20221122-1213.zip
    1 point
  36. In the recent update, orphaned images are now automatically removed. Thank you. Is it possible to also update the local sha256 hash in the file /var/lib/docker/unraid-update-status.json? Since the local sha256 differs from the remote sha256 there, dockerman keeps showing an update as ready until I manually update the local hash.
"library/traefik:latest": {
  "local": "sha256:2e53e47b59bc9a799b6c7b0d6d65f529de478094781751f1e061516ce9ca7c68",
  "remote": "sha256:ac1480ce3203541705b01d6dce40ef4bf563cdb29d5b00db88cc396fa9fa9cd5",
  "status": "true"
}
    1 point
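Until that is handled automatically, a hedged workaround sketch: copy the remote digest over the local one in that file yourself. The real file is /var/lib/docker/unraid-update-status.json; the sample below uses shortened placeholder digests, and jq is assumed to be available (e.g. via NerdTools).

```shell
# Demonstrate the patch on a sample copy of unraid-update-status.json:
# set every image's "local" digest equal to its "remote" digest so
# dockerman no longer reports a pending update.
f=$(mktemp)
cat > "$f" <<'EOF'
{"library/traefik:latest": {"local": "sha256:2e53e47b", "remote": "sha256:ac1480ce", "status": "true"}}
EOF
if command -v jq >/dev/null; then
  jq 'map_values(.local = .remote)' "$f" > "$f.patched"
  cat "$f.patched"
else
  echo "jq not installed - shown for illustration"
fi
```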
  37. For the folks that keep asking for a better way to handle package requests (and for the maintainer as well), you should probably handle those as feature request issues submitted via github on the repo.
    1 point
  38. I have a few requests for your consideration. First, I would really like to have the latest fish shell version included. I'm using the version from Masterwishx because they're a hero, but it would be even better if this was integrated.

Secondly, a couple of packages that I like which aren't on the Modern Unix list:
croc: a tool that allows any two computers to simply and securely transfer files and folders.
micro: a terminal-based text editor that aims to be easy to use and intuitive, while also taking advantage of the capabilities of modern terminals. It comes as a single, batteries-included, static binary with no dependencies; you can download and use it right now!
neovim: "Aggressively refactored vim"

Finally, let me link the contents of the (IMO) excellent Modern Unix list, with my personal thoughts in (parens). Items that are already in NerdTools are omitted. Of these, I would call bat, fd, ripgrep, sd, and zoxide my "essential" packages.
bat: A cat clone with syntax highlighting and Git integration.
exa: A modern replacement for ls.
lsd: The next gen file listing command. Backwards compatible with ls. (Personally I prefer lsd to exa, but they both step into the same role)
delta: A viewer for git and diff output.
dust: A more intuitive version of du written in Rust.
duf: A better df alternative.
broot: A new way to see and navigate directory trees.
fd: A simple, fast and user-friendly alternative to find. (This one beats the pants off of find in many cases)
ripgrep: An extremely fast alternative to grep that respects your gitignore.
ag: A code searching tool similar to ack, but faster.
mcfly: Fly through your shell history. Great Scott!
choose: A human-friendly and fast alternative to cut and (sometimes) awk.
jq: sed for JSON data.
sd: An intuitive find & replace CLI (sed alternative). (Holy smackerel this thing slaps)
cheat: Create and view interactive cheatsheets on the command line.
tldr: A community effort to simplify man pages with practical examples. (There are actually a number of implementations of this, tldr++ and tealdeer being some of the best IMO)
bottom: Yet another cross-platform graphical process/system monitor.
glances: Glances, an eye on your system. A top/htop alternative for GNU/Linux, BSD, macOS and Windows.
gtop: System monitoring dashboard for the terminal.
hyperfine: A command-line benchmarking tool.
gping: ping, but with a graph.
procs: A modern replacement for ps written in Rust.
httpie: A modern, user-friendly command-line HTTP client for the API era.
curlie: The power of curl, the ease of use of httpie.
xh: A friendly and fast tool for sending HTTP requests. It reimplements as much as possible of HTTPie's excellent design, with a focus on improved performance.
zoxide: A smarter cd command inspired by z. (I LOVE THIS ONE. Seriously, any z-jump implementation would be incredible, especially since we can put the database on a persistent share. Makes your life so much easier.)
dog: A user-friendly command-line DNS client. dig on steroids.
    1 point
  39. For anyone coming here from Google: you are supposed to install the virtio drivers on this screen. You have to start again to get here (you can hit Escape on boot to get into the BIOS -> Boot Manager -> DVD). Then, in the next installation, pick NetKVM -> w11 -> amd64 -> the first entry.
    1 point
  40. Thanks for letting us know you're still having issues. I am seeing intermittent errors as well but after retrying a few times it always connects. We are going to do another round of server side tweaks before asking for your logs. In the meantime, if your online flash backup is out of date please make a manual backup of your flash drive by going to Main -> Boot Device -> Flash -> Flash Device Settings -> Flash Backup
    1 point
  41. Enable the FTP server, then check the log, that's all I did.
    1 point
  42. #!/bin/bash
/usr/local/emhttp/plugins/dynamix/scripts/ftpusers '1' 'root'
    1 point
  43. I added a script to enable FTP on boot; it seems to be working so far.
1. Enable the FTP Server under Settings -> FTP Server.
2. Check the system log and find the line that enabled the FTP server; for mine it was "/usr/local/emhttp/plugins/dynamix/scripts/ftpusers '1' 'root'".
3. Add a new script using the User Scripts plugin containing that command, and set it to run "At Startup of Array".
    1 point
  44. For anyone who finds this in a search: I used jdupes, included in NerdPack/NerdTools. The command I used was:
jdupes -QrL -X size+=:100000k /mnt/user/Media/
Get rid of -Q (quick mode: it compares file hashes instead of doing a direct binary comparison) if you don't mind it taking longer, or if your data is nuclear launch codes or something.
    1 point
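A hedged companion sketch: running the same command without -L first makes jdupes only list the duplicate sets, so nothing is modified until you are happy with the preview.

```shell
# Preview pass: -Q quick hashing, -r recurse, -X size filter as above,
# but no -L, so duplicates are only listed, not replaced by hard links.
jdupes -Qr -X size+=:100000k /mnt/user/Media/ 2>/dev/null \
  || echo "jdupes not available on this machine - command shown for illustration"
```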
  45. Hi all, a quick update on this: I believe I have solved the issue. What I did:
1. Triple-checked my cache settings (Yes, Prefer, Only, etc.) and the various path mappings for my dockers: OK
2. Stopped the docker service
3. Invoked the Mover manually and let it do its job
4. Restarted docker and its containers
5. Observed: all good!
So I believe I somehow had a misconfiguration at the beginning of my unRAID installation and configuration; then, after transferring a few TB of data through the cache (rookie mistake), the cache drive filled up and some "Prefer cache" shares were offloaded to the hard disks for lack of space. Now all looks good!
    1 point
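The sequence above, sketched as commands. These paths are Unraid defaults but are an assumption here (they can differ between versions), so each step is guarded; the webGUI equivalents (Settings -> Docker, Main -> Move) do the same thing.

```shell
# Stop the Docker service, run the mover manually, then restart Docker.
# rc.docker and mover are Unraid-specific paths; guarded so this is a
# no-op on other systems.
[ -x /etc/rc.d/rc.docker ] && /etc/rc.d/rc.docker stop || echo "skipping: rc.docker not found"
[ -x /usr/local/sbin/mover ] && /usr/local/sbin/mover || echo "skipping: mover not found"
[ -x /etc/rc.d/rc.docker ] && /etc/rc.d/rc.docker start || echo "skipping: rc.docker not found"
```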
  46. @binhex @jonathanm I'm having trouble with the local client. No issues with the docker, that all works fine. I ran the script from here. I can see the binary created on the unraid server in /usr/local/bin and I can execute the bin file, but I am just not sure what parameters need to be included. PS: my client is not discoverable in the docker GUI.
/usr/local/bin# ./urbackupclientctl start -c 172.16.1.66 -f
Error starting backup. No backup server found.
I should also add that I have a dual-homed system:
eth0.10 = private LAN to VPN outbound (default system route)
eth1 = Plex and a few other dockers on the user LAN
I control each docker's access by choosing its network. UrBackup is on br0.10 (172.16.1.0/24). Thanks in advance.
[edit] Fixed. Allowed host access to custom networks in the Docker setup.
[How-to]
1. Install and start the UrBackup client:
TF=$(mktemp) && wget "https://hndl.urbackup.org/Client/2.4.11/UrBackup%20Client%20Linux%202.4.11.sh">
2. cd to the UrBackup client path:
cd /usr/local/bin
3. Add directories to be backed up:
urbackupclientctl add-backupdir -d /mnt/user/Videos
urbackupclientctl add-backupdir -d /mnt/user/Pictures
4. Start the backup:
urbackupclientctl start -f
    1 point
  47. Thanks for the quick response. When I do a manual join, I get access denied:
/ # ./zerotier-cli join xxxxxxxxxxxxxxxx
200 join OK
/ # ./zerotier-cli listnetworks
200 listnetworks <nwid> <name> <mac> <status> <type> <dev> <ZT assigned ips>
200 listnetworks xxxxxxxxxxxxxxxx  62:81:eb:a9:69:bf ACCESS_DENIED PRIVATE zt0 -
    1 point