Leaderboard

Popular Content

Showing content with the highest reputation on 01/31/20 in all areas

  1. But I currently have no use for multiple array support. I'd much rather have HA and replication as core features. But above that, I'd like GPU drivers baked in and officially supported. May I suggest cheering in the multiple array thread instead of jeering in the GPU one?
    2 points
  2. PSA: The soon-to-be-released update to CA will be the last version of CA to fully support unRaid v6.4.0 to 6.6.7. In the course of making a minor display change, I had a feeling the change might not be compatible with the older versions of unRaid, so I installed 6.5.3 to test, and there were indeed problems implementing those changes on the older versions of unRaid's GUI. If / when another GUI change happens, I will at that point make CA require unRaid 6.7.0+ and remove all the old compatibility code. This change will not, however, impact existing installations of CA on those legacy versions of unRaid. Those installations will continue to operate normally, but will cease to receive any further bug fixes or feature improvements.
    2 points
  3. Be sure you do this: https://forums.unraid.net/topic/53172-windows-issues-with-unraid/page/4/?tab=comments#comment-758464
    1 point
  4. There is a hidden character that gets copied over from the forums; it is a known issue. Get Notepad++ and enable 'Show All Symbols' to see it.
    1 point
  5. I think the better feature request would be DKMS support for out-of-tree (OOT) drivers. That would allow LS.IO (or anyone, really) to build DKMS packages for basically any hardware without requiring a full rebuild of the bzimage. (A sketch of what such a package might look like follows below.)
    1 point
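  To illustrate the DKMS idea in item 5 above: an out-of-tree module ships a small dkms.conf, and DKMS rebuilds and installs it for each new kernel. A minimal sketch, with "mydriver" as a hypothetical package name (Unraid would additionally need kernel headers and the dkms tool for this to actually work):
    # /usr/src/mydriver-1.0/dkms.conf
    PACKAGE_NAME="mydriver"
    PACKAGE_VERSION="1.0"
    BUILT_MODULE_NAME[0]="mydriver"
    DEST_MODULE_LOCATION[0]="/kernel/drivers/misc"
    AUTOINSTALL="yes"

    # register, build, and install the module for the running kernel
    dkms add -m mydriver -v 1.0
    dkms build -m mydriver -v 1.0
    dkms install -m mydriver -v 1.0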
  6. That's relatively normal and usually no reason for concern; that's why it appears as "info", not "warning" or "error". It usually goes away by itself, and I doubt it's the reason for the reported issues.
    1 point
  7. Gift sent to your steam ID. Enjoy Days of War.
    1 point
  8. Yep, this container should be no problem after a look at the Admin Page. Yep exactly. I really appreciate that!
    1 point
  9. Had exactly the same experience, and actually messed it up more than you did, of which I'll spare you the details; let me just say I'm now on my third email account at Plex 🤨 which is kinda weird since I've been a Plex user since about day 1, when they forked from XBMC (now Kodi). Not sure if that says more about Plex and how it works ... or me 😁 The only difference I see between the Binhex/Linuxserver docker templates is indeed that you have to fill in a claim code after retrieving it through https://www.plex.tv/claim/ after which you go through the Plex setup wizard to set a name for your server, add libraries and so on. For the binhex/linuxserver dockers you never arrive at that; you just land in the Plex web app without a way to access the server.
    1 point
  10. Yeah, let's just get rid of docker, VMs, the whole plugin system ... all that stuff that's not "genuine NAS", whatever that means.
    1 point
  11. Because, for reasons known only to themselves, manufacturers sometimes put fewer sectors on an external drive. Since you've got no parity currently, your easy solution is to copy the files from one of the data drives to the shucked drive, and then make the original data drive the parity.
    1 point
  12. That could be said about any feature request. You may not find it useful, but many others would. For this specific feature, I think some of you may be conflating the burden for Linuxserver.io to do each Unraid Nvidia release with what it would be for Limetech. The burden on Limetech would be many orders of magnitude less, which is why this feature would be beneficial.
    1 point
  13. If you just put your USB stick with the unraid installation in your computer and access the config file at /boot/config/wg0.cfg, you can remove the network from the peer's allowed IPs again to see if that fixes it (see the example below). I'm not sure why it'd cause your admin panel to become unreachable, but either way, this way you can still access all of unraid's config files, as well as some log files.
    1 point
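  For the edit suggested in item 13 above, a minimal sketch in standard WireGuard syntax; the addresses are placeholders, and the exact layout of Unraid's on-flash file may differ:
    [Peer]
    PublicKey = <peer public key>
    # Before (the peer's tunnel address plus the LAN subnet that was added):
    # AllowedIPs = 10.253.0.2/32, 192.168.1.0/24
    # After removing the network again:
    AllowedIPs = 10.253.0.2/32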
  14. Please add GPU drivers to unraid builds. Doesn't Limetech already do this for some NICs, SAS controllers, etc? If so, I see no reason why GPU drivers (specifically for Nvidia in my case) shouldn't be added as well. There's no need to keep up to date with the latest driver, unless there are serious bugs/exploits that need to be patched, just like with any other driver Unraid uses. I really don't get all the "sky is falling" negativity surrounding this request. The work involved for Limetech to include these drivers is far less than for the Linuxserver.io team (and a huge thanks to each of them for that effort!) to add them after the fact.
    1 point
  15. Here is what I came up with.
    1 point
  16. That's one of the reasons USB enclosures are not recommended: some use a smaller partition layout, and that won't be accepted by Unraid. Assuming single parity, you should be able to rebuild one disk at a time. You can test first and make sure the emulated disk is mounting correctly; if yes, rebuild on top with the disk connected via SATA.
    1 point
  17. Thanks for the heads up, didn't think of testing that. Yeah, I use letsencrypt and also cloudflare for dns verification. Unfortunately I'm getting the same error, and on top of that a 524 timeout error. Will try: 1. Adding the same variables to letsencrypt's nginx config (same path as nextcloud's). 2. Uploading without having cloudflare in the way. I'll try to test it soon. I hope at least one of us can make this work.
    1 point
  18. Remotely connected using WireGuard from my phone, upgraded 6.8.1 to 6.8.2 without error. Thank you, unraid team!
    1 point
  19. I can note that I pre-cleared a 10TB WD RED Pro drive with preclear.disk-2020.01.11a (version 1.0.6) on 6.7.2 without any problems 2 weeks ago.
    Jan 11 17:14:46 husky preclear_disk_1EH4Z5MN[8086]: Command: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdn
    Jan 11 17:14:46 husky preclear_disk_1EH4Z5MN[8086]: Preclear Disk Version: 1.0.6
    ...
    Jan 12 21:32:34 husky preclear_disk_1EH4Z5MN[8086]: Zeroing: progress - 100% zeroed
    ...
    Jan 13 11:44:11 husky preclear_disk_1EH4Z5MN[8086]: Post-Read: progress - 100% verified
    Jan 13 11:44:12 husky preclear_disk_1EH4Z5MN[8086]: Post-Read: dd - read 10000831348736 of 10000831348736.
    Jan 13 11:44:12 husky preclear_disk_1EH4Z5MN[8086]: Post-Read: dd exit code - 0
    Jan 13 11:44:14 husky preclear_disk_1EH4Z5MN[8086]: S.M.A.R.T.: 5 Reallocated_Sector_Ct 0
    Jan 13 11:44:14 husky preclear_disk_1EH4Z5MN[8086]: S.M.A.R.T.: 9 Power_On_Hours 42
    Jan 13 11:44:14 husky preclear_disk_1EH4Z5MN[8086]: S.M.A.R.T.: 194 Temperature_Celsius 35
    Jan 13 11:44:14 husky preclear_disk_1EH4Z5MN[8086]: S.M.A.R.T.: 196 Reallocated_Event_Count 0
    Jan 13 11:44:14 husky preclear_disk_1EH4Z5MN[8086]: S.M.A.R.T.: 197 Current_Pending_Sector 0
    Jan 13 11:44:14 husky preclear_disk_1EH4Z5MN[8086]: S.M.A.R.T.: 198 Offline_Uncorrectable 0
    Jan 13 11:44:14 husky preclear_disk_1EH4Z5MN[8086]: S.M.A.R.T.: 199 UDMA_CRC_Error_Count 0
    1 point
  20. This question should be posed to AMD.
    1 point
  21. Hi there, I was having the same problem, thanks for the tips. A little step-by-step guide for noobs (like me):
    Open a console (in the docker container)
    Turn on maintenance mode: sudo -u abc php7 /config/www/nextcloud/occ maintenance:mode --on
    Go to: cd /config/www/nextcloud/
    Run the command: sudo -u abc php7 occ db:convert-filecache-bigint
    Turn off maintenance mode: sudo -u abc php7 /config/www/nextcloud/occ maintenance:mode --off
    Restart the container
    Cheers
    1 point
  22. And if / when unRaid adds built-in GPU drivers for nVidia / AMD, we'll see updates to the OS every other day, as people will begin screaming, "Why doesn't unRaid have the latest version that was released an hour ago?"
    1 point
  23. Check the contents of the ovpn file and make sure it has a 'remote' line defined (example below).
    1 point
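  For item 23 above: a 'remote' directive in an .ovpn profile names the server endpoint, an optional port, and an optional protocol. The hostname here is a placeholder:
    # somewhere in the .ovpn file
    remote vpn.example.com 1194 udp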
  24. I've been doing this for a long time now via the command line with my important VMs. First, my VM vdisks are in the domains share, where I have created the individual VM directory as a btrfs subvolume instead of a normal directory, i.e.:
    btrfs subv create /mnt/cache/domains/my-vm
    results in:
    /mnt/cache/domains/my-vm <--- a btrfs subvolume
    Then let vm-manager create vdisks in here normally and create your VM. Next, when I want to take a snapshot I hibernate the VM (win10) or shut it down. Then from the host:
    btrfs subv snapshot -r /mnt/cache/domains/my-vm /mnt/cache/domains/my-vm/backup
    Of course you can name the snapshot anything, perhaps include a timestamp. In my case, after taking this initial backup snapshot, a subsequent backup will do something like this:
    btrfs subv snapshot -r /mnt/cache/domains/my-vm /mnt/cache/domains/my-vm/backup-new
    Then I send the block differences to a backup directory on /mnt/disk1:
    btrfs send -p /mnt/cache/domains/my-vm/backup /mnt/cache/domains/my-vm/backup-new | pv | btrfs receive /mnt/disk1/Backup/domains/my-vm
    and then delete backup and rename backup-new to backup. What we want to do is add an option in VM manager that says, "Create snapshot upon shut-down or hibernation" and then add a nice GUI to handle snapshots and backups. I have found btrfs send/recv somewhat fragile, which is one reason we haven't tackled this yet. Maybe there's some interest in a blog post describing the process along with the script I use? (A sketch of how such a script might look follows below.)
    1 point
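  A minimal sketch of the incremental flow described in item 24 above, using the same example paths and assuming the initial "backup" snapshot was already sent to the destination in full; shutting down or hibernating the VM first is left out:
    #!/bin/bash
    # Incremental btrfs backup of a VM subvolume (sketch; adjust paths)
    SRC=/mnt/cache/domains/my-vm
    DST=/mnt/disk1/Backup/domains/my-vm

    # Take a new read-only snapshot next to the previous one
    btrfs subvolume snapshot -r "$SRC" "$SRC/backup-new"

    # Send only the block differences since the last backup
    # (put "| pv |" between send and receive for a progress display)
    btrfs send -p "$SRC/backup" "$SRC/backup-new" | btrfs receive "$DST"

    # Rotate on both sides: drop the old snapshot, promote the new one
    btrfs subvolume delete "$SRC/backup"
    mv "$SRC/backup-new" "$SRC/backup"
    btrfs subvolume delete "$DST/backup"
    mv "$DST/backup-new" "$DST/backup"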
  25. Just reboot the docker if you notice it sucking. That sorts its life choices out.
    1 point
  26. Yes, something like this: https://github.com/PizzaWaffles/Automatic-Youtube-Downloader which I found here: "Anyone interested in making a Docker?" That's something I would definitely consider donating to!
    1 point
  27. I've tried multiple changes to try and work out what is causing this, and as of yet I still haven't found the root cause, so at the moment this is still an issue.
    1 point
  28. To your earlier question (which you may have edited out), I use the plugin "CA User Scripts," which you can install from the Community Applications section. Once you have it installed, you just make another script called "NZBGet Restart" and have it say:
    #!/bin/bash
    docker restart binhex-nzbget
    And then you can set it to whatever frequency you want (see the scheduling example below).
    1 point
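  The User Scripts plugin's custom schedule accepts standard cron syntax, so the frequency for item 28 above can be expressed as a cron expression; the time chosen here is an arbitrary example:
    # custom schedule: restart NZBGet every night at 4 AM
    0 4 * * *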
  29. Good evening! I'd like to not only request this as a feature, but explain how one could implement it on their own! Basically, the issue is that all user customization done via configuration files located in "/root" is lost on each boot. I know this is intentional, but there's an "easy" way to implement persistence with clever failsafe mechanics. I also know that one can work around this by adding a couple of lines to /boot/config/go and storing the configuration files on the flash drive. This isn't as desirable, as Fat32 doesn't properly handle Linux permissions, and it can require other manual edits to the go file down the road. Enter OverlayFS (a feature built into the Linux kernel for eons).
    First we create the container for our data. I use the truncate command as it is safe and "quick" (note: we are writing over USB, so this step will take time no matter which option we use):
    truncate -s 4000M /boot/config/root.persist
    I chose to go with 4000M as it is close to the Fat32 ceiling of "4gb" (note: if you specify 4G you will receive an error).
    Next we format that image and set up some important directories within it:
    mkfs.ext4 /boot/config/root.persist
    mkdir /tmp/overlay
    mount /boot/config/root.persist /tmp/overlay
    mkdir /tmp/overlay/upper
    mkdir /tmp/overlay/workdir
    Finally, the special sauce that overlays the image we created on top of the normal unraid /root/ directory:
    mount -t overlay -o lowerdir=/root,upperdir=/tmp/overlay/upper,workdir=/tmp/overlay/workdir none /root
    Anything written to /root/ after this command is run will actually be written to /tmp/overlay/upper, and permanently stored there. The lowerdir will never be modified in this situation, as it isn't addressable since we are placing the overlay on top of lowerdir.
    And to make it persistent, we add this block to /boot/config/go:
    if [ -f /boot/config/root.persist ]; then
      mkdir /tmp/overlay
      mount /boot/config/root.persist /tmp/overlay
      mount -t overlay -o lowerdir=/root,upperdir=/tmp/overlay/upper,workdir=/tmp/overlay/workdir none /root
    fi
    A couple of notes: The if statement above makes sure that we don't try doing anything if there isn't a persistent image for the root folder. It's kind of redundant (the first and second mount commands will just fail and regurgitate errors if the file isn't there), but I prefer a clean console log. If the image becomes corrupt or unusable, you can safely discard it this way: safe mode shouldn't run /boot/config/go, so if anything goes wrong, safe mode won't apply any of the changes contained in the image. Meaning you can boot into safe mode, manually mount the image, undo whatever you did in upper, and be back up and running. I'm not sure what you could do to cause those sorts of things.
    This also allows for:
    Persistent bash history (forget that command you ran before you rebooted? No more.)
    Persistent config file storage (tmux preferences, terminal colors, and htop profiles? Oh my.)
    Persistent KNOWN_HOSTS and AUTHORIZED_KEYS for ssh.
    Anything you would normally want a home directory to be useful for in LinuxLand.
    (A quick check that the overlay is active follows below.)
    1 point
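  A quick way to verify the overlay from item 29 above is active after boot; the test file name is just an example:
    # /root should now be an overlay filesystem
    findmnt -n -o FSTYPE /root        # prints: overlay
    # and new files should land in the image's upper directory
    touch /root/persistence-test
    ls /tmp/overlay/upper/persistence-test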
  30. I also wanted to get the ZFS Event Daemon (zed) working on my unRAID setup. Most of the files needed are already built into steini84's plugin (thanks!) but zed.rc needs to be copied into the file system at each boot. I created a folder /boot/config/zfs-zed/ and placed zed.rc in there - you can get the default from /usr/etc/zfs/zed.d/zed.rc. Add the following lines to your go file:
    #Start ZFS Event Daemon
    cp /boot/config/zfs-zed/zed.rc /usr/etc/zfs/zed.d/
    /usr/sbin/zed &
    To use the built-in notifications in unRAID, and to avoid having to set up a mail server or relay, set your zed.rc with the following options:
    ZED_EMAIL_PROG="/usr/local/emhttp/webGui/scripts/notify"
    ZED_EMAIL_OPTS="-i warning -s '@SUBJECT@' -d '@SUBJECT@' -m \"\`cat $pathname\`\""
    $pathname contains the verbose output from ZED, which will be sent in the body of an email alert from unRAID. I have this set to an alert level of 'warning' as I have unRAID configured to always email me for warnings. You'll also want to adjust your email address, verbosity level, and set up a debug log if desired. Either place the files and manually start zed, or reboot the system for this to take effect.
    Pro tip: if you want to test the notifications, zed will alert on a scrub finish event. If you're like me and only have large pools that take hours/days to scrub, you can set up a quick test pool like this:
    truncate -s 64M /root/test.img
    zpool create test /root/test.img
    zpool scrub test
    When you've finished testing, just destroy the pool (see below).
    1 point
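  Tearing down the test pool from item 30 above is two commands, assuming the same pool name and image path as in the post:
    # destroy the throwaway pool and remove its backing file
    zpool destroy test
    rm /root/test.img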