Leaderboard

Popular Content

Showing content with the highest reputation on 07/02/21 in all areas

  1. @bonienl I think I've pointed this out before, but some fun points:
     - Containers can use SLAAC (as advertised by a router) instead of DHCPv6, since some routers (Mikrotik ones in particular) do not support DHCPv6 completely; they only do SLAAC.
     - This approach is extremely useful when your ISP won't even consider assigning you a static prefix and just delegates an entire /56 to you dynamically; you can configure your router to advertise the prefix dynamically. Docker networking in IPv6 wants a static prefix, or you will be restarting the Docker network whenever the prefix changes.
     - To use SLAAC, the Docker custom network does not need to have IPv6 enabled (nor does the interface, for that matter).
     - To configure a container for SLAAC, you then need to pass this extra parameter (see the sketch after this item): --sysctl net.ipv6.conf.all.disable_ipv6=0
     The container will then have its own IP address based on what the network is advertising (again, SLAAC):
        root@MediaStore:~# docker exec nginx ip addr
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
               valid_lft forever preferred_lft forever
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
            link/ipip 0.0.0.0 brd 0.0.0.0
        3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN qlen 1000
            link/gre 0.0.0.0 brd 0.0.0.0
        4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1476 qdisc noop state DOWN qlen 1000
            link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
        5: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1464 qdisc noop state DOWN qlen 1000
            link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
        6: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
            link/ipip 0.0.0.0 brd 0.0.0.0
        7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
            link/sit 0.0.0.0 brd 0.0.0.0
        75: eth0@if33: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
            link/ether 02:42:c0:a8:5f:0a brd ff:ff:ff:ff:ff:ff
            inet 192.168.95.10/24 brd 192.168.95.255 scope global eth0
               valid_lft forever preferred_lft forever
            inet6 fd6f:3908:ee39:4001:a170:6031:f3df:e8b/64 scope global secondary dynamic
               valid_lft 3296sec preferred_lft 1496sec
            inet6 fd6f:3908:ee39:4001:42:c0ff:fea8:5f0a/64 scope global dynamic
               valid_lft 3296sec preferred_lft 1496sec
            inet6 fe80::42:c0ff:fea8:5f0a/64 scope link
               valid_lft forever preferred_lft forever
        root@MediaStore:~# docker exec nginx ip -6 route
        fd6f:3908:ee39:4001::/64 dev eth0 metric 256 expires 0sec
        fe80::/64 dev eth0 metric 256
        multicast ff00::/8 dev eth0 metric 256
        default via fe80::ce2d:e0ff:fe50:e7b0 dev eth0 metric 1024 expires 0sec
     As I don't have DHCPv6 I can't verify it, but I think this approach will also work for DHCPv6, though only if the container itself is designed to do DHCPv6. Docker will not do DHCPv6 (it doesn't do DHCPv4 either); instead it has an internal IPAM (IP address management) which simply assigns addresses from the configured pool in the Docker network.
    2 points
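     A minimal sketch of such a container start, assuming a custom Docker network named br0 and the stock nginx image (both names are placeholders, not from the post; only the --sysctl flag is the essential part):
        # Re-enable IPv6 inside the container so the kernel autoconfigures
        # an address via SLAAC from the router advertisements on the network.
        docker run -d \
          --name nginx \
          --network br0 \
          --sysctl net.ipv6.conf.all.disable_ipv6=0 \
          nginx

        # Verify the container picked up a SLAAC address:
        docker exec nginx ip -6 addr show dev eth0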
  2. On episode 5 of the Uncast pod, join @jonp for a sneak peek at some of the security changes being made in Unraid 6.10. Also, there is a deep-dive discussion about 10gbps performance over SMB on Unraid. Need a refresher on Unraid security best practices? Be sure to check out the security best practices blog. Curious about SMB Multichannel Support? Check this out!
    1 point
  3. Yes, USB controllers are unpredictable in their behavior. For a while I also had two disks in the array connected over USB. Sometimes I got errors during parity checks, sometimes not. And putting the disks to sleep hardly worked at all. I'm glad everything is connected via SATA now. Better a SATA card than USB (provided a slot is free).
    1 point
  4. With this case I would definitely do the cooling mods documented in these forums and elsewhere on the Internet. Disks can get really hot in this case, as the airflow is very poor without the mods. I have the larger Silverstone CS380 case and I did some mods to improve airflow through the disk cages on that case as well.
    1 point
  5. I personally would not vouch for/select a gaming motherboard for a NAS, but technically this should work, I think. ...There is a difference between the features of "pools" and "the array" in Unraid; you should read the docs/FAQs and get familiar with the differences. The set of disks, built/assembled from disks of different sizes and with up to two parity disks, is the array. Pools, meanwhile, are used as individual (write-)cache or storage disks in different parts of the NAS share filesystem... mostly built from a btrfs raid1 set each, using SSDs/NVMes. As these are faster and less energy hungry, this allows the larger array disks to sleep and save energy (and noise) until needed. It also allows you to store frequently used or changed data in pools (like Docker containers or VM vdisks). There is no RAID in Unraid... and no striping either. Each data disk in the array is an individual disk with a filesystem of its own, and each file resides on a single disk only. There is an overlay filesystem across all these disks in the array, making them look like one large filesystem. Based on a set of rules/configuration, Unraid decides where (on which disk) a file gets stored. The more classic concept of resilvering does not apply in Unraid, especially not when you want to replace an existing data disk with a larger one. The closest thing to resilvering is when a disk dies or goes missing and is rebuilt from parity once restored/replaced. But yes, the use case that you depicted can be done, just not with a resilvering action, as you called it. See: https://wiki.unraid.net/Manual/Storage_Management#Replacing_disk
    1 point
  6. No rush mate. Thank you for looking into this.
    1 point
  7. Thank you. If you need any other data, info, or diagnostics, or if I can help in any way, please feel free to reach out.
    1 point
  8. If it passed the extended SMART test it should also pass a preclear. You can run multiple passes, but since the disk is now working that might just reduce its life; I would use it normally and monitor it.
    1 point
  9. The emulated disk is mounting correctly, which is why maintenance mode wasn't enough. The disks look fine; you can rebuild on top and re-sync parity at the same time. Since we can't see what happened, it might be a good idea to replace/swap cables before doing it, to rule them out if it happens again. https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
    1 point
  10. Did what you said and the filesystem on disk1 is all fixed! Thanks again for the great and speedy help.
    1 point
  11. What issue - the disks being unmountable, or being disabled? These two states require different recovery actions. From the previous posts it sounds as if only the actions for the unmountable state have been done, and clearing the disabled state (which requires rebuilds) is still outstanding.
    1 point
  12. The Argo tunnel is established via a UUID, not an IP or ports, so your 2nd Docker config will have a different UUID; hence it may work.
    1 point
  13. Could be, but the OP mentioned assigning it to a Plex container and I was curious as to why. I have a few of those 1030s lying around, so if there's a good reason to use them, I'm all ears.
    1 point
  14. As a general point of reference for any future issues you may come across, it's always helpful to go to the "Tools" menu, then click on "Diagnostics" and download the zip file. The page there tells you exactly what is being collected, but in general, it includes all the configuration info that the experts here would need to help diagnose setup issues like this. Glad you got this resolved so quickly and welcome to the Unraid family!
    1 point
  15. It did. Now it seems to be working (I never had it working longer than 5 minutes in the past; it's now been up & running for 30 minutes)! Thank you very much 🙂
    1 point
  16. Some NVMe devices have issues with power states on Linux. Try this: on the main GUI page click on the flash device, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (on the top right), and add this to your default boot option after "append initrd=/bzroot":
        nvme_core.default_ps_max_latency_us=0
     e.g.:
        append initrd=/bzroot nvme_core.default_ps_max_latency_us=0
     Reboot and see if it makes a difference. (A sketch of the full stanza follows this item.)
    1 point
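     For reference, a sketch of what the edited default boot stanza in the flash device's syslinux.cfg might look like; the label name and stanza layout are assumptions (they vary by install), and only the appended parameter comes from the post:
        # /boot/syslinux/syslinux.cfg (excerpt)
        label Unraid OS
          menu default
          kernel /bzimage
          append initrd=/bzroot nvme_core.default_ps_max_latency_us=0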
  17. By me replying and you replying, we have it here in the thread. It can be found now if people search. ¯\_(ツ)_/¯
    1 point
  18. Looks like you must have rebooted since the disk became disabled, because I don't see anything in syslog about it. Very likely, and that may be the most frequent cause of threads like this from new users. SMART for that disk looks OK and, as you say, it passes the extended test. You can rebuild the disk onto itself; you should double-check all connections first. https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
    1 point
  19. I think it would be useful if disk rebuilds were logged in the Parity Check history. I think it would be even better if they could be kept in a separate category, but that's probably not possible, so a simple entry there would be fantastic. Also, bug report filed, so I'm marking this one "resolved", since it seems nothing else can be done from this end.
    1 point
  20. It is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the Unraid GUI.
    1 point
  21. Got the NR12000 set up: E3-1230v2, 8GB RAM. With 1x 2.5" HDD (boot, apps) and 12x 4TB Seagate ST4000NC000 drives, I'm drawing 0.85A. I think I'm buying another, as this is CRUSHING the power cost of the MD1000.
    1 point
  22. I was as well; my solution was to map all .../Saved/ folders to separate volumes (ARK1..., ARK2..., etc.). An extra benefit is that each ARK can use a different configuration: I have ARK1-TheIsland dialed way down in difficulty for the new players and then ramp up over Scorched Earth and Aberration... The downside is that each server gets its own configuration, so if you want to make a "simple" change, yay, you get to do it 10 times! Currently the entire ark-se folder in appdata is 27GB for all 10 servers, plus I have the a3c keeping 10 days of backups for all ARKs, which totals just over 9GB, and SteamCMD is about 1GB. So less than 38GB total disk space for my cluster, but they use nearly 80GB of RAM while running <shrug> (a sketch of the volume mapping follows this item)
    1 point
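     A hedged sketch of that per-server Saved-folder mapping; the host paths, container path, and image name below are all assumptions, since the post doesn't show its exact template:
        # One container per map, each with its own Saved volume so worlds,
        # player data, and per-server configs stay separate.
        docker run -d --name ark1 \
          -v /mnt/user/appdata/ark-se/ARK1-Saved:/ark/ShooterGame/Saved \
          YOUR_ARK_IMAGE

        docker run -d --name ark2 \
          -v /mnt/user/appdata/ark-se/ARK2-Saved:/ark/ShooterGame/Saved \
          YOUR_ARK_IMAGE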
  23. Plotting complete for me: one month to complete 1181 plots on existing old hard drives. 90% of these drives have CRC errors and bad sectors on them, so I don't trust them for real data, but for Chia I don't care if one fails. Let's hopefully earn some Chia... Note: Disk 1 is for surveillance only, no Chia. Thanks Machinaris for making it easier!!!!
    1 point
  24. If you've forgotten it, you won't get it back. I'm not entirely sure, but open a console for the container (click on its icon and then select Console) and enter:
        photoprism passwd
     After that you should be able to change your password.
    1 point
  25. Hi everybody. I'm still using this container through the browser extension (it works fine). I only use it within my LAN, or at most while connected through a VPN. I don't really want (or need) to open any port and expose it to the internet. I can't, however, access the admin page for managing users and collections: it requires HTTPS. I've read about using a reverse proxy, but don't know anything about it (since, again, I just use a VPN). Can somebody make a noobie-proof guide for setting it up for local access only?
    1 point
  26. If you had read the link Squid posted, you'd have seen the answer was there all the time; the issue was addressed and solved a couple of weeks ago. https://forums.unraid.net/topic/108643-all-docker-containers-lists-version-“not-available”-under-update/?tab=comments#comment-993588
    1 point
  27. Can anyone that has upgraded to 6.9.2 confirm that upgrading alone is supposed to fix the problem? I just upgraded and the problem persists. EDIT: Fixed. In case anyone else comes along and has the same question / issue that I did, hit the "Check for Update" button on your docker tab. (Not sure if it matters if the dockers are running or not, mine were when I hit it)
    1 point
  28. Thanks for the superb script and tool. It makes me feel much better about my data and setup. I made a minor adjustment to the script to change the $notify type in case infected files are found; the allowable values are normal|warning|alert for a green, amber, or red notification respectively... it might help somebody...
        #!/usr/bin/php
        <?php
        // Default to a green (normal) notification.
        $notify = "normal";
        exec('/usr/local/emhttp/plugins/dynamix/scripts/notify -e "Antivirus Scan Started" -s "Antivirus Scan" -d "Antivirus Scan Started" -i '.escapeshellarg($notify));

        // Start the scan and poll once a minute until the container exits.
        exec('docker start ClamAV');
        for ( ;; ) {
            $status = trim(exec("docker ps | grep ClamAV"));
            if ( ! $status ) break;
            sleep(60);
        }

        // Collect any "... FOUND" lines from the container log.
        exec("docker logs ClamAV 2>/dev/null", $logs);
        $infected = "";
        foreach ($logs as $line) {
            $virus = explode(" ", $line);
            if (trim(end($virus)) == "FOUND") {
                $infected .= "$line\n";
                $notify = "warning";   // switch to an amber notification
            }
        }
        if ( ! $infected ) {
            $infected = "No infections found\n";
        }
        exec('/usr/local/emhttp/plugins/dynamix/scripts/notify -e "Antivirus Scan Finished" -s "Antivirus Scan" -d '.escapeshellarg($infected).' -i '.escapeshellarg($notify));
        ?>
    1 point
  29. I couldn't get CA installed either. My home China Telecom connection wouldn't work, while the China Netcom line at the office had no problem at all; I eventually got it sorted out. First, set up the hosts file:
        echo "# GitHub Start" >> /etc/hosts
        echo "52.74.223.119 github.com" >> /etc/hosts
        echo "192.30.253.119 gist.github.com" >> /etc/hosts
        echo "54.169.195.247 api.github.com" >> /etc/hosts
        echo "185.199.111.153 assets-cdn.github.com" >> /etc/hosts
        echo "199.232.68.133 raw.githubusercontent.com" >> /etc/hosts
        echo "151.101.108.133 user-images.githubusercontent.com" >> /etc/hosts
        echo "151.101.76.133 gist.githubusercontent.com" >> /etc/hosts
        echo "151.101.76.133 cloud.githubusercontent.com" >> /etc/hosts
        echo "151.101.76.133 camo.githubusercontent.com" >> /etc/hosts
        echo "151.101.76.133 avatars0.githubusercontent.com" >> /etc/hosts
        echo "151.101.76.133 avatars1.githubusercontent.com" >> /etc/hosts
        echo "151.101.76.133 avatars2.githubusercontent.com" >> /etc/hosts
        echo "151.101.76.133 avatars3.githubusercontent.com" >> /etc/hosts
        echo "151.101.76.133 avatars4.githubusercontent.com" >> /etc/hosts
        echo "151.101.76.133 avatars5.githubusercontent.com" >> /etc/hosts
        echo "151.101.76.133 avatars6.githubusercontent.com" >> /etc/hosts
        echo "151.101.76.133 avatars7.githubusercontent.com" >> /etc/hosts
        echo "151.101.76.133 avatars8.githubusercontent.com" >> /etc/hosts
     Then replace CA's included paths.php with one that uses a domestic (Chinese) CDN source; remember to chmod +x it. For Docker images, get a free mirror endpoint from Aliyun. Finally, write a user script of your own that does all of the above the first time the array starts. Once that was done, pulling Docker images and checking docker images went smoothly... That said, the Unraid documentation really is poor; I'm still crawling through how to write a plugin. There isn't even a unified set of docs; it feels like a cottage-industry operation. paths.php
    1 point
  30. So I had to bounce around quite a bit here to figure out how to get this to work *EASILY* with a Windows 10 install, so I figured I'd post what worked for me. This is largely based off what zakna did, with a few additional steps and a little clarification.
     1. Follow his guide up to here: that part did not work for me, as it gave me a double // in the command to find the WinPE files. I used this instead and it worked:
        set win_base_url ${live_endpoint}/WinPE
     2. WinPE gave me some headaches until I realized the folder you produce when you create the WinPE files (\media) contains all the actual WinPE files. Duh. So your file structure should be \WinPE\x64\ followed by ALL THE FILES FROM \media\ from when you created WinPE. It should have a bunch of folders (bg-bg, Boot, EFI, etc.) plus bootmgr and bootmgr.efi. That got me actually booting into WinPE. I did not add PowerShell to my WinPE; it was not needed.
     3. Once you are booting WinPE, you can add a few files to your system to automatically start the Windows install. You will need to create two files. First create "winpeshl.ini"; in that file we tell WinPE to run the next file we'll create, so add this to winpeshl.ini:
        [LaunchApps]
        "install.bat"
     Save that file and create a new file called "install.bat". Add this to install.bat:
        wpeinit
        net use \\YOUR_UNRAID_IP\isos
        \\YOUR_UNRAID_IP\isos\win10\setup.exe
     That assumes you keep your OS ISOs in a similar directory; make sure the net use command points at the share containing your extracted Windows 10 directory. Also, for some reason it didn't like spaces in my Windows install directory (it was "Windows 10" before I changed it to win10), just something to be aware of. Now take the two files you created and copy them to your "\appdata\netbootxyz\menus" folder. In that folder is the windows.ipxe file. Edit that file, scroll down to the end, and add two lines right after "kernel http://${boot_domain}/wimboot":
        initrd install.bat http://${boot_domain}/install.bat
        initrd winpeshl.ini http://${boot_domain}/winpeshl.ini
     Save that file, boot from your network, and start the Windows installer. It should take you right into a standard Windows 10 install. (The resulting windows.ipxe section is sketched after this item.)
    1 point
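     Putting step 3 together, the relevant portion of windows.ipxe should then read roughly like this (the kernel line is from the stock menu; the exact surroundings may differ in your version):
        # fetch wimboot, then serve our two autostart files to WinPE
        kernel http://${boot_domain}/wimboot
        initrd install.bat http://${boot_domain}/install.bat
        initrd winpeshl.ini http://${boot_domain}/winpeshl.ini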
  31. Great job, thank you! I just successfully booted the Win10 installation image. A few steps, but it took me a couple of hours to make it work. Here is how I configured it:
     - Installed the netbootxyz docker (used default settings + mapped "/asset" to my image folder "/mnt/cache/appdata/ISO").
     - Configured my pfSense FW/router via its web interface. Went to Services / DHCP Server and specified the following options:
        IP Address of TFTP server: my Unraid server IP
        Enable network booting: yes
        Next Server: my Unraid server IP
        Default BIOS file name: netboot.xyz.kpxe
     - Configured netboot.xyz via its web interface (http://unraid_IP:3000/): in the file "boot.cfg" located under "Menus", updated the variables "live_endpoint" and "win_base_url" as follows:
        set live_endpoint http://unraid_IP:8080
        set win_base_url ${live_endpoint}/WinPE/
     - Downloaded the official Win10 x64 ISO file and extracted it via 7-Zip to my image folder (/mnt/cache/appdata/ISO/Win10_1903_V2/x64/). The files had to be placed in the x64 subfolder. Note: with the normal Win10 ISO everything went fine until the actual install phase, where I got the "A required CD/DVD drive device driver is missing. If you have a driver floppy disk, CD, DVD, or USB flash drive, please insert it now" message. A WinPE image had to be used instead; from WinPE you can then install any Windows ISO you need.
     - Created a WinPE image with integrated PowerShell (PowerShell is needed for mounting ISO images) as per these instructions: https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/winpe-adding-powershell-support-to-windows-pe
     - Uploaded the WinPE files to my image folder (/mnt/cache/appdata/ISO/WinPE/x64/). Again, the files had to be placed in the x64 subfolder.
     - Powered on the client PC and forced it to boot from LAN. It booted to the netboot menu, where I navigated to the Windows distro.
     - Booted WinPE, mounted the required Win10 ISO file and started the installation (the full sequence is sketched after this item):
        wpeinit (initialize the network)
        net use z: \\10.0.0.16\appdata\ISO (map the z: drive to my Samba image folder with the official Win10 image "Win10_1903_V2.iso")
        PowerShell (run PowerShell)
        Mount-DiskImage (mount the official Win10 ISO file located on the z: drive, z:\Win10_1903_V2.iso)
        d:\setup.exe (run the Win10 installation from the mounted ISO)
     Useful links:
        https://ipxe.org/howto/winpe
        https://ipxe.org/wimboot
    1 point
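     The WinPE session from that last step, consolidated into one runnable sequence; the server IP, share path, and ISO name are this poster's (substitute your own), and the drive letter Windows assigns to the mounted image may differ from d::
        wpeinit
        net use z: \\10.0.0.16\appdata\ISO
        PowerShell
        # from here on we are inside PowerShell
        Mount-DiskImage -ImagePath z:\Win10_1903_V2.iso
        d:\setup.exe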
  32. What seemed to fix it for me was clicking on the left of the window at the top, where it showed me the cookies. I removed them all, then restarted it, and it seemed to work again. Hope this helps someone else.
    1 point
  33. AFAIK this doesn't exist. The whole concept of yanking a drive out and the array rebuilding itself seems like a half-step and leaves the array in limbo during rebuild if I understand how it works correctly. Why not have a 'Move data off disk' button, or more automated sounding 'Remove disk from Array', somewhere so that the rebuild takes place before a drive is physically removed? IIRC WHS v1 had something like this.
    1 point
  34. Managed to work this one out. In /boot/config/docker.cfg, I added the line:
        DOCKER_OPTS="--bip=172.31.0.1/16"
     This forces Docker to use the IP address and range specified for the network bridge. (A quick verification sketch follows this item.)
    1 point
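     To confirm Docker actually picked up the new bridge range after a restart, something like this should work (a sketch; the --format template just pulls out the first configured subnet):
        docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'
        ip addr show docker0   # the bridge itself should carry 172.31.0.1/16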
  35. Also, a screen session can cause this; it's easy to forget they might still be hanging around. IF you are sure nothing important has a share mounted (which could be anything, e.g. a stuck session that is no longer in use), I just ssh to my unRAID server and execute:
        ps -ef | grep /mnt/user
     I then take the PID and perform a graceful kill (I've only had to do kill -9 once):
        kill <PID>
     As soon as all mounts in the process go away, your array stop will succeed. (A small refinement is sketched after this item.)
    1 point
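     A small variation on the sequence above; the bracket trick is an addition of mine, not from the post, and keeps grep from matching its own command line:
        # List processes holding anything under /mnt/user.
        ps -ef | grep '[/]mnt/user'

        # Graceful kill first; escalate to -9 only if the process ignores it.
        kill <PID>
        # kill -9 <PID>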