All Activity


  1. Past hour
  2. Hi, I've just upgraded my motherboard from an ASRock Rack E3C246D4U to an ASRock Rack W680D4U-2L2T/G5. In doing so I have also installed a new M.2 NVMe drive. All has been (reasonably) straightforward; however, despite being able to see the M.2 drive in the BIOS, I am unable to see this drive once logged in. I am not sure if this is a VFIO issue or something else, but I would appreciate some help, as all of my googling has amounted to nothing. I have posted my diagnostics file below. Thanks in advance, Micci sosie-diagnostics-20240425-1432.zip
  3. Just a quick reply that setting Settings -> Docker -> "Host access to custom networks" to "Enabled" worked for me too. Now I just have to create entries for the other 20+ Docker containers, ugh. Note, I did NOT have to change custom bridge or IP settings. NPM is running in the 'custom: br0' network type and it can see all the other Docker containers in the 'bridge' or 'medianet' networks just fine.
  4. With my mere 180 TB I'd better go hide 🤪 BIOS settings would be fantastic... A big thank you
  5. You should check whether there is a specific driver for this model. It looks like Unraid sees this card as a legacy 1 Gb one. I just had a look in parallel; check out this link: https://forums.unraid.net/bug-reports/stable-releases/610-rc3-6115-realtek-rtl8156-usb-25gb-nic-not-working-r1919/page/2/ It seems to match your problem.
  6. I'm in the process of upgrading my Unraid server by replacing some drives in the array with bigger ones. Here's what I've done so far:
     1. I precleared all new drives before doing anything. Meanwhile, I ran a parity check on the existing array just in case, which went fine: Parity-Check 2024-04-14, 04:33:07 18 TB 1 day, 10 hr, 25 min, 34 sec 145.2 MB/s OK 0
     2. Since I had an 18TB parity drive and the new drives were all 20TB, I removed the 18TB parity drive and installed a 20TB drive. Parity-Sync went fine: Parity-Sync 2024-04-16, 05:27:27 20 TB 1 day, 15 hr, 24 min, 56 sec 141.0 MB/s OK 0
     3. I removed one of my drives from the array (12TB) and installed a 20TB drive. Data-Rebuild went fine: Data-Rebuild 2024-04-18, 08:02:15 20 TB 1 day, 14 hr, 40 min, 55 sec 143.6 MB/s OK 0
     4. Next, I removed another of my drives from the array (12TB) and installed a 20TB drive. Again, Data-Rebuild went (apparently) fine: Data-Rebuild 2024-04-24, 12:09:44 20 TB 1 day, 11 hr, 7 min 158.2 MB/s OK 0
     However, when I restarted the array, the last drive that was rebuilt couldn't be mounted (the previous one, from step 3, was fine), and the logs showed an XFS error ("Corruption warning: Metadata has LSN ahead of current LSN") and instructed me to "Please unmount and run xfs_repair". I searched the forum for information on this error and ran xfs_repair as suggested in many discussions. This seemed to fix the issue, and starting the array would now mount all drives.
     I'm now running a parity check, and early in the process it showed 31 sync errors corrected. It stayed like that for a while, but now (~29%) it has jumped to 521,770 sync errors corrected. I had never had a single parity check error before (and the Unraid server has been running fine since September 2022).
     Should I be worried? Should I have done anything differently? Should I be doing something else now? I'm guessing I should let the parity check finish and then run it again (in non-correcting mode this time). If it doesn't show any errors I should be good, right?
     To be honest, I'm not sure why the second drive rebuild "failed", leaving the drive in an unmountable state (though xfs_repair seemed to fix it easily and didn't find any major problems, AFAIK), and I also don't know why the parity check should fail now (unless xfs_repair changed something in the drive data that requires the parity to be adjusted). Any hints/suggestions are appreciated.
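For reference, the repair sequence described above usually boils down to the commands below. This is a sketch, not an official procedure: the device name is a placeholder (replace the X with the number of the unmountable disk, as shown in the Unraid GUI), and the array should be started in maintenance mode first.

```shell
# Placeholder device -- replace X with the disk number of the unmountable
# drive before running. The array must be started in maintenance mode.
DEV=/dev/mdXp1
if [ -b "$DEV" ]; then
    xfs_repair -n "$DEV"   # dry run: reports problems, writes nothing
    xfs_repair "$DEV"      # actual repair; use -L only if a dirty log blocks it
else
    echo "$DEV is not a block device; set DEV to the real md device first"
fi
```

Running the `-n` dry run first lets you see what xfs_repair would change before committing to it.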
  7. With every rsync I can cause these. But the panics and crashes were hardware. I have a new CPU, mobo, and RAM, and no more crashes. The rsync errors I have sort of learned to live with now.
  8. I don't use that container, so I can't help you there. But I would contact the developer directly if something is missing in the Docker container. Google Translate, ChatGPT, or Google Gemini can help.
  9. Bleeding-edge hardware on Linux is never the best choice in general, at least right after release. I think so; maybe @SpencerJ can put you on the list of testers. It would also be interesting to me whether everything works properly. Also check back from time to time in the Announcements subforum, or subscribe to the newsletter.
  10. Today
  11. Greetings, wondering if you found a solution to the error:0 in libc-2.37.so. My system seems to become very unresponsive, and I am seeing these in my logs. Cheers!
  12. Well, that'll teach me to buy the "latest and greatest" hardware. Can I be on the list to beta test, or get an alert when the RC is available?
  13. Sharing my experience with the manual installation process from a Linux distro: Ubuntu 22.04.
     Initially I had formatted my USB drive from "Files" (nautilus --browser), aka the file explorer GUI application, by right-clicking the USB and choosing format "For all systems and devices (FAT)". I double-checked the format from the "Disks" utility and found W95 FAT32 (LBA) (0x0c); I changed this to W95 FAT32 (0x0b). (As of this moment, I'm not sure if this matters.)
     After extracting the download archive for 6.12.10 onto the USB, I copied the `make_bootable_linux` file to my local hard drive and continued with the installation instructions. Here's what I found:
     - The "unmount (not eject) USB drive" step is unnecessary. The USB is unmounted by the script `make_bootable_linux.sh`, which is called from `make_bootable_linux`.
     - The `mtools` package is required for manipulating MSDOS files (not included with Ubuntu 22.04: `sudo apt install mtools`).
     - Certain files located in the download archive need execute permission, and the current `make_bootable_linux` script does not handle this. I commented out lines 86, 87, 98, and 99 to avoid overwriting and cleanup of the /tmp/UNRAID/syslinux directory, then added execute permissions to `/tmp/UNRAID/syslinux/make_bootable_linux.sh` and `/tmp/UNRAID/syslinux/syslinux_linux`. This could be handled upstream, so these files already have execute permissions in the downloaded archive, or via modifications to the `make_bootable_linux` script.
     - Also of note, the current script(s) do(es) not gracefully handle errors, and report to the user `INFO: the Unraid OS USB Flash drive is now bootable and may be ejected.` even if the above steps are not taken.
     After performing these steps, `sudo bash ./make_bootable_linux` from the directory I copied the file to works as intended. I was able to successfully make the USB bootable and start up Unraid.
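The workaround above can be condensed into a few commands. This is a sketch under the assumptions in the post: the archive is extracted to /tmp/UNRAID (the path the stock `make_bootable_linux` script uses) and you are in the directory holding your copy of `make_bootable_linux`. Adjust paths to your setup.

```shell
# Manual workaround sketch for the make_bootable_linux permission issue.
# Assumes the archive contents live under /tmp/UNRAID as in the stock script.
DIR=/tmp/UNRAID/syslinux
if [ -d "$DIR" ]; then
    sudo apt install -y mtools                                    # MSDOS file tooling the script needs
    chmod +x "$DIR/make_bootable_linux.sh" "$DIR/syslinux_linux"  # grant missing execute bits
    sudo bash ./make_bootable_linux                               # re-run the installer
else
    echo "$DIR not found; extract the Unraid archive first"
fi
```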
  14. Good morning, I currently have 3 Standard licenses attached to my account. However, I only use 2 of the 3. I would like to know what to do if I want to sell my third license to a third party. Basically, how do I detach it from my account? Thank you in advance for the information. Have a nice day! Cognotte
  15. Does no one use IPv6 inside Unraid's Docker? Could someone do a ping test from inside a container to a public server and observe whether any "address unreachable" errors occur? I still need to retest with a native Portainer install. Somehow I have a feeling that this is an Unraid thing...
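For anyone willing to run the test being asked for, something like the following would do it. The container name is a placeholder (substitute one of your own), and the target is Cloudflare's public IPv6 resolver; whether `ping -6` is available depends on the image's ping implementation.

```shell
# IPv6 reachability test from inside a container (container name is hypothetical).
CONTAINER=npm
if command -v docker >/dev/null 2>&1 \
   && docker ps --format '{{.Names}}' 2>/dev/null | grep -qx "$CONTAINER"; then
    # 10 pings to Cloudflare's public IPv6 DNS; watch for "address unreachable"
    docker exec "$CONTAINER" ping -6 -c 10 2606:4700:4700::1111
else
    echo "container $CONTAINER not running; set CONTAINER to one of yours"
fi
```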
  16. Can you please post your diagnostics? I'm currently looking into a way to ignore the chip.
  17. What are you running on the old one? (VMs/Docker/shares/etc.) Then make the fast NVMe your cache pool. If your Docker containers run on it, make sure you back them up regularly. How do you want to go about it? Put the slow SSD into the array. Sensible vs. pointless is always a matter of perspective 😊 What exactly? (Moving Docker / moving data / moving VMs) The current SSD goes into the new server anyway, so that part is easy. Once you have installed the old disk in the new PC and boot the USB stick on the new system, the box should run right away. Then stop the array and add a pool with the NVMe. Afterwards, under Shares, set appdata to cache only (careful: don't forget to set up app backups, since you are cache only). You can then put your backups on a share (e.g. Backup). (The 50/50 256GB limit for backups makes no sense to me, and I wouldn't even know how to implement it.)
  18. It has its advantages too. But you pay for them with these short spikes. The software is running, and that requires electrical energy.
  19. You can do it either way, and it doesn't matter much. I'm more comfortable pulling from the source machine to the destination, since it feels a bit safer to me: you can write a script that runs on server start, starts the backup, and shuts the machine down automatically if needed, if that machine is only powered on for backups. Please don't use the root directory; you always have to use the id_rsa from the /luckybackup/.ssh directory, not root, because otherwise it will be wiped on a container update. Please post a screenshot of your Docker template. Then you selected the wrong keys. BTW, you don't have to create new keys; please use the existing ones located in /luckybackup/.ssh, since they are individually created for each installation, and as said above, please don't use the keys from /root/.ssh.
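A pull-style run using the container's own key, as the post insists on, might look like the sketch below. The host name and share paths are placeholders; only the key path (/luckybackup/.ssh/id_rsa) comes from the post, and this would be run from inside the luckyBackup container.

```shell
# Pull-style backup sketch using the container's persistent SSH key.
# Host and paths are hypothetical; --dry-run makes this a no-op rehearsal.
KEY=/luckybackup/.ssh/id_rsa
if [ -r "$KEY" ]; then
    rsync -av --dry-run -e "ssh -i $KEY" \
        root@sourceserver:/mnt/user/appdata/ /mnt/user/backups/appdata/
else
    echo "$KEY not found; run this inside the luckyBackup container"
fi
```

Because the key lives under /luckybackup (a mapped volume) rather than /root, it survives container updates, which is exactly why the post warns against /root/.ssh.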
  20. It does not currently work on 6.12.10 because the kernel is simply too old. You have to wait for a 6.13.x RC release and try there; if it is still not working then, please report it here.
  21. I don't know how they are doing it, but it can't be a real WoL packet, because that simply won't work. No, but if you choose the power outlet method and bind the power outlets to your Home Assistant instance, then it would of course be possible. Then that is your best bet.
  22. Then take it down, restart, and check again.
  23. Same. The first time, it happens below byte 22000000000000. So if I put the disk in the array as parity, could this be OK? Maybe it's related only to the plugin's zeroing process? Alternatively, if the disk is no good, I'll still get an error, and then I guess I'll replace it.
     Apr 25 01:57:41 preclear_disk_ZGG46T0A_10918: Zeroing: dd output: 21998195441664 bytes (22 TB, 20 TiB) copied, 98221.6 s, 224 MB/s
     Apr 25 01:57:41 preclear_disk_ZGG46T0A_10918: Zeroing: dd output: 10490310+0 records in
     Apr 25 01:57:41 preclear_disk_ZGG46T0A_10918: Zeroing: dd output: 10490310+0 records out
     Apr 25 01:57:41 preclear_disk_ZGG46T0A_10918: Zeroing: dd output: 21999774597120 bytes (22 TB, 20 TiB) copied, 98233.6 s, 224 MB/s
     Apr 25 01:57:41 preclear_disk_ZGG46T0A_10918: dd process hung at 21999776694272, killing ...
     Apr 25 01:57:41 preclear_disk_ZGG46T0A_10918: Zeroing: zeroing the disk started 2 of 5 retries...
     Apr 25 01:57:41 preclear_disk_ZGG46T0A_10918: Continuing disk write on byte 21999774597120
     Apr 25 02:13:39 preclear_disk_ZGG46T0A_10918: Zeroing: dd output:
     Apr 25 02:13:39 preclear_disk_ZGG46T0A_10918: dd process hung at 0, killing ...
     Apr 25 02:13:39 preclear_disk_ZGG46T0A_10918: Zeroing: zeroing the disk started 3 of 5 retries...
     Apr 25 02:13:39 preclear_disk_ZGG46T0A_10918: Zeroing: emptying the MBR.
     Apr 25 08:01:50 preclear_disk_ZGG46T0A_10918: Zeroing: progress - 25% zeroed @ 270 MB/s
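One way to rule the plugin in or out is to repeat the zeroing pass by hand; the log above shows the plugin driving dd itself. The sketch below is roughly that manual pass, with a placeholder device name. It destroys all data on the target disk, so the X must be replaced deliberately.

```shell
# Manual zeroing pass, roughly what the preclear plugin's zeroing step runs.
# DESTROYS everything on the target disk -- replace X deliberately first.
DEV=/dev/sdX
if [ -b "$DEV" ]; then
    dd if=/dev/zero of="$DEV" bs=2M conv=fdatasync status=progress
else
    echo "$DEV is not a block device; set DEV to the disk you intend to wipe"
fi
```

If a manual pass completes without dd hanging at the same byte offset, that would point at the plugin's retry handling rather than the disk.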
  24. I also don't see why the devs don't just add an option to have both ipvlan and macvlan, and veth/tap-tun them to eth0 or an interface of your choosing in bridge... I have a few ideas as answers, but the devs would be better able to answer, as it is catered to end users for point, click, done. Most of this goes back to the original decision to move off macvlan due to NIC promiscuous mode, which not all NICs supported. Then came the GitHub macvlan bug report, and later the attempt to remove macvlan and move to ipvlan... The 6.9 default didn't have an ipvlan Docker driver, and the transition broke networking. I will never stop sharing this video, and I think it would enlighten you on some of the ins and outs of Docker networking.
  25. The problem with UniFi and ipvlan is that everything from Docker containers to services uses the same MAC address and is not logged correctly in the UniFi application. macvlan must be used for UniFi...
  26. I tried this today, but apparently you have to contact Nextcloud about it (sorry, but my "English is not the yellow from the egg"): does anyone know what he means by the post script?