
hawihoney

Everything posted by hawihoney

  1. jdownloader2 includes ffprobe and ffmpeg. I use it from within a Python script (use your own container name and input value):

     [...]
     jdownloader2_container = "JDownloader2"
     [...]
     ffprobe = f"docker exec {jdownloader2_container} /usr/bin/ffprobe"
     [...]
     command = f'{ffprobe} -v panic -hide_banner -of default=noprint_wrappers=0 -print_format flat -show_format "{input}" | grep creation_time'
     r = os.popen(command).read()
     print(r)
     [...]
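     A self-contained sketch of the same idea, using subprocess instead of os.popen and filtering the output in Python rather than shelling out to grep. The container name and media path in the commented call are placeholders, not values from a real system:

     ```python
     # Sketch of the docker-exec ffprobe call above; subprocess replaces
     # os.popen, and the grep step is done in Python.
     import subprocess

     def build_ffprobe_cmd(container, media_file):
         """Build the docker-exec ffprobe command line as an argument list."""
         return [
             "docker", "exec", container, "/usr/bin/ffprobe",
             "-v", "panic", "-hide_banner",
             "-of", "default=noprint_wrappers=0",
             "-print_format", "flat",
             "-show_format", media_file,
         ]

     def creation_time_lines(flat_output):
         """Keep only the creation_time entries from ffprobe's flat output."""
         return [l for l in flat_output.splitlines() if "creation_time" in l]

     # On a machine with docker and the container running (names are examples):
     # out = subprocess.run(build_ffprobe_cmd("JDownloader2", "/downloads/clip.mp4"),
     #                      capture_output=True, text=True).stdout
     # print("\n".join(creation_time_lines(out)))
     ```

     Passing the command as an argument list (instead of an interpolated shell string) also sidesteps quoting problems with file names containing spaces or quotes.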
  2. For about a year my Unraid combo has been crashing weekly. The problem is always the same: an Unraid license USB stick is reset and the server goes down immediately. I tried different USB devices, I tried different ports, I tried nearly every syslinux.cfg workaround I could find on the internet, without success. The failing USB devices are the Unraid license sticks. If they are not accessible, the server (the Unraid VM) is dead immediately. After these crashes the USB sticks don't show any errors and can be used again immediately:

     Jul 9 03:18:40 Tower kernel: usb 1-5: reset high-speed USB device number 2 using xhci_hcd
     Jul 9 03:18:46 Tower kernel: usb 1-5: device descriptor read/64, error -110
     Jul 9 03:19:02 Tower kernel: usb 1-5: device descriptor read/64, error -110
     Jul 9 03:19:02 Tower kernel: usb 1-5: reset high-speed USB device number 2 using xhci_hcd
     Jul 9 03:19:07 Tower kernel: usb 1-5: device descriptor read/64, error -110
     Jul 9 03:19:23 Tower kernel: usb 1-5: device descriptor read/64, error -110
     Jul 9 03:19:23 Tower kernel: usb 1-5: reset high-speed USB device number 2 using xhci_hcd
     Jul 9 03:19:29 Tower kernel: usb 1-5: device descriptor read/8, error -110
     Jul 9 03:19:44 Tower kernel: usb 1-5: device descriptor read/8, error -110
     Jul 9 03:19:44 Tower kernel: usb 1-5: reset high-speed USB device number 2 using xhci_hcd
     Jul 9 03:19:50 Tower kernel: usb 1-5: device descriptor read/8, error -110
     Jul 9 03:20:05 Tower kernel: usb 1-5: device descriptor read/8, error -110
     Jul 9 03:20:05 Tower kernel: usb 1-5: USB disconnect, device number 2
     Jul 9 03:20:05 Tower kernel: usb 1-5: new high-speed USB device number 8 using xhci_hcd
     Jul 9 03:20:11 Tower kernel: usb 1-5: device descriptor read/64, error -110
     Jul 9 03:20:27 Tower kernel: usb 1-5: device descriptor read/64, error -110
     Jul 9 03:20:27 Tower kernel: usb 1-5: new high-speed USB device number 9 using xhci_hcd
     Jul 9 03:20:32 Tower kernel: usb 1-5: device descriptor read/64, error -110
     Jul 9 03:20:48 Tower kernel: usb 1-5: device descriptor read/64, error -110
     Jul 9 03:20:48 Tower kernel: usb usb1-port5: attempt power cycle
     Jul 9 03:20:49 Tower kernel: usb 1-5: new high-speed USB device number 10 using xhci_hcd
     Jul 9 03:20:54 Tower kernel: usb 1-5: device descriptor read/8, error -110
     Jul 9 03:21:09 Tower kernel: usb 1-5: device descriptor read/8, error -110
     Jul 9 03:21:09 Tower kernel: usb 1-5: new high-speed USB device number 11 using xhci_hcd
     Jul 9 03:21:15 Tower kernel: usb 1-5: device descriptor read/8, error -110
     Jul 9 03:21:30 Tower kernel: usb 1-5: device descriptor read/8, error -110
     Jul 9 03:21:30 Tower kernel: usb usb1-port5: unable to enumerate USB device

     Now I found a new possible solution. This is my last and final idea before I drop the machine completely. So I'm looking for a way to either a.) block xhci_hcd completely and use ehci_hcd as the only USB driver on the machine, or b.) use a PCI card with a USB2 controller: https://bbs.archlinux.org/viewtopic.php?id=186617 Any help is highly appreciated. The diagnostics file is old, but the hardware has not changed and the server is currently down. I will need several hours to bring the server up again (lots of dead Unassigned Devices SMB mounts that each take minutes to discover that the SMB server does not respond). tower-diagnostics-20250115-1128.zip
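     For option a.), a minimal sketch of one common approach: adding `modprobe.blacklist=xhci_hcd` to the kernel `append` line in syslinux.cfg. This assumes the standard Unraid config location /boot/syslinux/syslinux.cfg (a local copy is used below for illustration) and that module blacklisting applies to your setup; back the real file up first. Note that the license stick itself would then have to sit on a port served by ehci_hcd, or it won't enumerate at all:

     ```python
     # Hypothetical sketch: append modprobe.blacklist=xhci_hcd to the
     # syslinux "append" line. Works on a local copy of the file here;
     # on a real Unraid system the path is /boot/syslinux/syslinux.cfg.
     from pathlib import Path

     cfg = Path("syslinux.cfg")  # placeholder path for this sketch
     cfg.write_text(
         "label Unraid OS\n"
         "  kernel /bzimage\n"
         "  append initrd=/bzroot\n"
     )

     patched = []
     for line in cfg.read_text().splitlines():
         # Only touch the append line, and only once.
         if line.strip().startswith("append") and "modprobe.blacklist" not in line:
             line += " modprobe.blacklist=xhci_hcd"
         patched.append(line)
     cfg.write_text("\n".join(patched) + "\n")
     print(cfg.read_text())
     ```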
  3. In January 2008, 1x Lime Technology MD-1500/LL was shipped to Germany with two 4.7 Pro license sticks. In 2009, 1x MD-1510/LL with two additional Pro license sticks was added. That adds up to 17 years.
  4. Is that actually a known issue? I don't have a UPS - that must have passed me by. Could you possibly point me to the relevant thread? That would be very kind. I've been struggling with my USB license sticks for some time. After a few weeks (4-8), the same message always appears and the machine is dead. By now I restart the server (one server!) every 4 weeks to preempt the phenomenon. After the crash the stick is immediately readable in any other device, no errors. I have already tried other sticks, other ports, USB3, USB2, extension cables in between - simply everything. Without success. This is the message. Could it match the UPS thread you mentioned?

     May 31 01:28:57 Tower kernel: usb 1-3: reset high-speed USB device number 2 using xhci_hcd
     May 31 01:29:02 Tower kernel: usb 1-3: device descriptor read/64, error -110
     May 31 01:29:18 Tower kernel: usb 1-3: device descriptor read/64, error -110
  5. This is what I have experienced for many, many years on different hardware - fast boot yes/no makes no difference here. E.g. a Supermicro X12SCA-F here with two attached NVMe drives. Reboot: 1x NVMe; hard boot: 2x NVMe. I live with it and never reboot. I even cut power for a minute - just to be sure.
  6. It's the parity check after a server crash and reboot. AFAIK, that one is always correcting - for a reason. This one SHOULD always correct sync errors. All drives are spun up during that correcting check, so reconstruct write for array writes wouldn't hurt IMHO. I get your point about "before is corrected, after will be corrected", and I know the technical ideas behind parity checks (the data disk is always considered correct). But during that parity check, writes may use wrong values - and now consider a crash during those writes ... IMHO that would make things worse. I vote for reconstruct writes during a correcting parity check.
  7. Stupid question: My server crashed last night. After the reboot, the usual parity check started. While the parity check is running I can write to the array. So far so good. 1. What I see is that the parity check uses all disks at approx. 35 MB/s. Fine with that during parallel writes. 2. But for writes to one data disk I see that only the parity disks and the array disk I'm writing to are being used. Isn't that dangerous? The writes seem to assume the parity disks are OK, but they are being checked at that very moment. So shouldn't writes during a parity check use "reconstruct write"? Read/modify/write is IMHO wrong during a running parity check. In the image below you can see what I mean: 34 MB/s is the parity check; 90 MB/s is the write to disk21 (not shown) and the parity disks.
  8. According to the help text for SMB/NFS, you mount the source by the name it has within UD. So in that case it's //UNRAID_BU/BACKUP.
  9. If you click on the help icon while on the Unassigned Devices tab, you will get all possible commands. Let UD mount/unmount - it knows exactly how to mount/unmount on Unraid. E.g., I use these commands from within User Scripts to mount/unmount an external SMB resource:

     #!/bin/bash
     #backgroundOnly=true
     #clearLog=true

     /usr/local/sbin/rc.unassigned mount //192.168.178.101/disk1

     ###

     /usr/local/sbin/rc.unassigned umount //192.168.178.101/disk1
  10. Just two small corrections/additions. I think you mean two expanders on the backplane; these are for redundancy, correct. Those Supermicro backplanes are called *EL2 instead of *EL1. The two ports (I think that's what you mean by "connections") on the previously mentioned SAS3-846EL1 can be used as input or output. You can even use two ports as input ports from one single HBA; the expanders will handle that. We use SAS2-846EL1 backplanes with 3 ports each. You can mix the ports as you like. I had them running with two input ports and one output port to another SC846 on Unraid. You can even output with breakout cables. Limitless possibilities. Cascading is no problem with Unraid. The only thing you need to figure out is how to use more than one backplane (48 disks, 72 disks, ...). There's only one array possible currently, so you need to use Unassigned Devices, ZFS pools, etc. on top of one array in a fully occupied SC846 case.
  11. Found a workaround: uninstall the packages fetched via un-get or other tools, then install the one from the community store:
  12. Today I installed 7.0 on our main Unraid server. The update was a smooth experience, with one exception. Our own software infrastructure is completely local and based entirely on Python. For Python we need pip and additional modules. This no longer works with 7.0 because of SSL errors. It worked for the last couple of Unraid releases without any problems. And as you can see below, the extra packages are rather old; nothing changed with them. Only Unraid 7.0 is new. The list of additional modules:

      root@Tower:/boot/extra# ls -l
      total 45496
      -rw------- 1 root root  3598592 Jul  4  2023 exiftool-12.64-x86_64-2cf.txz
      -rw------- 1 root root  1809344 Nov  3  2021 python-pip-21.3.1-x86_64-2.txz
      -rw------- 1 root root 18538524 Jan 16  2022 python3-3.9.10-x86_64-1.txz
      -rw------- 1 root root   290256 Mar 17  2022 rar-5.8.1-x86_64-1_SBo.txz
      -rw------- 1 root root 22115768 Jan 13 15:10 rclone-1.69.0-x86_64-1cf.txz
      -rw------- 1 root root   212716 May  6  2022 unrar-6.1.7-x86_64-1cf.txz

      Here are the SSL errors we get:

      root@Tower:/boot/extra# python3 -m pip install --upgrade requests
      WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
      WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/requests/
      WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/requests/
      WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/requests/
      WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/requests/
      WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/requests/
      Could not fetch URL https://pypi.org/simple/requests/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/requests/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
      ERROR: Could not find a version that satisfies the requirement requests (from versions: none)
      ERROR: No matching distribution found for requests
      WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
      Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping

      Going back to 6.12.*, the modules worked again. What is causing this? Thanks in advance.
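      A quick generic check (not Unraid-specific) that confirms this diagnosis on any affected box: pip's TLS warnings above mean the interpreter's ssl module itself fails to import, which happens when the Python build can't find the OpenSSL libraries it was compiled against.

      ```python
      # Minimal check whether this Python build has a working ssl module.
      # If this reports False, the pip TLS warnings above are expected.
      def ssl_available():
          try:
              import ssl  # fails when the _ssl C extension can't load
              return True
          except ImportError:
              return False

      print("ssl module available:", ssl_available())
      ```

      Run on the box with `python3 -c "import ssl"`; the traceback (if any) usually names the missing shared library, which hints at which OS library changed between Unraid releases.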
  13. It's just a small annoyance, and it takes a well-defined sequence of steps to see it: 1.) Start a parity check, let it run for a minute or so, then cancel it. Clear the current notification. Wait a day or so. 2.) Start a parity check and let it finish. --> The GUI notification shows a red icon even after a successful second run. The GUI notification takes the previously canceled parity check and its result into account. Looking into the history you will see both lines in the correct order: --> The text shows the correct result: --> At that point the text and the notification don't match. As I said, just a small annoyance. But I always get a heart attack seeing that red icon. Thanks.
  14. That's not what I'm trying to report. I open the Unraid GUI. This usually opens the Main tab. Within the Main tab, the Array tab usually opens. Now the UD tab opens instead - always.
  15. With "You receive 24 months of statutory warranty rights with the purchase" ... "Recertified" at Seagate means the drives are returns that were checked by Seagate and are sold as "new" at a discount because they meet an "as-new standard". That's how it was once explained to me by them. Over the last few years I have installed exactly 30 recertified drives from him in various servers. They are all still in use. Not a single failure so far. For hobby use this is unbeatably cheap. For professional use you take other drives anyway, and nobody there argues about a price that may be twice as high.
  16. Today, after installing the latest release of UD, I'm always sent to the UD tab (within the Main tab) instead of the Array tab. I'm using Page view=tabbed in Settings.
  17. I have been buying exactly this model from him for years. It hardly gets any cheaper: https://www.ebay.de/itm/185843432850
  18. Thanks for your fast answer. Hmm, I can reach everything from Google to IBM around the world. login.tailscale.com works, but not the website. I jumped - via Tailscale - onto my Unraid server at home and did a traceroute from DE. Same result from ZA on Android:

      root@Tower:~# traceroute tailscale.com
      traceroute to tailscale.com (76.76.21.21), 30 hops max, 60 byte packets
       1  fritz.box (192.168.178.1)  1.703 ms  2.283 ms  2.638 ms
       2  p3e9bf1dc.dip0.t-ipconnect.de (62.155.241.220)  15.594 ms  15.627 ms  15.760 ms
       3  d-ed5-i.D.DE.NET.DTAG.DE (217.5.110.90)  16.892 ms  16.884 ms  17.447 ms
       4  d-ed5-i.D.DE.NET.DTAG.DE (217.5.110.90)  17.398 ms  23.944 ms  24.080 ms
       5  * * *
       6  * * *
       7  * * *
       8  * * *
       9  * * *
      10  * * *
      11  * * *
      12  * * *
      13  * * *
      14  * * *
      15  * * *
      16  * * *
      17  * * *
      18  * * *
      19  * * *
      20  * * *
      21  * * *
      22  * * *
      23  * * *
      24  * * *
      25  * * *
      26  *^C
  19. Is it just me, or is Tailscale's website down? I can reach https://login.tailscale.com/ but none of the static parts like https://tailscale.com/kb/ - the result is "ERR_CONNECTION_TIMED_OUT". As I'm not in my own country currently, I additionally tried a VPN to different countries, but the result is the same. I thought my Tailscale installation might be the reason and stopped Tailscale - again without success. The last time it worked for me was two days ago. Thanks.
  20. @EDACerton: In one of the complaints mentioned above, somebody wrote that MagicDNS would lead to relayed connections. As a network noob without a clue, I changed the address of my Unraid server on my Android tablet from the full Tailscale domain to its IPv4 address. It seems a lot faster now, and switching the address back and forth between the two forms seems to confirm that. Pure luck, or simply impossible?
  21. I don't even know what this is. I installed the plugin on Unraid and the app on Android, and added the full Tailscale domain to my file manager on Android. That's all. No additional VPN, no NPM, etc.; bonding=yes, bridging=no, VLANs=no, macvlan on Docker and host access=yes. No open ports on the router, no proxies of any kind. My Tailscale account was still there, so I installed it again - it's really fast to set up. Copied one file --> 62 kB/s. Dropped everything, started my Fritzbox VPN, copied one single file at several MB/s. Searching the web I find lots of complaints about Tailscale and SMB. What's the magic?
  22. When I tried Tailscale some months ago on my Unraid server, copying files e.g. from an Android device over SMB was terribly, terribly slow, so I dropped it. Using those same two devices and SMB with other VPN solutions was way faster (MB/s instead of kB/s). Is this fixed now?
  23. I try to avoid that - that's my reason for asking. Three arrays were affected by the power outage. One array was working and is currently running its parity check. Two arrays were spun down completely; a parity check on them takes 35 hours each. I'm pretty sure that a spun-down array is safe, but only the developer knows.
  24. If the array is spun down after (in my case) 15 minutes, is it possible that its content is not synced (cache written to disks and applied to parity)? We had a power outage yesterday. I know that two DAS (Direct Attached Storage) boxes were spun down completely before this crash. Is it possible that parity is out of sync in that situation? I guess not - it must be safe. What do you think? Thanks.