Shantarius

Members · Posts: 36
Everything posted by Shantarius

  1. Hello, for a few days now the mover has been logging errors like these to the syslog:

     Dec 28 04:00:34 Avalon move: error: move, 392: No such file or directory (2): lstat: .jpg
     Dec 28 04:00:34 Avalon move: error: move, 392: No such file or directory (2): lstat: eyJpdSI6IjE5Y2NiODUyZmI5NmQzMmQxMTk1OWE1OTMxZTUxMDEwNjYwZDNjZmE2MjIzZTU1MDk5Mzc1MTFmMzg5YjM0Y2YiLCJ3IjozMDAsImgiOjE1MCwiZCI6MS41LCJjcyI6MCwiZiI6MH0.jpg
     Dec 28 04:00:34 Avalon move: error: move, 392: No such file or directory (2): lstat: 698CE5D95430B9436F0978533BDA592567988C92C2932B1F451E9415C515418CD9EEE167A9ADDFB367A373FA810E2AB3B330B8C7DF498C563B6F874C9CFDD5F5
     Dec 28 04:00:34 Avalon move: error: move, 392: No such file or directory (2): lstat: 68747470733a2f2f7472617669732d63692e6f72672f6b6d706d2f6e6f64656d63752d75706c6f616465722e7376673f6272616e63683d6d6173746572
     Dec 28 04:00:34 Avalon move: error: move, 392: No such file or directory (2): lstat: .dms
     Dec 28 04:00:34 Avalon move: error: move, 392: No such file or directory (2): lstat: UnkPcAIhUm5TawMwACFSP1diB2VRYAU1V2cAaFRvCTQJa1YzUikBeQBgC3RXOlBeAHdVdQxmUGJWJQcSXjMAclM_VXgKI1h6DmcCYVIpD2FSOg9rAmBSNFM-A3kAMlJYVy4undefined.gif
     Dec 28 04:00:34 Avalon move: error: move, 392: No such file or directory (2): lstat: .gif
     [... dozens more lstat errors of the same form, each with a different base64- or hex-style cached file name ...]

     Can anyone please help me solve this problem? Thank you! Chris
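For anyone puzzled by the strange file names in those errors: they look like cached thumbnail files from a browser profile or an image-proxy cache that ended up on a share the mover walks. A quick sketch (the two names are copied from the log above; the interpretation as cache descriptors is my assumption) shows that the base64-looking names decode to a small JSON descriptor and the hex-looking names decode to URLs:

```python
import base64

# Base64-looking file name from the log (caches often strip the "=" padding)
b64_name = "eyJpdSI6IjE5Y2NiODUyZmI5NmQzMmQxMTk1OWE1OTMxZTUxMDEwNjYwZDNjZmE2MjIzZTU1MDk5Mzc1MTFmMzg5YjM0Y2YiLCJ3IjozMDAsImgiOjE1MCwiZCI6MS41LCJjcyI6MCwiZiI6MH0"
padded = b64_name + "=" * (-len(b64_name) % 4)  # restore the base64 padding
descriptor = base64.b64decode(padded).decode()
print(descriptor)  # a JSON blob of the form {"iu": "...", "w": 300, "h": 150, ...}

# Hex-looking file name from the log: plain ASCII encoded as hex
hex_name = "68747470733a2f2f7472617669732d63692e6f72672f6b6d706d2f6e6f64656d63752d75706c6f616465722e7376673f6272616e63683d6d6173746572"
print(bytes.fromhex(hex_name).decode())
# https://travis-ci.org/kmpm/nodemcu-uploader.svg?branch=master
```

So the errors most likely come from a browser/application cache directory living on the array; the files vanish between the mover's directory scan and its lstat() call.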
  2. Hello, my syslog contains more than 15,000 entries like this:

     Oct 17 05:52:51 Avalon bunker: error: BLAKE3 hash key mismatch, /mnt/disk1/Backup_zpool/zpool/nextcloud_data/20231007_031906/nextcloud_data/christian/files/Obsidian_Vault/.git/objects/ba/edc24a7db2d2f8cc372d12a6cfaa5a7b3b9206 is corrupted

     On October 10th the same happened for disk3. What happened here and how can I fix it? Thanks!
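For context: bunker (part of the Dynamix File Integrity tooling, as far as I know) stores a checksum per file and reports a mismatch when the file's current contents hash differently, meaning the file changed outside bunker's knowledge or is silently corrupt. The check itself amounts to the following minimal sketch, with `hashlib.sha256` standing in for BLAKE3 (which is not in the Python stdlib); `file_digest` and `verify` are illustrative names, not bunker's API:

```python
import hashlib

def file_digest(path: str) -> str:
    """Hash a file in 1 MiB chunks (sha256 as a stand-in for BLAKE3)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, stored_digest: str) -> bool:
    """False here is what bunker reports as a 'hash key mismatch'."""
    return file_digest(path) == stored_digest
```

Worth noting: loose objects under `.git/objects` are written once and never modified, so a mismatch on such a file points at corruption (or a bad stored hash) rather than a legitimate edit.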
  3. Does this only work with Unraid >= 6.12?
  4. Hello, what is meant by this spinning symbol?
  5. I think the problem was that the Unraid Samba service and the Time Machine Samba service had the same workgroup name. I have changed the workgroup name for the container and now everything is fine.
  6. At Hetzner you can get 1 TB of cloud storage for €3.81. You can upload your data via WebDAV, SFTP, SSH and so on, and, depending on the software you use, encrypted as well. Hetzner hosts in Germany and Finland, your choice, so yes, in Germany. I use the Duplicati Docker container for backing up my data; it lets you upload everything fully encrypted. Duplicati is also available for Windows, macOS and separately for Linux, which means that in an emergency you can download the uploaded data again on any OS with the matching Duplicati binary. On top of that, Duplicati gives you a decent web GUI, so you don't have to mess around with scripts. Backing up 7 TB of data to the cloud doesn't seem sensible to me, though. I only back up the most important things to the cloud, such as my DMS and other documents. Other payload data goes onto two external hard drives that are rotated regularly and stored in the basement in a waterproof, fireproof box.
  7. Hi, the Time Machine Docker container spams my Unraid syslog. How can I prevent this? Which log level should I use in the container settings (it is currently set to 1)?

     Feb 25 16:05:49 Avalon nmbd[64984]: *****
     Feb 25 16:05:49 Avalon nmbd[64984]: Samba name server AVALON is now a local master browser for workgroup WORKGROUP on subnet 192.168.2.126
     Feb 25 16:05:49 Avalon nmbd[64984]: *****
     Feb 25 16:17:27 Avalon nmbd[64984]: [2023/02/25 16:17:27.047518, 0] ../../source3/nmbd/nmbd_incomingdgrams.c:302(process_local_master_announce)
     Feb 25 16:17:27 Avalon nmbd[64984]: process_local_master_announce: Server TIMEMACHINE at IP 192.168.2.139 is announcing itself as a local master browser for workgroup WORKGROUP and we think we are master. Forcing election.
     Feb 25 16:17:27 Avalon nmbd[64984]: [2023/02/25 16:17:27.047676, 0] ../../source3/nmbd/nmbd_become_lmb.c:150(unbecome_local_master_success)
     Feb 25 16:17:27 Avalon nmbd[64984]: Samba name server AVALON has stopped being a local master browser for workgroup WORKGROUP on subnet 192.168.2.126
     Feb 25 16:17:47 Avalon nmbd[64984]: [2023/02/25 16:17:47.344230, 0] ../../source3/nmbd/nmbd_become_lmb.c:397(become_local_master_stage2)
     Feb 25 16:17:47 Avalon nmbd[64984]: Samba name server AVALON is now a local master browser for workgroup WORKGROUP on subnet 192.168.2.126
     [... the same announce / forced election / unbecome / become cycle repeats at 16:29 and 16:41 ...]

     Thank you!
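In case it helps anyone hitting the same log spam: those messages are the two nmbd instances fighting over the local master browser role and forcing elections against each other. One common way to stop the ping-pong (an assumption on my side, not tested with this specific container) is to tell one of the two Samba instances to never win the election in its smb.conf:

```
[global]
   local master = no
   preferred master = no
   os level = 0
```

With `local master = no` that nmbd will not even participate in browser elections, so the become/unbecome cycle should stop.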
  8. A question about LanCache Prefill: I have deselected one game in SteamPrefill select-apps. Does LanCache automatically delete the cached data for that game? And what happens when the drive that LanCache stores its cache data on is full? Thank you!
  9. For that you need a Bitcoin wallet, which then has to be configured in the miner, right? Or where else would the Bitcoins end up? I know absolutely nothing about this 😱
  10. Wow, cool. Thank you! When will the next release come out? Can you explain what the PHP warning means? 🙂
  11. Hi, for a few days now UA Preclear has been producing PHP warnings in the syslog. My Unraid version is 6.9.2 and the UA Preclear plugin is the newest version.

      Dec 22 19:45:00 Avalon rc.diskinfo[22433]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
      [... the identical warning repeats three times per run, e.g. again at 19:45:15 from rc.diskinfo[28229] ...]

      What do these PHP warnings mean? Thank you! Christian
  12. Hi, I bought an Eaton Ellipse Pro 650. Measured while switched on with no load attached, using a Shelly with Tasmota firmware. The draw fluctuates a lot, but on average the device consumes 14 W, around 330 Wh over 24 h. At 30 cents/kWh that works out to about €36 per year in idle costs for the Eaton. Regards, Chris
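The idle-cost arithmetic above checks out; as a quick sketch of the calculation:

```python
avg_watts = 14                                # measured average idle draw
kwh_per_year = avg_watts * 24 * 365 / 1000    # ~122.6 kWh per year
cost_eur = kwh_per_year * 0.30                # at 0.30 EUR/kWh -> ~36.8 EUR/year
print(round(kwh_per_year, 1), round(cost_eur, 2))
```

(14 W continuous is ~336 Wh per day, close to the measured 330 Wh; the yearly cost lands right around the €36 quoted.)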
  13. Yes, I use the My Servers plugin. Is this error related to that plugin?
  14. Hi, for a few days now I have been getting one segfault per day in libvirt.so:

      Oct 9 01:16:16 Avalon kernel: unraid-api[31635]: segfault at 30 ip 00001463a22bb059 sp 000014639bd1de10 error 4 in libvirt.so.0.6005.0[1463a2248000+1f6000]
      Oct 8 22:47:50 Avalon kernel: unraid-api[46409]: segfault at 30 ip 000014f389aa8059 sp 000014f3732cde10 error 4 in libvirt.so.0.6005.0[14f389a35000+1f6000]
      Oct 7 12:25:50 Avalon kernel: unraid-api[61384]: segfault at 30 ip 0000152d07c16059 sp 0000152d05523e10 error 4 in libvirt.so.0.6005.0[152d07ba3000+1f6000]
      Oct 6 05:27:40 Avalon kernel: unraid-api[25216]: segfault at 14eb14023e ip 000014eb2e2bb061 sp 000014eb1fb1ce10 error 4 in libvirt.so.0.6005.0[14eb2e248000+1f6000]
      Oct 5 09:04:43 Avalon kernel: unraid-api[21661]: segfault at 30 ip 000014e961a94059 sp 000014e95b2b9e10 error 4 in libvirt.so.0.6005.0[14e961a21000+1f6000]
      Oct 4 22:13:27 Avalon kernel: unraid-api[55478]: segfault at 60 ip 0000148e26601059 sp 0000148e1fbfce10 error 4 in libvirt.so.0.6005.0[148e2658e000+1f6000]

      I am currently on Unraid version 6.9.2. Does anyone have an idea what this error means? Thank you!
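For what it's worth, the `error 4` in these kernel lines is the x86 page-fault error code, a small bitfield. A sketch of how to read it (these are the standard bit meanings from x86 fault reporting, nothing Unraid-specific; the function name is mine):

```python
def decode_pagefault(code: int) -> dict:
    """Decode the x86 page-fault error code shown in kernel segfault lines."""
    return {
        "page_present": bool(code & 1),   # 0 = fault on a non-present page
        "write":        bool(code & 2),   # 0 = it was a read access
        "user_mode":    bool(code & 4),   # 1 = fault happened in user mode
        "instr_fetch":  bool(code & 16),  # 1 = fault on an instruction fetch
    }

print(decode_pagefault(4))
# error 4 = a user-mode read of an unmapped address; "segfault at 30" suggests
# dereferencing a field at offset 0x30 of a NULL-ish pointer inside libvirt.so
```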
  15. Hi, after a looooong time without errors, today I found this error (only one message) in the syslog:

      Jul 10 07:56:18 Avalon kernel: unraid-api[45682]: segfault at 371fed7016 ip 000014b5cc11a73c sp 000014b5b60d2e58 error 4 in libgobject-2.0.so.0.6600.2[14b5cc0ef000+33000]

      Can anyone say what it is and whether it is critical? I am using Unraid version 6.9.2 (2021-04-07). Thank you!
  16. No, 6.9.2. Is this the reason?
  17. Hi, I cannot find the LXC plugin in the CA app. 😞
  18. Shantarius

    JitsiMeet

    Can't you migrate the Ubuntu VM to Unraid?
  19. Hi, have you managed to monitor Unraid Docker containers with the Checkmk Raw Docker container and the checkmk agent as described here? I have installed the mk_docker.py script under /usr/lib/check_mk_agent/plugins, but I cannot make it executable with chmod +x, and Checkmk shows no Docker monitoring for the Unraid host. Python 2 and Python 3 (3.9) are installed via the NerdTools. Can anyone give me the nudge I need to get this working? Thanks and regards
  20. Hi mgutt, thank you for your answer. This works for me!

      source_paths=(
        "/mnt/zpool/Docker"
        "/mnt/zpool/VMs"
        "/mnt/zpool/nextcloud_data"
      )
      backup_path="/mnt/user/Backup_zpool"
      ...
      rsync -av --exclude-from="/boot/config/scripts/exclude.list" --stats --delete --link-dest="${backup_path}/${last_backup}" "${source_path}" "${backup_path}/.${new_backup}"

      excludes.list:
      Debian11_105/
      Ubuntu21_102/
      Win10_106/
  21. Hi mgutt, I have changed the rsync command in the script (V0.6) to

      rsync -av --stats --exclude-from="/boot/config/scripts/exclude.list" --delete --link-dest="${backup_path}/${last_backup}" "${source_path}" "${backup_path}/.${new_backup}"

      The file exclude.list contains this:

      /mnt/zpool/VMs/Debian11_105/
      /mnt/zpool/VMs/Ubuntu21_102/
      /mnt/zpool/VMs/Win10_106/
      /mnt/zpool/Docker/Dockerimage/

      But the script doesn't ignore the directories listed in exclude.list. Can you say why? Thanks!
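In case it helps anyone hitting the same thing: rsync matches `--exclude-from` patterns against paths relative to the transfer root, not against absolute filesystem paths, so entries like `/mnt/zpool/VMs/Debian11_105/` never match when the source being transferred is `/mnt/zpool/VMs`. This is my reading of rsync's general filter-rule behaviour (see `man rsync`, FILTER RULES); an exclude list in the relative form should match:

```
Debian11_105/
Ubuntu21_102/
Win10_106/
Dockerimage/
```

The trailing `/` restricts each pattern to directories of that name anywhere under the sources.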
  22. Hi mgutt, thank you for the superb script. I have a feature request: would it be possible to have an option for excluding files or paths? Best regards, Chris
  23. Wow, this is great and exactly what I need. Now I can use a Debian VM as an (AirPrint) print server. If the printer is off, the print job waits in the CUPS scheduler until I turn the printer on and it auto-connects to the VM 🙂 Very nice!
  24. Hi, I pass a USB printer through to a Debian VM with this plugin. I have checked the box in the VM settings to pass the printer through, and the printer is available in the VM:

      root@debian11-103:~# lsusb
      Bus 001 Device 004: ID 03f0:132a HP, Inc HP LaserJet 200 color M251n
      Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd QEMU USB Tablet
      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      root@debian11-103:~#

      If I turn the printer off, it is no longer available in the VM. If I then turn it back on, it does not reconnect to the VM automatically; I have to add the printer to the VM manually in the VM tab. How can I make the printer pass through to the VM automatically after turning it on? Thx, Chris
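One approach that may help (an assumption on my side, not verified with this plugin): attach the printer in the VM's libvirt XML as a USB hostdev identified by vendor/product ID, so libvirt resolves the device by ID rather than by a specific bus/device number. Using the IDs from the lsusb output above:

```xml
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x03f0'/>
    <product id='0x132a'/>
  </source>
</hostdev>
```

Note this only guarantees attachment when the domain starts; for a device that disappears and reappears while the VM is running, a `virsh attach-device` after powering the printer on (e.g. from a udev-triggered script) may still be needed.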
  25. Hi, thank you for your answer! I removed the LSI controller and now use the SSDs, with brand-new SATA cables, on the mainboard's SATA II controller. After removing the LSI controller I also replaced an old SSD with a brand-new 870 EVO, and the errors occur with it again. The errors still come up after a reboot with the new SSD, but overnight, with Dockers and VMs running, there were no errors. Today, after turning off NCQ, I ran an experiment to capture the status:

      1. Before the reboot:

      root@Avalon:/mnt/zpool/Docker/Telegraf# zpool status
        pool: zpool
       state: ONLINE
        scan: scrub repaired 0B in 00:06:59 with 0 errors on Sun Nov 7 17:05:41 2021
      config:
        NAME                                            STATE   READ WRITE CKSUM
        zpool                                           ONLINE     0     0     0
          mirror-0                                      ONLINE     0     0     0
            ata-Samsung_SSD_850_EVO_1TB_S2RFNX0HA28280F ONLINE     0     0     0
            ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M ONLINE     0     0     0
      errors: No known data errors

      SMART attributes of /dev/sdg (the new 870 EVO):
        #    Attribute Name             Flag    Value  Worst  Threshold  Type      Updated  Failed  Raw Value
        5    Reallocated sector count   0x0033  100    100    010        Pre-fail  Always   Never   0
        9    Power on hours             0x0032  099    099    000        Old age   Always   Never   52 (2d, 4h)
        12   Power cycle count          0x0032  099    099    000        Old age   Always   Never   10
        177  Wear leveling count        0x0013  099    099    000        Pre-fail  Always   Never   2
        179  Used rsvd block count tot  0x0013  100    100    010        Pre-fail  Always   Never   0
        181  Program fail count total   0x0032  100    100    010        Old age   Always   Never   0
        182  Erase fail count total     0x0032  100    100    010        Old age   Always   Never   0
        183  Runtime bad block          0x0013  100    100    010        Pre-fail  Always   Never   0
        187  Reported uncorrect         0x0032  100    100    000        Old age   Always   Never   0
        190  Airflow temperature cel    0x0032  074    062    000        Old age   Always   Never   26
        195  Hardware ECC recovered     0x001a  200    200    000        Old age   Always   Never   0
        199  UDMA CRC error count       0x003e  100    100    000        Old age   Always   Never   0
        235  Unknown attribute          0x0012  099    099    000        Old age   Always   Never   5
        241  Total lbas written         0x0032  099    099    000        Old age   Always   Never   347926153

      2. After the reboot, before starting the array: zpool status is identical (no errors), and the SMART attributes of /dev/sdg are unchanged except 190 Airflow temperature cel (value 078, raw 22) and 241 Total lbas written (raw 347946967).

      3. After the reboot, a few minutes after the array had started:

      root@Avalon:/mnt/zpool# zpool status
        pool: zpool
       state: ONLINE
      status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.
      action: Determine if the device needs to be replaced, and clear the errors using 'zpool clear' or replace the device with 'zpool replace'.
         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
        scan: scrub repaired 0B in 00:06:59 with 0 errors on Sun Nov 7 17:05:41 2021
      config:
        NAME                                            STATE   READ WRITE CKSUM
        zpool                                           ONLINE     0     0     0
          mirror-0                                      ONLINE     0     0     0
            ata-Samsung_SSD_850_EVO_1TB_S2RFNX0HA28280F ONLINE     0     0     0
            ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M ONLINE     0    19     0
      errors: No known data errors

      SMART attributes of /dev/sdg: unchanged except 241 Total lbas written (raw 348067881).

      The syslog for that moment:

      root@Avalon:/mnt/zpool# cat /var/log/syslog | grep 16:16:37
      Nov 8 16:16:37 Avalon kernel: ata5.00: exception Emask 0x0 SAct 0x1e018 SErr 0x0 action 0x6 frozen
      Nov 8 16:16:37 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
      Nov 8 16:16:37 Avalon kernel: ata5.00: cmd 61/04:18:06:6a:00/00:00:0c:00:00/40 tag 3 ncq dma 2048 out
      Nov 8 16:16:37 Avalon kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
      Nov 8 16:16:37 Avalon kernel: ata5.00: status: { DRDY }
      [... four more WRITE FPDMA QUEUED timeouts (tags 4, 13, 14, 15) and one SEND FPDMA QUEUED timeout (tag 16), all with the same pattern ...]
      Nov 8 16:16:37 Avalon kernel: ata5: hard resetting link
      Nov 8 16:16:37 Avalon kernel: ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      Nov 8 16:16:37 Avalon kernel: ata5.00: supports DRM functions and may not be fully accessible
      Nov 8 16:16:37 Avalon kernel: ata5.00: configured for UDMA/133
      Nov 8 16:16:37 Avalon kernel: sd 6:0:0:0: [sdg] tag#3 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=30s
      Nov 8 16:16:37 Avalon kernel: sd 6:0:0:0: [sdg] tag#3 Sense Key : 0x5 [current]
      Nov 8 16:16:37 Avalon kernel: sd 6:0:0:0: [sdg] tag#3 ASC=0x21 ASCQ=0x4
      Nov 8 16:16:37 Avalon kernel: sd 6:0:0:0: [sdg] tag#3 CDB: opcode=0x2a 2a 00 0c 00 6a 06 00 00 04 00
      Nov 8 16:16:37 Avalon kernel: blk_update_request: I/O error, dev sdg, sector 201353734 op 0x1:(WRITE) flags 0x700 phys_seg 2 prio class 0
      Nov 8 16:16:37 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=103092063232 size=2048 flags=40080c80
      [... the same sense data and I/O error for tag#4, sector 201353815, offset 103092104704, size 27648 ...]
      Nov 8 16:16:37 Avalon kernel: ata5: EH complete
      Nov 8 16:16:37 Avalon kernel: ata5.00: Enabling discard_zeroes_data

      4. After starting the Docker and VM services (Dockers and VMs live on the zpool): no new errors.

      5. After a zpool scrub (scrub repaired 0B in 00:05:29 with 0 errors on Mon Nov 8 16:34:53 2021): no new errors in the syslog, but zpool status still shows the 19 write errors on the 870 EVO. SMART attributes of /dev/sdg: unchanged except 190 Airflow temperature cel (value 073, raw 27) and 241 Total lbas written (raw 349169583).

      6. After zpool clear:

      root@Avalon:/mnt/zpool# zpool status -v zpool
        pool: zpool
       state: ONLINE
        scan: scrub repaired 0B in 00:05:14 with 0 errors on Mon Nov 8 16:42:34 2021
      config:
        NAME                                            STATE   READ WRITE CKSUM
        zpool                                           ONLINE     0     0     0
          mirror-0                                      ONLINE     0     0     0
            ata-Samsung_SSD_850_EVO_1TB_S2RFNX0HA28280F ONLINE     0     0     0
            ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M ONLINE     0     0     0
      errors: No known data errors

      What is meant by "ata5: hard resetting link" in the syslog? Is it possible that the partition in "vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5" is damaged? Can I take the 870 EVO offline, delete the partitions on that SSD, and replace/add it back into the mirror?
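For reference, since the experiment above was run with NCQ off: on Unraid, NCQ can be disabled for a single ATA device with the `libata.force` kernel parameter in the flash drive's syslinux.cfg append line. A sketch of that line (an assumption based on the kernel's documented libata.force options; `5.00` matches the `ata5.00` device in the log above, and the rest of your append line may differ):

```
append libata.force=5.00:noncq initrd=/bzroot
```

Alternatively, `echo 1 > /sys/block/sdg/device/queue_depth` reduces the queue depth at runtime without a reboot, which has a similar effect.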