vw-kombi

Everything posted by vw-kombi

  1. I have been following all this and am ready to jump on board too. This comment may have answered my question, however - I am currently on linuxserver/unifi-controller:7.3.83 and this new container only provides 7.5.187-unraid and 8.0.7-unraid. I had assumed I would have to change my current controller to one of these first, as the backup and restore would be a large jump to a newer release. So is it OK to jump to the 7.5.187-unraid release with my 'older' hardware - namely:
     USG3
     US-8-60W
     USW-Flex-Mini
     UAP-AC-Lite
     UAP-AC-LR
     EDIT - Never mind, I did a snapshot and took the plunge on the 'upgrade' on the linuxserver.io container, changing the tag to linuxserver/unifi-controller:7.5.187-ls216 (a rough docker run equivalent of the tag pin is sketched below). All my stuff seems fine - so far I have only noticed that my 'LAN' has changed to 'Default' everywhere, but all is working well. I can now do a direct backup / restore to your new container @peteasking - thanks heaps for all this effort... or should I go the 8.0.7 route... while feeling reckless!
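     For reference, pinning the tag rather than running 'latest' is just a change to the image reference - a rough docker run equivalent using the usual linuxserver defaults (container name, ports and appdata path are examples, adjust to your own template):
        # sketch only - pin the image tag instead of letting it float
        docker run -d --name=unifi-controller \
          -p 8443:8443 -p 8080:8080 -p 3478:3478/udp -p 10001:10001/udp \
          -v /mnt/user/appdata/unifi-controller:/config \
          linuxserver/unifi-controller:7.5.187-ls216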
  2. Turns out I have a disk read error! User shares were not there, so it could not cd into /mnt/user... best I fix that and retry!
  3. I am having trouble with a python script. The error is:
     /usr/local/emhttp/plugins/user.scripts/startScript.sh: line 6: /tmp/user.scripts/tmpScripts/Sync-Emby-Play-States/script: cannot execute: required file not found
     The script is:
     #!/bin/python3
     python3 /mnt/user/appdata/syncseen/syncseen.py -shost="http://192.168.1.200:8096" -skey="xxxxxxxxxxxxxxxxxxx" -dhost="http://192.168.1.6:8096" -dkey="xxxxxxxxxxxxxxxxxxxxx"
     Python 3 is installed with NerdTools, and which python3 shows OK: /usr/bin/python3. The .py file being called is there, and executable. I have been going around and around for a few hours on this! My first time with python! Please help - it has to be something stupid. I tried a cd before the python3 command as well - no difference.
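     Worth checking: 'cannot execute: required file not found' usually means the interpreter named in the shebang does not exist at that path - here the script starts with #!/bin/python3, while NerdTools installs python at /usr/bin/python3. A sketch of the wrapper with the shebang swapped for bash and the python path made explicit (same paths and placeholder keys as above, untested):
        #!/bin/bash
        # user script wrapper - call the NerdTools python3 explicitly
        /usr/bin/python3 /mnt/user/appdata/syncseen/syncseen.py \
          -shost="http://192.168.1.200:8096" -skey="xxxxxxxxxxxxxxxxxxx" \
          -dhost="http://192.168.1.6:8096" -dkey="xxxxxxxxxxxxxxxxxxxxx"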
  4. There is not really much to it - but as I said, I doubt mine would work due to not having the capability to rewrite DNS to my IP address, as my router can't do it and I don't have AdGuard or Pi-hole running on the network. I may try again, but I can elaborate on the instructions a bit for you for unraid - you saw the instructions here: https://github.com/linuxserver/docker-mods/tree/swag-dashboard
     1 - Set an environment variable DOCKER_MODS=linuxserver/mods:swag-dashboard. Edit the container in unraid and add linuxserver/mods:swag-dashboard to the list of DOCKER_MODS. I use a number of docker mods already for other things (I added mine ages ago for other config things I wanted), so I get to add it with | to separate them. Maybe watch the SWAG YouTube tutorial from IBRACORP for a detailed setup of this. Mine looks like this, as I use cloudflare, maxmind and crowdsec also - note the extra bit on the end for this swag dashboard after the extra |:
     linuxserver/mods:universal-docker|linuxserver/mods:swag-cloudflare-real-ip|linuxserver/mods:swag-maxmind|ghcr.io/linuxserver/mods:swag-crowdsec|linuxserver/mods:swag-dashboard
     2 - Add a mapping of 81:81 to swag's docker run command or compose. This means editing the swag docker container in unraid: scroll to the bottom, click 'Add another Path, Port, Variable, Label or Device', change the config type to Port, then set the name and the host port to 81, so you then have that at the end of the config.
     That's it for the docker container edits to activate all this - the container will restart (or do it manually), then check its logs and you will see it loading the mod. A rough docker run equivalent is sketched below.
     The last bit is the DNS rewrite, which is done so that when you enter dashboard.yourdomain.com in your browser it goes to the IP address of your swag container (in my case I have a dedicated IP address for my container for firewalling etc). This is the bit I have not (cannot) done, so I can't actually test any of this. Hope to get that last bit done at some stage.
     Edit - I have however tested locally with 192.168.1.5:81 and this brings up the dashboard. .5 is the IP address of my swag container. As I don't expect to open this out to the world, this is good enough for me.
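     For anyone on compose or plain docker run, roughly the same thing looks like this - a sketch only: the domain, validation method and appdata path are examples, not my exact setup:
        # swag container with the docker mods chain and the extra dashboard port
        docker run -d --name=swag \
          --cap-add=NET_ADMIN \
          -e PUID=99 -e PGID=100 \
          -e URL=yourdomain.com -e VALIDATION=dns -e DNSPLUGIN=cloudflare \
          -e DOCKER_MODS='linuxserver/mods:universal-docker|linuxserver/mods:swag-cloudflare-real-ip|linuxserver/mods:swag-maxmind|ghcr.io/linuxserver/mods:swag-crowdsec|linuxserver/mods:swag-dashboard' \
          -p 443:443 -p 80:80 \
          -p 81:81 \
          -v /mnt/user/appdata/swag:/config \
          lscr.io/linuxserver/swag
     The single quotes around DOCKER_MODS matter on the command line, otherwise the shell treats the | characters as pipes.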
  5. I did understand all that - but I still could not get it to work - likely as I have no way of doing the dns rewrite on my systems here.
  6. That's a very detailed write-up. Can I assume the restore from 7.3.83 to 7.5.187 on the new container was all good? And also, if they are not planning on putting older tags up, what about the people that may have grey-area equipment that may or may not be supported in future? I personally have these, and I think the UAPs may be suspect:
     USG-3P
     US-8-60W
     UAP-AC-Lite
     UAP-AC-LR
  7. I have noticed a number of these in the logs - clumps of them, maybe 15 at a time, then a pause for a while, then another 15 or so. The system is doing the rsyncs again like last time - no crash as yet and everything else is working... Two clumps pasted below:
     Oct 27 15:38:22 Tower kernel: traps: lsof[15390] general protection fault ip:14fe0f630c6e sp:52fe2ead5c941bbe error:0 in libc-2.37.so[14fe0f618000+169000]
     Oct 27 15:39:02 Tower kernel: traps: lsof[23356] general protection fault ip:14793a8c7c6e sp:f5252e2edffbeb3f error:0 in libc-2.37.so[14793a8af000+169000]
     Oct 27 15:39:26 Tower kernel: traps: lsof[24873] general protection fault ip:1538ac7aec6e sp:bca271f778cc9d8d error:0 in libc-2.37.so[1538ac796000+169000]
     Oct 27 15:39:51 Tower kernel: traps: lsof[26380] general protection fault ip:1533ff6b9c6e sp:25b8e8e78600cbcf error:0 in libc-2.37.so[1533ff6a1000+169000]
     Oct 27 15:41:21 Tower kernel: traps: lsof[29359] general protection fault ip:14abcc667c6e sp:f7f04ef0471efaa9 error:0 in libc-2.37.so[14abcc64f000+169000]
     Oct 27 15:41:47 Tower kernel: traps: lsof[1665] general protection fault ip:14f4093fac6e sp:198ee115554f2232 error:0 in libc-2.37.so[14f4093e2000+169000]
     Oct 27 15:42:30 Tower kernel: traps: lsof[4662] general protection fault ip:149b32377c6e sp:ceeff2fcc35dcf2b error:0 in libc-2.37.so[149b3235f000+169000]
     Oct 27 15:44:03 Tower kernel: traps: lsof[6140] general protection fault ip:1487ea7f3c6e sp:6f1b39db114c9da7 error:0 in libc-2.37.so[1487ea7db000+169000]
     Oct 27 15:44:47 Tower kernel: traps: lsof[12942] general protection fault ip:147926118c6e sp:e566344652cff3fb error:0 in libc-2.37.so[147926100000+169000]
     Oct 27 15:45:31 Tower kernel: traps: lsof[15936] general protection fault ip:14e06c08ec6e sp:4c75f0471c4c9fea error:0 in libc-2.37.so[14e06c076000+169000]
     Oct 27 15:46:14 Tower kernel: traps: lsof[19200] general protection fault ip:1527ca5c8c6e sp:ef1465dcb4de1f4d error:0 in libc-2.37.so[1527ca5b0000+169000]
     Oct 27 15:46:57 Tower kernel: traps: lsof[20938] general protection fault ip:145ea32eec6e sp:653f41d81623efa9 error:0 in libc-2.37.so[145ea32d6000+169000]
     Oct 27 15:47:49 Tower kernel: traps: lsof[24810] general protection fault ip:15242fdc1c6e sp:56c13a7d53eaac43 error:0 in libc-2.37.so[15242fda9000+169000]
     Oct 27 15:50:18 Tower kernel: traps: lsof[30473] general protection fault ip:1461fc9f4c6e sp:c117f4c551cb7e error:0 in libc-2.37.so[1461fc9dc000+169000]
     Oct 27 15:51:16 Tower kernel: traps: lsof[5573] general protection fault ip:147919e84c6e sp:6e339ab5bdab03ee error:0 in libc-2.37.so[147919e6c000+169000]
     Oct 27 15:52:00 Tower kernel: traps: lsof[9219] general protection fault ip:14e487a9ac6e sp:a8059bae2d51587e error:0 in libc-2.37.so[14e487a82000+169000]
     Oct 27 15:52:22 Tower kernel: traps: lsof[10727] general protection fault ip:148321e37c6e sp:f1c94dc8e5e22aec error:0 in libc-2.37.so[148321e1f000+169000]
     Oct 27 15:54:54 Tower kernel: traps: lsof[16110] general protection fault ip:151006b04c6e sp:d6f7622d4e993dd6 error:0 in libc-2.37.so[151006aec000+169000]
     Oct 27 15:55:20 Tower kernel: traps: lsof[26284] general protection fault ip:154840c28c6e sp:41da5dd875d1ac6b error:0 in libc-2.37.so[154840c10000+169000]
     Oct 27 15:56:16 Tower kernel: traps: lsof[31571] general protection fault ip:1523cf1f2c6e sp:73d676b0c1778b3a error:0 in libc-2.37.so[1523cf1da000+169000]
     Oct 27 15:56:44 Tower kernel: traps: lsof[684] general protection fault ip:1551a3bcdc6e sp:b31c58c4c867d216 error:0 in libc-2.37.so[1551a3bb5000+169000]
     Oct 27 15:57:23 Tower kernel: traps: lsof[3278] general protection fault ip:150e1cb3fc6e sp:cbef30c4acfc0e22 error:0 in libc-2.37.so[150e1cb27000+169000]
     Oct 27 15:58:03 Tower kernel: traps: lsof[6391] general protection fault ip:14d81fd07c6e sp:cdfe0d5979c341e error:0 in libc-2.37.so[14d81fcef000+169000]
     Oct 27 15:58:55 Tower kernel: traps: lsof[7627] general protection fault ip:150a65202c6e sp:eec9137dd2afd675 error:0 in libc-2.37.so[150a651ea000+169000]
     Oct 27 16:00:08 Tower kernel: traps: lsof[14720] general protection fault ip:14e3e188fc6e sp:aaa8c4d759954e6c error:0 in libc-2.37.so[14e3e1877000+169000]
     Oct 27 16:00:29 Tower kernel: traps: lsof[16202] general protection fault ip:154365fd5c6e sp:82d218140209ca0c error:0 in libc-2.37.so[154365fbd000+169000]
     Oct 27 16:01:11 Tower kernel: traps: lsof[19168] general protection fault ip:149b305c8c6e sp:a3243d3932911df5 error:0 in libc-2.37.so[149b305b0000+169000]
     Oct 27 16:01:59 Tower kernel: traps: lsof[22106] general protection fault ip:14c7ec4b8c6e sp:1e2bcc8c545cb6ee error:0 in libc-2.37.so[14c7ec4a0000+169000]
     Oct 27 16:02:21 Tower kernel: traps: lsof[23942] general protection fault ip:150dc59cac6e sp:4d6b5566f2b44454 error:0 in libc-2.37.so[150dc59b2000+169000]
     Oct 27 16:02:42 Tower kernel: traps: lsof[25290] general protection fault ip:14a4fcee6c6e sp:42d43831cf3dd72e error:0 in libc-2.37.so[14a4fcece000+169000]
     Oct 27 16:03:43 Tower kernel: traps: lsof[29210] general protection fault ip:14e6f4f1bc6e sp:cb03235b2e9f561f error:0 in libc-2.37.so[14e6f4f03000+169000]
     Oct 27 16:04:27 Tower kernel: traps: lsof[30636] general protection fault ip:15206266cc6e sp:61e7e2c2c16bfc70 error:0 in libc-2.37.so[152062654000+169000]
     Oct 27 16:05:12 Tower kernel: traps: lsof[871] general protection fault ip:149c54310c6e sp:1dcfe5510bef5013 error:0 in libc-2.37.so[149c542f8000+169000]
     Oct 27 16:05:47 Tower kernel: traps: lsof[3923] general protection fault ip:1460434cbc6e sp:cb12b9539397f9b7 error:0 in libc-2.37.so[1460434b3000+169000]
     Oct 27 16:06:33 Tower kernel: traps: lsof[6060] general protection fault ip:14ffb0f98c6e sp:d44c7a69c480ab65 error:0 in libc-2.37.so[14ffb0f80000+169000]
     Oct 27 16:07:45 Tower kernel: traps: lsof[12365] general protection fault ip:14eaa5870c6e sp:50be47950c82368f error:0 in libc-2.37.so[14eaa5858000+169000]
     Oct 27 16:08:28 Tower kernel: traps: lsof[15064] general protection fault ip:14744f306c6e sp:65dd5665b35b3368 error:0 in libc-2.37.so[14744f2ee000+169000]
     Oct 27 16:11:22 Tower kernel: traps: lsof[19207] general protection fault ip:153ea838cc6e sp:cef419e641ed3b2e error:0 in libc-2.37.so[153ea8374000+169000]
     Oct 27 16:12:25 Tower kernel: traps: lsof[28525] general protection fault ip:1522e4afcc6e sp:b93f3427996293e6 error:0 in libc-2.37.so[1522e4ae4000+169000]
     Oct 27 16:13:07 Tower kernel: traps: lsof[1165] general protection fault ip:14eb36bd9c6e sp:eff374249a4532fd error:0 in libc-2.37.so[14eb36bc1000+169000]
     Oct 27 16:13:47 Tower kernel: traps: lsof[3665] general protection fault ip:15187a5a5c6e sp:64299eb53771987b error:0 in libc-2.37.so[15187a58d000+169000]
     Oct 27 16:14:17 Tower kernel: traps: lsof[5715] general protection fault ip:151a316f8c6e sp:8b6a34a0f6e88cbd error:0 in libc-2.37.so[151a316e0000+169000]
     Oct 27 16:18:50 Tower kernel: traps: lsof[11116] general protection fault ip:14840c10ec6e sp:db0348f1c73f4c55 error:0 in libc-2.37.so[14840c0f6000+169000]
     Oct 27 16:19:45 Tower kernel: traps: lsof[27782] general protection fault ip:148028526c6e sp:ff2d0010758e8ed9 error:0 in libc-2.37.so[14802850e000+169000]
     Oct 27 16:20:10 Tower kernel: traps: lsof[29404] general protection fault ip:1530c465ec6e sp:4eda0597bf3b2500 error:0 in libc-2.37.so[1530c4646000+169000]
     Oct 27 16:20:34 Tower kernel: traps: lsof[31058] general protection fault ip:1542b008fc6e sp:64c840ee150f5cda error:0 in libc-2.37.so[1542b0077000+169000]
     Oct 27 16:22:46 Tower kernel: traps: lsof[4364] general protection fault ip:1524fcee4c6e sp:d81873a74a5a896c error:0 in libc-2.37.so[1524fcecc000+169000]
     Oct 27 16:23:32 Tower kernel: traps: lsof[11036] general protection fault ip:14a9becf9c6e sp:6c7033dcbc057aa error:0 in libc-2.37.so[14a9bece1000+169000]
  8. I have been on unraid for almost 10 years I think, and it's been pretty bulletproof. I upgrade three months or so after each major release, to let it bed in. I read about all the scares with 6.12.x online, but a week ago I bit the bullet as .4 had the macvlan fixes. I followed the instructions for the macvlan settings needed, as I use UniFi. And today, while doing my usual offline backups (rsync to an unassigned disk connected to SATA, and rsync to a backup unraid server onsite), I had my first ever kernel panic. I have now turned on mirror syslog to flash - is that the only thing I need to do to have the logs saved? Is that bad to do, due to the flash wearing out over time?
  9. I just got bitten by this, as I am planning on replacing the SSD cache with an NVMe. I calculated it would take days to move the 195GB. I even stopped the mover and deleted a shed-load of containers I was keeping around, which got the size down to 125GB. After another 3 hours and it not even doing 10GB, I cancelled it. The Dynamix file explorer is now in use doing the same thing. This is a great 'hack' that I wish I had known about yesterday. Just an update - I was waiting 6 hours for a mover run that would have taken days to copy to the array as part of SpaceInvaderOne's ZFS cache drive conversion process, and instead, with the new 6.12 file explorer, I have done the move to an array disk in 10 minutes, then re-created the pool as ZFS (1 minute), then the move back to the new cache drive in another 10 minutes (a rough command-line equivalent is sketched below). I wasted half a day with the mover pain. I understand it's all the small files, but still - whatever is being done under the covers, if the file explorer can do it in a fraction of the time, then something should be done to improve it.
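     For anyone wanting to do the same from a terminal instead of the file explorer, roughly this - a sketch only, assuming a single cache pool at /mnt/cache and a scratch folder on disk1 (adjust names, and stop docker/VMs first):
        # copy cache contents to an array disk, reformat the pool, copy back
        rsync -avh --progress /mnt/cache/ /mnt/disk1/cache_backup/
        # (stop the array, change the cache pool filesystem to zfs in the GUI, start and format)
        rsync -avh --progress /mnt/disk1/cache_backup/ /mnt/cache/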
  10. Spoke too soon - while the benchmark looked like it was fast again, the unBALANCE transfer trailed off to next to nothing again. I will move my spare HBA card into place - once I find it in the garage!!!!!
  11. Solved somehow by moving the 2-port SATA card from one PCIe slot to another...
  12. So I ran the DiskSpeed container and let it run. It seems like the parity disk is the culprit here - not the new disk (or the parity disk's speed was affected by adding the new disk). The parity speed was only 1.1MB/s to 5.5MB/s max. No errors in the unraid logs again. I will swap some cables around and see what happens.
  13. This is my backup unraid server. It was populated with rsync over LAN and ran nicely at 100-110MB/s, so all disk IO was good then. I just added an 8TB drive - ran a preclear first, no errors. Then added it to the array and formatted it. Now, trying to move data to it from the other disks (a long process, as a total of 15TB will be needed), the speed it is doing is 0.6MB/s!!!! No errors in the logs, no SMART errors, everything seems to be operating normally. Where can I look to fault-find this?
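     In case it helps anyone with similar symptoms, these are the sort of quick isolation tests that separate a slow drive from a slow controller/parity path - a sketch only, where sdX and diskN are placeholders for the new drive and its array slot:
        # raw sequential read speed of the drive itself, bypassing the array layer
        hdparm -tT /dev/sdX
        # direct write through the array path to that disk (parity overhead expected, but not 0.6MB/s)
        dd if=/dev/zero of=/mnt/diskN/testfile bs=1M count=2048 oflag=direct status=progress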
  14. Never mind - it's working now... You have to be at the right place in the process - you can't just tick it any old time!
  15. Just put unBALANCE on my new backup unraid server - I can't untick the dry run, nothing happens. Tried on my live server and it is not tickable there either. What's going on?
  16. I followed SpaceInvaderOne's amazingly helpful vid on converting the cache drive to ZFS. I have done this on my backup unraid server initially, while documenting and planning for my main server. It got me wondering about the appdata backup process - as these are datasets now, does that affect the appdata backup plugin? Does it still work? Can it be used for restores etc? I know I can take snapshots now, but my question still stands. Thanks.
  17. As with many things, when something breaks you learn a newer, better way of doing it. I have now added a virtio share in the VM XML; that is mounted at startup on the VM, and I have an rsync command running in cron on the VM. So instead of unraid using an NFS mount to 'pull' the data, I now have it 'pushed' from the VM instead. A better way of doing this, I think (a rough sketch of the VM side is below). A new power monitor is coming to solve the UPS comms to Home Assistant.
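     Roughly what the VM side looks like - a sketch only, assuming the share is exposed in the VM XML under the hypothetical tag 'cctv' and the recordings live in /home/shinobi/videos (adjust tag, paths and schedule to suit):
        # inside the VM: mount the virtio (9p) share by its tag
        mkdir -p /mnt/unraid_cctv
        mount -t 9p -o trans=virtio,version=9p2000.L cctv /mnt/unraid_cctv
        # crontab entry on the VM - push the latest recordings across every 5 minutes
        # */5 * * * *  rsync -a /home/shinobi/videos/ /mnt/unraid_cctv/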
  18. Spent too much time on these issues and no one has anything to help with them. I will just buy another wifi power-monitoring plug and connect the UPS to that - $15 to sort this. For the VM running Shinobi CCTV, I will just increase its disk size and increase the stored videos to match what was being pulled to the unraid share.
  19. Further to this, I noticed in Home Assistant that the APC UPS power usage is not there. The logs said timeout error. I checked the history and this also started when I updated unraid to 6.12.4. Something in the network config has broken VMs talking to the unraid host. I am not going to roll back just for this, but I would like some help if anyone knows what in the network config caused this. The changes for macvlan etc were noted in my first post above. These devices are all on the same LAN, so it is not a firewall issue. Is it something to do with the required change in the instructions to set enable bridging to no?
  20. I just bought a Crucial P3 2TB NVMe. This will later replace my SSD cache disk, but I thought I would add it to Unassigned Devices first and do some writes to it to check it is all working OK. It showed up after array start, so I clicked format - BTRFS. It all said the usual things, but it is not mountable and there is no mount button. It has the circle/line icon and UDEV next to it. Logs:
     Oct 19 09:41:03 Tower unassigned.devices: Formatting disk '/dev/nvme0n1' with 'btrfs' filesystem.
     Oct 19 09:41:03 Tower unassigned.devices: Format drive command: /sbin/mkfs.btrfs -f '/dev/nvme0n1p1' 2>&1
     Oct 19 09:41:06 Tower kernel: BTRFS: device fsid 1b423e96-8024-4ff6-84f7-afab8c5cac56 devid 1 transid 6 /dev/nvme0n1p1 scanned by mkfs.btrfs (16493)
     Oct 19 09:41:07 Tower unassigned.devices: Format disk '/dev/nvme0n1' with 'btrfs' filesystem:
     btrfs-progs v6.3.3
     See https://btrfs.readthedocs.io for more information.
     Performing full device TRIM /dev/nvme0n1p1 (1.82TiB) ...
     NOTE: several default settings have changed in version 5.15, please make sure this does not affect your deployments:
     - DUP for metadata (-m dup)
     - enabled no-holes (-O no-holes)
     - enabled free-space-tree (-R free-space-tree)
     Label: (null)
     UUID: 1b423e96-8024-4ff6-84f7-afab8c5cac56
     Node size: 16384
     Sector size: 4096
     Filesystem size: 1.82TiB
     Block group profiles:
       Data: single 8.00MiB
       Metadata: DUP 1.00GiB
       System: DUP 8.00MiB
     SSD detected: yes
     Zoned device: no
     Incompat features: extref, skinny-metadata, no-holes, free-space-tree
     Runtime features: free-space-tree
     Checksum: crc32c
     Number of devices: 1
     Devices:
       ID SIZE PATH
       1 1.82TiB /dev/nvme0n1
     Oct 19 09:41:10 Tower unassigned.devices: Reloading disk '/dev/nvme0n1' partition table.
     Oct 19 09:41:10 Tower kernel: nvme0n1: p1
     Oct 19 09:41:10 Tower unassigned.devices: Reload partition table result: /dev/nvme0n1: re-reading partition table
     Image attached.
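     For completeness, the manual checks from a terminal would be something like this - a sketch only, same device names as in the log above:
        # confirm the btrfs signature is on the partition and that it mounts by hand
        lsblk -f /dev/nvme0n1
        blkid /dev/nvme0n1p1        # should show TYPE="btrfs" and the UUID from the format log
        mkdir -p /mnt/test && mount /dev/nvme0n1p1 /mnt/test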
  21. Rebuild of parity is complete - 0 errors. Uploading diags now. If what you say is true, and that removed 8TB drive is actually OK, then it will go into my backup unraid server with all the other drives replaced over time. I will add it in and preclear it as a check. tower-diagnostics-20231019-0714.zip
  22. It’s still rebuilding parity from the new config. Will do tomorrow.
  23. Yep, I moved it to a higher slot, so different cables; the new disk is in the same place it has always been.
  24. I'm still struggling with this. It was all fine before the upgrade to 6.12.4. The unraid server can't ping 192.168.1.185 (the Shinobi VM) and the Shinobi VM can't ping the unraid server (on 192.168.1.7). Every other device can ping Shinobi, just not the unraid server. There is no active firewall on this VM - nothing has changed except the unraid OS, with the networking changes made as per the notes:
     a - Settings > Network Settings > eth0 > Enable Bridging = No
     b - Settings > Docker > Host access to custom networks = Enabled (was on already)
     c - Settings > Docker > custom network on interface eth0 (i.e. make sure eth0 is configured for the custom network, not eth1)
     As I use a UD NFS connection and a user script every 5 mins to rsync off the latest CCTV files, this is a few days behind now. Strangely, my other VM - the Home Assistant VM - can be pinged from the unraid console.