Romany

Members
  • Posts: 11

  1. When I try to start my Node-RED docker it exits with this error:

         17 Oct 02:25:03 - [info] Starting flows
         17 Oct 02:25:03 - [red] Uncaught Exception:
         17 Oct 02:25:03 - [error] TypeError: RED.settings.functionGlobalContext.get is not a function
             at map (/data/node_modules/node-red-contrib-actionflows/actionflows/actionflows.js:270:51)
             at EventEmitter.runtimeMap (/data/node_modules/node-red-contrib-actionflows/actionflows/actionflows.js:593:5)
             at EventEmitter.emit (node:events:527:28)
             at Object.start [as startFlows] (/usr/src/node-red/node_modules/@node-red/runtime/lib/flows/index.js:405:12)

     I'm pretty sure this started when I manually updated several of my docker images last week (I have them set for manual updates, and it had been a year or more since anything was updated). Any suggestions on where to look, or how to proceed?
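     A stack trace like this (a contrib node calling a runtime API that no longer exists) usually means node-red-contrib-actionflows is not compatible with the newer Node-RED image. One possible approach, sketched here assuming a container named "nodered" with the standard /data user directory (adjust names and paths to your setup), is to update or remove the offending node and restart:

     ```shell
     # Check which version of the node is installed (container name
     # "nodered" and the /data path are assumptions):
     docker exec nodered bash -c "cd /data && npm ls node-red-contrib-actionflows"

     # Try updating the node to the latest published version first:
     docker exec nodered bash -c "cd /data && npm install node-red-contrib-actionflows@latest"

     # If no compatible version exists, remove it so the flows can start:
     docker exec nodered bash -c "cd /data && npm remove node-red-contrib-actionflows"
     docker restart nodered
     ```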
  2. Not sure that's correct. The only place I see to exclude specific dockers is on the first page (BACKUP/SETTINGS), and I think that only applies to backup. But I went through and excluded everything except one docker, then went to the RESTORE page and started the restore process. During a normal backup the script shuts down the dockers it is going to back up, and during this restore test the script shut down all of my dockers. To me that implies it also restored all of the dockers in the archive... Hoping that squid sees this question and imposes his imprimatur to remove all doubt. ...REK
  3. I have the same question: can you restore a specific docker and leave the others alone? I just had an issue with my unifi docker, which I run only when I need it. I could not reach the web GUI, and the console showed the DB trying to start and failing. I finally went to my oldest backup created by this script, un-tarred it to a temp directory, and used cp -a -r to copy the unifi directory over to the appdata directory; that fixed my issue. If I had to do a "global" restore with this script, I would have lost a lot of recent changes in my other dockers. I can work around this limitation, if indeed there is no way in the script to do that... ..Romany
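     For anyone else wanting to do this, tar can extract a single directory from an archive, so a per-container restore is possible without touching anything else. A minimal sketch with throwaway data (the /tmp/demo paths are illustrative; substitute your real backup archive and your actual share, e.g. /mnt/user/appdata):

     ```shell
     # Build a demo "backup" containing two app folders (stand-ins for
     # real appdata directories):
     mkdir -p /tmp/demo/appdata/unifi /tmp/demo/appdata/otherapp
     echo "unifi config" > /tmp/demo/appdata/unifi/config.db
     tar -czf /tmp/demo/backup.tar.gz -C /tmp/demo appdata

     # Restore ONLY the unifi directory into a temp location, leaving
     # every other app in the archive untouched:
     mkdir -p /tmp/demo/restore
     tar -xzf /tmp/demo/backup.tar.gz -C /tmp/demo/restore appdata/unifi

     # Copy it over the live directory, preserving ownership/permissions
     # (on a real system, stop the container first):
     cp -a /tmp/demo/restore/appdata/unifi /tmp/demo/appdata/
     ```
     
     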
  4. So I have an ESPHome docker installed on Unraid with an IP address of 192.168.1.16, and a Zoneminder (ZM) docker installed with an address of 192.168.4.20. I have two networks, obviously; the 192.168.4.20 address is on VL400. Both networks are defined on my pfSense firewall, one for the 192.168.1.0/24 network and the other for VL400 (192.168.4.0/24). The default gateway for Unraid is 192.168.1.1, and the Unraid VL400 address is 192.168.4.30.

     ESPHome needs to be able to reach ZM's address (192.168.4.20) but cannot. After lots of digging and packet capturing, this is what's happening: ESPHome sends a packet to ZM, it stays inside Unraid, and the Unraid 192.168.4.30 interface sends out ARPs asking for the MAC address of 192.168.4.20. I see those ARP requests in my packet capture on the firewall (normal ARP broadcasts), but no response. Unraid never gets a MAC response from the ZM docker to complete the communication. The Unraid interface IP and the ZM docker IP are apparently in different "virtual" spaces.

     If I had the ZM docker on my 192.168.1 network there would be no problem, but I want to keep all those Chinese cameras on their own isolated layer 2 network, with ZM on it too, so that video traffic stays off my firewall. I have come up with a temporary solution involving double NAT on my firewall (probably permanent, unless someone much more knowledgeable about dockers than I am can provide an "all you need to do" solution). This is more of an academic question than anything else; having a more elegant solution is not on my bucket list... Thanks. Romany
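     What's described here matches a known property of Docker's macvlan driver: the host's own interfaces are deliberately isolated from macvlan containers on the same parent interface, so the host's ARP requests never reach the container's virtual MAC. A commonly cited workaround, sketched here under the assumption that the VL400 parent interface is br0.400 and that 192.168.4.31 is unused (check your interface names with `ip link`), is to add a macvlan "shim" interface on the Unraid host and route the container's IP through it:

     ```shell
     # Create a macvlan interface on the host, attached to the same
     # parent interface the ZM container's network uses (name assumed):
     ip link add macvlan-shim link br0.400 type macvlan mode bridge

     # Give it an otherwise-unused host address on the VL400 network:
     ip addr add 192.168.4.31/32 dev macvlan-shim
     ip link set macvlan-shim up

     # Route traffic destined for the ZM container via the shim, so
     # host-originated (and inter-network routed) packets can reach it:
     ip route add 192.168.4.20/32 dev macvlan-shim
     ```

     Note these commands do not persist across reboots; on Unraid they would typically go in a startup script.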
  5. Looks like that is going to work; it's about halfway done rebuilding. I wish there were a simpler way to get rid of that disabled status than having to do this rebuild operation for a drive that I'm 99% sure has no issue. Hopefully this is a rare event. I appreciate the quick help. I'll sleep better knowing that all of my data drives are back to green again :-0
  6. I'm having a similar issue here... I have a drive that is now marked as disabled. I don't think the drive is bad; the issue started after I shut down and was installing another SATA drive. I have unplugged the new drive and shut down/restarted just to get back to the previous state. How can I re-enable this drive? This link does not work: https://wiki.unraid.net/UnRAID_6/Storage_Management#Rebuilding_a_drive_onto_itself nor does this link: https://wiki.unraid.net/UnRAID_6/Storage_Management#Replacing_disks Thanks for any help!....
  7. Will do. Thanks for the response...
  8. For some reason my cache drives become un-mountable after a reboot. It seems to happen after I've been shut down for some time making changes on my server. Case in point: I installed a new fan, powered up, and the cache drive was un-mountable. The error message on the primary cache drive is "super_num_devices 1 mismatch with num_devices 1 found here". If you look at /var/log/syslog during the array mount process you get this:

         Apr 9 23:04:44 Tower emhttpd: shcmd (100): mkdir -p /mnt/cache
         Apr 9 23:04:44 Tower emhttpd: /mnt/cache uuid: 54f4fb0c-8643-4e1b-83de-a1566868fc69
         Apr 9 23:04:44 Tower emhttpd: /mnt/cache TotDevices: 2
         Apr 9 23:04:44 Tower emhttpd: /mnt/cache NumDevices: 2
         Apr 9 23:04:44 Tower emhttpd: /mnt/cache NumFound: 2
         Apr 9 23:04:44 Tower emhttpd: /mnt/cache NumMissing: 0
         Apr 9 23:04:44 Tower emhttpd: /mnt/cache NumMisplaced: 0
         Apr 9 23:04:44 Tower emhttpd: /mnt/cache NumExtra: 0
         Apr 9 23:04:44 Tower emhttpd: /mnt/cache LuksState: 0
         Apr 9 23:04:44 Tower emhttpd: shcmd (101): mount -t btrfs -o noatime,space_cache=v2,discard=async -U 54f4fb0c-8643-4e1b-83de-a1566868fc69 /mnt/cache
         Apr 9 23:04:44 Tower kernel: BTRFS info (device sdf1): turning on async discard
         Apr 9 23:04:44 Tower kernel: BTRFS info (device sdf1): using free space tree
         Apr 9 23:04:44 Tower kernel: BTRFS info (device sdf1): has skinny extents
         Apr 9 23:04:44 Tower root: mount: /mnt/cache: wrong fs type, bad option, bad superblock on /dev/sdg1, missing codepage or helper program, or other error.
         Apr 9 23:04:44 Tower kernel: BTRFS error (device sdf1): super_num_devices 1 mismatch with num_devices 1 found here
         Apr 9 23:04:44 Tower kernel: BTRFS error (device sdf1): failed to read chunk tree: -22
         Apr 9 23:04:44 Tower kernel: BTRFS error (device sdf1): open_ctree failed
         Apr 9 23:04:44 Tower emhttpd: shcmd (101): exit status: 32
         Apr 9 23:04:44 Tower emhttpd: /mnt/cache mount error: No file system
         Apr 9 23:04:44 Tower emhttpd: shcmd (102): umount /mnt/cache
         Apr 9 23:04:44 Tower root: umount: /mnt/cache: not mounted.
         Apr 9 23:04:44 Tower emhttpd: shcmd (102): exit status: 32
         Apr 9 23:04:44 Tower emhttpd: shcmd (103): rmdir /mnt/cache

     I have two cache drives, and they both show up as un-mountable when this problem appears. One thing I tried tonight was disconnecting one of the cache drives (the primary) and booting up: same issue, but not seeing the primary drive, which is expected. Shut down, re-connected the drive, booted up: same error, but now with a warning that the primary cache drive will be overwritten. Rebooted: the warning went away, but I was back to the un-mounted problem. Shut down, swapped the SATA cables on the cache drives, booted back up, started the array, and now the cache drives both mount.

     Just a note: I started to have this problem AFTER I moved my drives to a newer server. Right after I moved my drives and saw that un-mountable error, I shut down, moved everything back to the old server, powered up, started the array, and the cache drives mounted normally. Shut down, moved to the new system, booted up: cache drives un-mountable. So far I've been able to get the cache drives to mount by trying various things, but there is no consistency to what works. Tonight when I swapped the SATA cables it came up, but I don't know if that did it or the stars aligned under a full moon. Anyway, if anyone has any suggestions... ...REK
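     Not a definitive fix, but when a multi-device btrfs pool fails to mount with a device-count mismatch, a usual first diagnostic step is to check what the kernel knows about each pool member and re-run the device scan (which is normally what lets multi-device pools assemble at mount time). A sketch, using the sdf1/sdg1 device names from the log above (yours may enumerate differently after a hardware move):

     ```shell
     # Show every btrfs filesystem and member device the kernel sees:
     btrfs filesystem show

     # Re-register all btrfs member devices with the kernel; harmless to
     # run, and often required before a multi-device pool will mount:
     btrfs device scan

     # Dump the superblock on each pool member and compare num_devices
     # between them; a stale superblock on one device can cause exactly
     # this kind of mismatch:
     btrfs inspect-internal dump-super /dev/sdf1 | grep num_devices
     btrfs inspect-internal dump-super /dev/sdg1 | grep num_devices
     ```

     Device names changing between servers (or after re-cabling) would also be consistent with the symptom that swapping SATA cables sometimes "fixes" it.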
  9. So I'm running UR on a 2012-ish Dell PC, and so far performance is fine for what I'm currently using it for. But my main stand-alone Ubuntu workstation is a much newer machine with lots more memory/CPU resources. I think the process to move UR to this newer workstation is straightforward enough, but is it possible to create an Ubuntu VM and pass the local USB (keyboard/mouse) and HDMI to it, so that when I turn on my monitor I see an Ubuntu logon screen and essentially replicate my stand-alone experience? That complicates matters a lot: managing UR from one of its own VMs is one concern, and I would like the VM to have its own IP address on the local network, with no NAT or PAT. Also, if there are issues, I need a stand-alone computer available. I worked for a manager who left me a bit of IT wisdom that I try not to forget: just because something is possible does not mean you should do it. Looking for comments from folks: can I do this, and should I do this? Thanks!
  10. Nope, I think I can follow those bread crumbs :-0... Thanks for the quick response, Jonathanm. I've been a Linux fan for many years and it's nice to see that UR is built on this foundation. And not only built on it, but *accessible*! ...Romany
  11. So I've been using UR for a couple of weeks now and I'm ready to pull the trigger and purchase it. The current USB boot fob that I'm using is pretty old, and I want to move to a newer one. And I would like to keep my current configuration. So what's the best way to do that? I'm sure that question has been asked before, and I'm hoping you have a good canned answer for me... ;-) ....Romany
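     For reference, the supported path is the webGUI's flash backup plus Limetech's USB Creator tool, but at the file level the move is essentially a straight copy. A rough sketch, assuming the new stick is formatted FAT32 with the volume label UNRAID and mounted at a hypothetical /mnt/newflash:

     ```shell
     # Copy the entire contents of the current boot flash to the new stick
     # (the old flash is mounted at /boot on a running Unraid system):
     cp -a /boot/. /mnt/newflash/

     # Make the new stick bootable using the script that ships on the flash:
     bash /mnt/newflash/make_bootable_linux
     ```

     After booting from the new stick, the license key still has to be transferred to the new flash drive's GUID through the registration page in the webGUI.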