Posts posted by spants
-
Sorry
- I had forgotten about adding MQTT passwords, I will add it shortly.
- I am also planning some updates to Node-Red nodes
Tony
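For reference, if the container uses Mosquitto (an assumption on my part - the post doesn't name the broker), password support is typically a two-line config change plus a password file:

```
# mosquitto.conf additions (sketch - assumes the container runs Mosquitto)
allow_anonymous false
password_file /config/passwd
```

The password file itself would be generated with `mosquitto_passwd -c /config/passwd <username>`, then the broker restarted.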
-
ah - I cleared my browser cache and it now works!
Thanks for looking
-
on 6.1-rc4?
I also see this in my syslog:
Aug 15 20:08:52 Tower emhttp: Deprecated relative cmd path: docker logs --tail=350 -f spotweb
Aug 15 20:08:52 Tower emhttp: /usr/local/emhttp/docker logs --tail=350 -f spotweb 2>&1
Aug 15 20:08:59 Tower emhttp: Deprecated relative cmd path: docker logs --tail=350 -f spotweb
Aug 15 20:08:59 Tower emhttp: /usr/local/emhttp/docker logs --tail=350 -f spotweb 2>&1
Aug 15 20:09:03 Tower emhttp: Deprecated relative cmd path: docker logs --tail=350 -f Sonarr
Aug 15 20:09:03 Tower emhttp: /usr/local/emhttp/docker logs --tail=350 -f Sonarr 2>&1
-
I get this error when I click any log icons on the docker tab, is it just me?
sh: /usr/local/emhttp/docker: No such file or directory
-
I'm using a HH5 with unRAID... I'd like to replace it with something with reliable wifi sometime soon.
I see it in the HH5 menu and port forwarding works fine (OpenVPN etc).
-
Just to add, I am having exactly the same issues!
Tony
Here is my log...
*** Running /etc/my_init.d/001-fix-the-time.sh...
Current default time zone: 'America/Los_Angeles'
Local time is now: Wed Jun 3 01:45:59 PDT 2015.
Universal Time is now: Wed Jun 3 08:45:59 UTC 2015.
*** Running /etc/my_init.d/002-set-the-config.sh...
config.js exists in /config, may require editing
config.py exists in /config, may require editing
groups.json exists in /config, may require editing
*** Running /etc/my_init.d/003-postgres-initialise.sh...
initialising empty databases in /data
completed initialisation
2015-06-03 01:46:06,085 CRIT Supervisor running as root (no user in config file)
2015-06-03 01:46:06,088 INFO supervisord started with pid 55
2015-06-03 01:46:07,091 INFO spawned: 'postgres' with pid 59
2015-06-03 01:46:07,103 INFO exited: postgres (exit status 2; not expected)
2015-06-03 01:46:08,105 INFO spawned: 'postgres' with pid 60
2015-06-03 01:46:08,117 INFO exited: postgres (exit status 2; not expected)
2015-06-03 01:46:10,121 INFO spawned: 'postgres' with pid 61
2015-06-03 01:46:10,133 INFO exited: postgres (exit status 2; not expected)
setting up pynab user and database
2015-06-03 01:46:13,138 INFO spawned: 'postgres' with pid 87
2015-06-03 01:46:13,150 INFO exited: postgres (exit status 2; not expected)
2015-06-03 01:46:14,151 INFO gave up: postgres entered FATAL state, too many start retries too quickly
pynab user and database created
building initial nzb import
THIS WILL TAKE SOME TIME, DO NOT STOP THE DOCKER
IMPORT COMPLETED
*** Running /etc/my_init.d/004-set-the-groups.sh...
Testing whether database is ready
database appears ready, proceeding
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1033, in _do_get
return self._pool.get(wait, self._timeout)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/queue.py", line 145, in get
raise Empty
sqlalchemy.util.queue.Empty
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
return fn()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 376, in connect
return _ConnectionFairy._checkout(self)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 708, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 480, in checkout
rec = pool._do_get()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1049, in _do_get
self._dec_overflow()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 182, in reraise
raise value
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1046, in _do_get
return self._create_connection()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 323, in _create_connection
return _ConnectionRecord(self)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 449, in __init__
self.connection = self.__connect()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 602, in __connect
connection = self.__pool._invoke_creator(self)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/strategies.py", line 97, in connect
return dialect.connect(*cargs, **cparams)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/default.py", line 377, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/usr/local/lib/python3.4/dist-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
psycopg2.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/pynab/pynab.py", line 258, in <module>
group_list()
File "/opt/pynab/pynab.py", line 177, in group_list
groups = pynab.groupctl.group_list()
File "/opt/pynab/pynab/groupctl.py", line 72, in group_list
for group in groups:
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2515, in __iter__
return self._execute_and_instances(context)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2528, in _execute_and_instances
close_with_result=True)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2519, in _connection_from_session
**kw)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 882, in connection
execution_options=execution_options)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 887, in _connection_for_bind
engine, execution_options)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 334, in _connection_for_bind
conn = bind.contextual_connect()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2034, in contextual_connect
self._wrap_pool_connect(self.pool.connect, None),
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2073, in _wrap_pool_connect
e, dialect, self)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1403, in _handle_dbapi_exception_noconnection
exc_info
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 188, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=exc_value)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 181, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
return fn()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 376, in connect
return _ConnectionFairy._checkout(self)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 708, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 480, in checkout
rec = pool._do_get()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1049, in _do_get
self._dec_overflow()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 182, in reraise
raise value
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1046, in _do_get
return self._create_connection()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 323, in _create_connection
return _ConnectionRecord(self)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 449, in __init__
self.connection = self.__connect()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 602, in __connect
connection = self.__pool._invoke_creator(self)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/strategies.py", line 97, in connect
return dialect.connect(*cargs, **cparams)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/default.py", line 377, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/usr/local/lib/python3.4/dist-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
-
Tried both..
I am suspecting a flaky drive (although no errors in unRAID), so I am preclearing a couple to see if another drive works better.
I did try a different user share and it seemed to create the sparse disk bundle ok.
Will try again tomorrow when the drives are finished clearing :-)
-
Interesting: if I create a sparsebundle on a different SMB server's shared drive and copy it to my unRAID... it starts to back up.
So it seems as though the problem is due to unRAID creating these files.....
-
I'm using Unraid 6RCx and having all sorts of issues backing up my Macbook.
My preferred method is to use CCC with smb by creating a sparseimage disk file:
1) I have created a public share
2) CCC set to create the sparseimage disk (acknowledge the 4GB limit if FAT32)
3) It starts to create a file macbook.sparseimage (showing 876.6MB in Finder)
4) After 5 mins of trying to create it, it gives an error that the file cannot be created and deletes the file (Destination filesystem not responding)
With Time Machine it would backup (AFP switched on) but when I tried to look at files created in the past, it would take forever to find anything.
I'm using OSX 10.10.3
Anyone got this working?
Tony
-
Hmm, all of my shares show the yellow triangle meaning: Some or all files are on unprotected storage....
Parity is ok and I'm not using a cache drive. Any ideas?
-
Glad it's working! I did a similar thing with my Tado heating system.
Why not use emoncms as the back end for the energy data? Actually, that may be a great docker to build......
@Dallas.toth, which nodes would you like?
-
could you create a share, limited to one disk, that has a minimum free space:
e.g. a 1TB disk with minimum free space of 800GB
Other shares could also use the drive so it isn't wasted space?
(or is this an unRAID 6 only option?)
-
Guacamole fonts....
On an early unRAID 6 beta everything looked great in ssh sessions, but now that I'm using 6b15 the fonts for ssh are not correct - see attachment. I have tried changing the size; it makes the font bigger but the gaps are still there.
Do I set a default terminal type for ssh somewhere, or is it in Guacamole? The Guacamole boards talk about installing other fonts, but I'm not sure that is the problem.
I have 6b15 with the Nerdpack plugin (it has the Screen command - could that cause it?)
-
Recovering one file is simple. Bare metal recovery requires several more steps than TM and possibly booting from a flash drive...
Thanks, I did some research and finally understood the process! I have set up a share using AFP and am using CCC with a sparsebundle.
Finally I will be able to get rid of the osx bug that spins up my external backup hdd in Finder
-
I use CCC with a sparseimage on SMB and have never had an issue. I've found it to be more reliable than TM on AFP, which requires a fresh start once or twice a year.
I don't have a basic installation, but I've provided instructions that allow you to test the behavior by creating a new unRAID user.
Interesting - I use CCC and have just set up a disk to use Time Machine. I would prefer to use CCC, but I was wondering how you would do a bare metal recovery from a sparseimage on unRAID? Is it easy to recover just one file from the sparseimage?
regards
Tony
-
Solved it!....
I overlooked that the installation of the software uses the root directory of the server, not /ajaxplorer as shown in the defaults and help page
So on the setting page for WEBDAV, change the Shares URI to /shares (default is ajaxplorer/shares)
Working great now, thanks!
-
k, will have to play some more. I would like to use webdav with Expandrive on my Mac...
cheers
tony
-
Looks good, but I'm having problems with webdav - anyone have it working?
(is the rewrite engine enabled in Apache? Apparently a common problem - or it may just be me)
-
what disk format is the USB disk in?
gfjardim released a tool to mount USB disks on unRAID that may help (it also installs NTFS-3G)
Personally, I would try to mount it on a SATA controller (take the drive out of the USB box) and copy directly.
Tony
-
Spants please!
-
solved - thanks...
Do you know why the repository is showing as Spant and not Spants? Is it my fault?
-
Might want a larger file than 10g ... I ran out of room on mine
-
getting an error on updating repositories:
Warning: DOMDocument::load(): Opening and ending tag mismatch: Environment line 17 and Container in /var/lib/docker/unraid/templates-community/templates/hurricane/BTSync.xml, line: 53 in /usr/local/emhttp/plugins/community.repositories/scripts/exec.php on line 162
Warning: DOMDocument::load(): Premature end of data in tag Container line 2 in /var/lib/docker/unraid/templates-community/templates/hurricane/BTSync.xml, line: 54 in /usr/local/emhttp/plugins/community.repositories/scripts/exec.php on line 162
Warning: DOMDocument::load(): Opening and ending tag mismatch: Environment line 17 and Container in /var/lib/docker/unraid/templates-community/templates/hurricane/BTSyncFree.xml, line: 53 in /usr/local/emhttp/plugins/community.repositories/scripts/exec.php on line 162
Warning: DOMDocument::load(): Premature end of data in tag Container line 2 in /var/lib/docker/unraid/templates-community/templates/hurricane/BTSyncFree.xml, line: 54 in /usr/local/emhttp/plugins/community.repositories/scripts/exec.php on line 162
any ideas?
also how do I change the name of my repository - it is shown as Spant rather than Spants :-)
Tony
-
here are the step-by-step commands, as I just had to reinstall:
It creates two containers: one to hold the data and one for the app. Do not manually delete either! You could swap the data volume for a directory if you wish:
docker run --name OVPN_DATA -v /etc/openvpn busybox
docker run --volumes-from OVPN_DATA --rm kylemanna/openvpn ovpn_genconfig -u udp://YOURSERVER.COM
docker run --volumes-from OVPN_DATA --rm -it kylemanna/openvpn ovpn_initpki
docker run --volumes-from OVPN_DATA -d -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
docker run --volumes-from OVPN_DATA --rm -it kylemanna/openvpn easyrsa build-client-full CLIENTNAME nopass
# this last line saves the ovpn connection file to the current directory (needed for the client)
docker run --volumes-from OVPN_DATA --rm kylemanna/openvpn ovpn_getclient CLIENTNAME > CLIENTNAME.ovpn
Change CLIENTNAME and YOURSERVER to suit.
Tony
[support] Spants - NodeRed, MQTT, Dashing, couchDB
in Docker Containers
Posted
I have updated the Node-Red container - added lots more nodes, and updated Node-Red and Node.js to the latest versions.
MQTT is next on my list. I would like to add username/password to the settings page - I'm reading up on that....