Posts posted by EdgarWallace
-
3 minutes ago, JorgeB said:
I believe they are stored in a cookie, are you clearing your cookies on browser close?
Yes - every time I close my browser. Alright, I think I am going to mark this as closed - works as designed.
-
Thanks @JorgeB my issue is that this approach doesn't survive a reboot. Even closing and reopening the browser resets everything to the default (= showing everything). I was expecting my settings to be persistent.
-
Is there a way (or is it planned) to permanently set the sortable items in the dashboard? I would like to permanently hide some of the categories.
-
I hope that this is the correct forum. My guess is that I am using a wrong NPM setting, which is why...
I just migrated from SWAG to NPM; all guides on page #1 have been read and the tests went well.
DDNS and DNS settings (CNAME) are managed by all-inkl.com. I can reach calibre and emby dockers by using e.g. emby.mynetwork.com.
Nextcloud is accessible under mynetwork.com. I have defined collabora.mynetwork.com but I can't open any files. This is not a surprise because Nextcloud isn't accepting the address collabora.mynetwork.com ("Es konnte keine Verbindung zum Collabora Online-Server hergestellt werden" - "No connection to the Collabora Online server could be established"). Some details:
- Nextcloud is the lsio docker which is based on alpine, hence can't use the integrated COLLABORA CODE.
- NPM - I was using the advanced setting from here: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/70
- Collabora docker is showing OK when entering collabora.mynetwork.com
- All dockers are in BRIDGE mode
Does anyone have a well-running setup using the same dockers?
Nextcloud error:
[richdocuments] Fehler: GuzzleHttp\Exception\ConnectException: cURL error 28: Connection timed out after 45001 milliseconds (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for https://collabora.mynetwork.com:9980/hosting/capabilities at <<closure>>
0. /app/www/public/3rdparty/guzzlehttp/guzzle/src/Handler/CurlFactory.php line 158 GuzzleHttp\Handler\CurlFactory::createRejection("*** sensitive parameters replaced ***")
1. /app/www/public/3rdparty/guzzlehttp/guzzle/src/Handler/CurlFactory.php line 110 GuzzleHttp\Handler\CurlFactory::finishError()
2. /app/www/public/3rdparty/guzzlehttp/guzzle/src/Handler/CurlHandler.php line 47 GuzzleHttp\Handler\CurlFactory::finish()
3. /app/www/public/3rdparty/guzzlehttp/guzzle/src/Middleware.php line 137 GuzzleHttp\Handler\CurlHandler->__invoke()
4. /app/www/public/lib/private/Http/Client/DnsPinMiddleware.php line 114 GuzzleHttp\Middleware::GuzzleHttp\{closure}("*** sensitive parameters replaced ***")
5. /app/www/public/3rdparty/guzzlehttp/guzzle/src/PrepareBodyMiddleware.php line 35 OC\Http\Client\DnsPinMiddleware->OC\Http\Client\{closure}("*** sensitive parameters replaced ***")
6. /app/www/public/3rdparty/guzzlehttp/guzzle/src/Middleware.php line 31 GuzzleHttp\PrepareBodyMiddleware->__invoke()
7. /app/www/public/3rdparty/guzzlehttp/guzzle/src/RedirectMiddleware.php line 71 GuzzleHttp\Middleware::GuzzleHttp\{closure}("*** sensitive parameters replaced ***")
8. /app/www/public/3rdparty/guzzlehttp/guzzle/src/Middleware.php line 63 GuzzleHttp\RedirectMiddleware->__invoke()
9. /app/www/public/3rdparty/guzzlehttp/guzzle/src/HandlerStack.php line 75 GuzzleHttp\Middleware::GuzzleHttp\{closure}("*** sensitive parameters replaced ***")
10. /app/www/public/3rdparty/guzzlehttp/guzzle/src/Client.php line 331 GuzzleHttp\HandlerStack->__invoke()
11. /app/www/public/3rdparty/guzzlehttp/guzzle/src/Client.php line 168 GuzzleHttp\Client->transfer()
12. /app/www/public/3rdparty/guzzlehttp/guzzle/src/Client.php line 187 GuzzleHttp\Client->requestAsync("*** sensitive parameters replaced ***")
13. /app/www/public/lib/private/Http/Client/Client.php line 226 GuzzleHttp\Client->request()
14. /config/www/nextcloud/apps/richdocuments/lib/Service/CapabilitiesService.php line 135 OC\Http\Client\Client->get()
15. /config/www/nextcloud/apps/richdocuments/lib/Service/CapabilitiesService.php line 73 OCA\Richdocuments\Service\CapabilitiesService->refetch()
16. /config/www/nextcloud/apps/richdocuments/lib/AppInfo/Application.php line 90 OCA\Richdocuments\Service\CapabilitiesService->getCapabilities()
17. /app/www/public/lib/private/AppFramework/Bootstrap/FunctionInjector.php line 45 OCA\Richdocuments\AppInfo\Application->OCA\Richdocuments\AppInfo\{closure}("*** sensitive parameters replaced ***")
18. /app/www/public/lib/private/AppFramework/Bootstrap/BootContext.php line 50 OC\AppFramework\Bootstrap\FunctionInjector->injectFn()
19. /config/www/nextcloud/apps/richdocuments/lib/AppInfo/Application.php line 89 OC\AppFramework\Bootstrap\BootContext->injectFn()
20. /app/www/public/lib/private/AppFramework/Bootstrap/Coordinator.php line 200 OCA\Richdocuments\AppInfo\Application->boot()
21. /app/www/public/lib/private/App/AppManager.php line 437 OC\AppFramework\Bootstrap\Coordinator->bootApp()
22. /app/www/public/lib/private/App/AppManager.php line 216 OC\App\AppManager->loadApp()
23. /app/www/public/lib/private/legacy/OC_App.php line 126 OC\App\AppManager->loadApps()
24. /app/www/public/ocs/v1.php line 58 OC_App::loadApps()
25. /app/www/public/ocs/v2.php line 23 require_once("/app/www/public/ocs/v1.php")
GET /ocs/v2.php/apps/notifications/api/v2/notifications from 194.191.235.184 by oliver at 2023-09-04T13:05:01+02:00
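The key detail in that trace is the cURL error number. As a hedged aside (these are standard libcurl error codes, documented at the libcurl-errors URL in the trace, not anything Nextcloud-specific): error 28 means the request to collabora.mynetwork.com:9980 never got any reply at all, which usually points at routing, DNS, or firewalling rather than a certificate problem. A small lookup helper for reading such results:

```shell
# Map the most common curl exit/error codes to what they mean when debugging
# a Nextcloud -> Collabora connection. Codes are standard libcurl codes.
explain_curl_error() {
  case "$1" in
    0)  echo "OK - the server answered" ;;
    6)  echo "could not resolve host (DNS problem)" ;;
    7)  echo "failed to connect (port closed or refused)" ;;
    28) echo "operation timed out (no answer at all)" ;;
    *)  echo "other error - see libcurl-errors" ;;
  esac
}
explain_curl_error 28
```

Running the capabilities request manually (e.g. `curl -m 10 https://collabora.mynetwork.com:9980/hosting/capabilities; echo $?`) and feeding the exit code into such a lookup narrows down where the timeout happens.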
I fixed my Nextcloud/Collabora integration issues with the guide I found here:
https://help.nextcloud.com/t/nextcloud-collabora-integration/151879
-
Thanks @ScottAS2
There were only 20k left on the drive, and I think that's why I couldn't delete the old datasets. I then deleted the complete TM set and now everything is up and running again.
-
Great explanation, thanks a lot @ScottAS2 I wasn't aware that 2.5 TB is possible, but no complaints about that setting from a Docker perspective 🙂
TM is reporting 1.32 TB available but it's still not running. Hopefully I don't have to delete files manually...
-
This Docker is great: it is backing up one MBAir (256GB), one MBPro (512GB) and one iMac (1TB). It was running rock solid, but now it has caused an issue: I am using disk1 exclusively for this TM Docker and it filled the drive up to 100%. It's a 3TB drive and the Docker run command includes:
-e 'VOLUME_SIZE_LIMIT'='3 T'
I thought that TM removes the oldest files. What is best practice?
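A hedged rule of thumb (my reading of this thread, not official Time Machine documentation): if VOLUME_SIZE_LIMIT equals the physical disk size, the disk fills up before Time Machine's own thinning of old backups can trigger. Picking a limit with roughly 15% headroom lands close to the 2.5 TB value that worked elsewhere in this thread:

```shell
# Assumption: a limit equal to the disk size leaves TM no headroom to thin
# old backups before the drive itself is full, so size the limit below it.
disk_gb=3000                         # 3 TB drive, as in this post
limit_gb=$(( disk_gb * 85 / 100 ))   # keep ~15% headroom
echo "-e 'VOLUME_SIZE_LIMIT'='${limit_gb} G'"
```

The printed flag can then replace the `'3 T'` value in the docker run command above.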
-
On 2/8/2016 at 4:04 PM, ken-ji said:
Add the docker and configure the folders:
refer to attachment: Container Mounts.png
Then check the docker container logs for a link to copy and paste to link your dropbox container.
I do not understand how to add the link that I am getting via the log.
As soon as I click the link and link the unRAID Dropbox container to my Dropbox account, I get another error (please see screenshot).
-
7 minutes ago, Squid said:
I don't use Emby, but another user here on Plex a while back had Plex set to automatically delete watched episodes after a while.
There was an issue with Emby: https://emby.media/community/index.php?/topic/69848-feature-option-preventing-emby-from-deleting-media-files/
I have now revoked permission for all Emby users to delete anything. A bug in Emby was my first interpretation of the data loss, but horror crept in when I saw that some other files were missing...
-
Btw, since I installed 6.10.0-rc2 my server always comes up with a parity check... is it possible that this has something to do with the deletions?
-
Thanks Squid!!!
One example: I had all episodes of season 1 of The Walking Dead. Now only "The walking Dead-S01E06.mkv" is left. I might have made a mistake when I did a data rebuild a year ago or so... it seems I have to check my backup file by file?
-
Definitely in /mnt/user/iTunes and /mnt/user/Serien.
I am afraid that there is more that I don‘t know yet.
-
Recently I discovered that a fair amount of TV shows were missing, and I thought that Emby was the culprit. Today I saw that music files have also been deleted. I really don't trust my server anymore, after running unRAID for more than 10 years.
Diagnostics attached, thanks for any help or hint.
-
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='openHAB' --net='br0' --ip='192.168.178.22' --cpuset-cpus='5' --privileged=true -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -v '/etc/localtime':'/etc/localtime':'ro' -v '/mnt/user/system/appdata/openhab/conf/':'/openhab/conf':'rw' -v '/mnt/user/system/appdata/openhab/userdata/':'/openhab/userdata':'rw' --device=/dev/ttyACM0 --shm-size 2g 'openhab/openhab:latest-debian' /bin/chmod o+rw /dev/ttyACM0 && /entrypoint.sh
55c064827917c2b56ce5dbc5983f7d635623dec20f4aca7477e2b0fad4ef2e7c
sh: /entrypoint.sh: No such file or directory
...doesn't like the command...
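A hedged sketch of what I think is going wrong above (assuming Unraid appends the Post Arguments verbatim to the `docker run` line): an unquoted `&&` is consumed by the HOST shell, so `/entrypoint.sh` is looked up on the host - which is exactly the "No such file or directory" error in the output. Quoting the whole chain and handing it to `sh -c` keeps it as one argument for the container, e.g. as Post Arguments: `sh -c '/bin/chmod o+rw /dev/ttyACM0 && /entrypoint.sh'`. The word-splitting can be demonstrated without Docker:

```shell
# fake_run stands in for the container and just reports the argv it would
# receive. With the chain quoted, everything after `sh -c` stays together
# as a single argument instead of being split at `&&` by the host shell.
fake_run() { printf 'container argv: %s\n' "$*"; }
fake_run sh -c '/bin/chmod o+rw /dev/ttyACM0 && /entrypoint.sh'
```

Without the quotes, the host shell would run `/entrypoint.sh` itself after the `docker run` command returned, which matches the error seen above.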
-
3 hours ago, knex666 said:
add && /entrypoint.sh
if you use post arguments it will lose its entry point.
What does that mean please? Should I enter this into the Post Arguments box?
/bin/chmod o+rw /dev/ttyACM0 && /entrypoint.sh
-
33 minutes ago, BikTek said:
Do you have any plans to release an openHAB 3 Docker into the community applications?
If you are using openhab/openhab:latest-debian as the openHAB repository, you get that update automatically.
-
10 hours ago, knex666 said:
Hi, you can try this with the Post Argument, but only the chown - you do not need docker exec.
If the docker is not starting, then add the entry point after your post Argument. Would be great if it does.
Unfortunately, openHAB doesn't start when I add /bin/chmod o+rw /dev/ttyACM0 to the Post Arguments box...
-
There is a comment in the Troubleshooting section of OH3:
Quote: If you want to use a USB stick (for example for a Z-Wave network), it will not be available to the dockerized system by default. In Docker, openHAB runs as the restricted user openhab. The stick will work if you run the following command right after the docker image is started.
docker exec -d openhab /bin/chmod o+rw /dev/ttyACM0
Is there any way to add that to the Docker template, or how can I add it myself? Maybe by adding it to the Post Arguments?
-
I was trying to get Collabora running as a separate Docker: I installed the container with the default name (collabora), renamed the sample proxy conf, added the subdomain in SWAG and as a CNAME at my DNS provider. The SWAG log is showing:
[cont-init.d] done.
[services.d] starting services
nginx: [emerg] "server" directive is not allowed here in /config/nginx/proxy-confs/collabora.subdomain.conf:3
Server ready
nginx: [emerg] "server" directive is not allowed here in /config/nginx/proxy-confs/collabora.subdomain.conf:3
(the same [emerg] line keeps repeating) ......
I thought that I followed @SpaceInvaderOne's guide quite well, but obviously I made a mistake...
-
12 hours ago, saarg said:
I have no idea about collabora. Why not install it as a container and point nextcloud to it?
Thanks @saarg - because I would need to set up a subdomain, which is much more complex than using the built-in Collabora (or Onlyoffice) app. I might be better off migrating to knex666's Nextcloud docker with full Onlyoffice integration based on Apache.
-
Thanks @BRiT that is working great!!!
-
On 6/18/2020 at 8:58 AM, EdgarWallace said:
Thanks a lot @saarg
I just need one of these apps; Collabora would be perfectly fine. I got the link from Collabora. AppImages do not run on Alpine, which is the root cause why the integrated Collabora app isn't running on Alpine dockers... at least that's how I understood it.
Sorry to bother again... is it realistic to wait for a fix of the integrated Collabora app?
Thanks a lot.
-
12 hours ago, saarg said:
Nothing we can do to fix the only office stuff. Don't wait for a fix, as it won't come. Just use a separate container for it.
What has AppImage got to do with this?
Thanks a lot @saarg
I just need one of these apps; Collabora would be perfectly fine. I got the link from Collabora. AppImages do not run on Alpine, which is the root cause why the integrated Collabora app isn't running on Alpine dockers... at least that's how I understood it.
-
On 5/15/2020 at 11:08 AM, saarg said:
It's better to go ask Nextcloud why they chose to add an ubuntu only binary.
There is nothing we can do about it.
I would kindly ask for advice:
I was never successful in getting the Collabora or Onlyoffice Docker running with my Nextcloud install, and having read @saarg's post does not give me much hope. Collabora support replied to my question about the error message I get when accessing the built-in Collabora app:
linuxserver.io Nextcloud docker is based on Alpine which uses musl libc instead of glibc - and AppImages do not work with musl libc:
https://github.com/AppImage/AppImageKit/issues/1015
I would like to stick with my linuxserver.io Nextcloud install, which has been running rock solid for years, rather than use another Nextcloud container. Is there a small chance that the issues will be resolved on the linuxserver.io side? Really, that is a question, NOT a complaint. Thank you.
Sortable Items Settings
in Feature Requests
Posted
Making the Sortable Items setting on the dashboard persistent. It seems the settings are currently saved via a browser cookie.