[Support] Linuxserver.io - Nextcloud


Recommended Posts

On 11/22/2020 at 11:03 PM, StanC said:

I see that version 20.0.2 is the latest version for the linuxserver.io docker, but when I do a check from Settings it still shows me 19.0.5 as the current version on the Stable channel. Is 20.0.2 only available via the Beta channel? Or is there something in my configuration holding it back?

Nextcloud decides when you will get that update. Or you could switch to the beta channel and get it now.

Link to comment
8 hours ago, saarg said:

Nextcloud decides when you will get that update. Or you could switch to the beta channel and get it now.

I first tried to do an update through the web UI from 19.0.2 as it showed version 19.0.5 was available, but I had lots of issues with it. I ended up following this guide for a manual installation and it updated to 20.0.2. I think when you do a manual upgrade it grabs the newest Nextcloud release.
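
For reference, a manual update from the container console usually boils down to the following. This is only a sketch, assuming the LSIO defaults used elsewhere in this thread (web root at /config/www/nextcloud, the abc user) and a container named nextcloud:

# open a shell in the container (container name is an assumption)
docker exec -it nextcloud bash

# run Nextcloud's own updater, then finish the upgrade and leave maintenance mode
sudo -u abc php /config/www/nextcloud/updater/updater.phar
sudo -u abc php /config/www/nextcloud/occ upgrade
sudo -u abc php /config/www/nextcloud/occ maintenance:mode --off

The updater.phar run fetches whatever release the update server currently offers for your channel, which may be why a manual run can land on a newer version than the web UI was offering.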

Link to comment

I am new to unRAID and have installed this Docker container through the Community Applications plugin by following Spaceinvader One's YouTube tutorial.

 

Everything works fine at the moment except for my upload speeds. I am testing this via LAN and it seems like I am getting atrocious upload speeds when uploading files through the WebGUI or through the Nextcloud desktop app.

 

I don't have a reverse proxy configured just yet so that can't be the reason. I also have my Nextcloud share on an SSD cache drive.

 

My LAN connection is 1 Gigabit, so theoretically I should be achieving that speed, but I am only getting about 100 Mbps, give or take. Anyone got a clue how to fix this? I have tried searching on these forums, on Reddit, and on Nextcloud's forums, but there are no concrete fixes.

 

Interestingly, when I use something like the Filebrowser Docker container, I get full upload speeds to that container's share.

 

Any ideas?

Link to comment

Hi Everyone,

I'm dealing with a situation here. When I search for a file in Nextcloud through the web browser, it either takes a long time to bring up a list of files I may be looking for, or it just freezes...

 

However, when I use the app on my Android device, it pops right up! I thought it might be the database, and I read that Postgres works best with Nextcloud when dealing with large amounts of files, so I switched from MariaDB to Postgres 11 (using unRAID), but it didn't help. I'm also using Redis for memcache, and all the PHP tuning is done as well (I don't know if I did the tuning right or wrong).

 

I also disabled half of the apps running... still the same...

 

I'm banging my head against this and I can't seem to find any answers. I would appreciate it if somebody could help!
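
As a hedged aside for slow or freezing searches like this: two occ maintenance commands are commonly suggested and are safe to run from the container console. Only a sketch, using the same paths and abc user the LSIO container uses elsewhere in this thread:

# add any database indices Nextcloud reports as missing (often helps slow file/search queries)
sudo -u abc php /config/www/nextcloud/occ db:add-missing-indices

# rescan the files so the file cache the search relies on is complete (can take a while on large data sets)
sudo -u abc php /config/www/nextcloud/occ files:scan --all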

 

Link to comment

Hi, I'm wondering if anyone can help me with improving my Nextcloud speeds. I'm having considerably slower speeds writing to my cache drive using Nextcloud: over a LAN connection, SMB into the cache drive averages 8-10 MB/s, while uploading through the Nextcloud web GUI over the same LAN gets 2.1 MB/s. I'd like to get near that 8-10 MB/s.

 

When writing to the array using SMB, my 514 MB mp4 test file averaged 3.2 MB/s, although I don't think it's actually writing to the array, as the share I'm writing to is set to cache: yes (mover transfers files from cache to array). The MariaDB appdata is also in the Nextcloud share, not appdata, so it should be on my RAID 1 1TB Samsung SSDs.

 

I followed SpaceInvader One's YouTube tutorial, installing MariaDB and using the swag reverse proxy.

 

I disabled encryption using maintenance mode, as it mentions a performance penalty, but I found it performed basically the same, maybe 5% quicker.

I'm also noticing that downloading that same 514 MB mp4 file is quite slow (around 2 MB/s), so maybe it's the database or swag. Any suggestions would be appreciated.

Link to comment
Download and upload speed shouldn't be a database problem. The database is only read to list the files and tell you the path where the file is stored so you can start downloading,
and written when you upload a file, to store the path and the metadata.
If you copy a file from Nextcloud's console from the data dir to another dir in the docker, what speeds do you get?
Try with rsync; I think it can report speeds.

Sent from my Mi 10 Pro using Tapatalk

Link to comment
1 hour ago, skois said:

Download and upload speed shouldn't be a database problem. The database is only read to list the files and tell you the path where the file is stored so you can start downloading,
and written when you upload a file, to store the path and the metadata.
If you copy a file from Nextcloud's console from the data dir to another dir in the docker, what speeds do you get?
Try with rsync; I think it can report speeds.

Sent from my Mi 10 Pro using Tapatalk
 

Thanks for the response, skois.

 

I'm quite a noob when it comes to the console. I did attempt to cd around inside the console, but I couldn't copy the .mp4 to another folder after creating a test folder at /data/test, plus other combinations for the last 30 minutes, lol.

 

It came up with permission denied, even when I created the folder over SMB.

 

I'm guessing it's not relevant, but I copied the 514 MB mp4 file inside the Krusader file GUI. It did the transfer in less than 10 seconds, so very fast.

 

Good to know it's not the database. I'm not that familiar with databases, but it makes sense that it keeps a record of the directories.

 

Can you tell me or link me to how I would test rsync?

Link to comment
37 minutes ago, CafeNevosa said:

Thanks for the response, skois.

 

I'm quite a noob when it comes to the console. I did attempt to cd around inside the console, but I couldn't copy the .mp4 to another folder after creating a test folder at /data/test, plus other combinations for the last 30 minutes, lol.

 

It came up with permission denied, even when I created the folder over SMB.

 

I'm guessing it's not relevant, but I copied the 514 MB mp4 file inside the Krusader file GUI. It did the transfer in less than 10 seconds, so very fast.

 

Good to know it's not the database. I'm not that familiar with databases, but it makes sense that it keeps a record of the directories.

 

Can you tell me or link me to how I would test rsync?

So, to enter the Nextcloud docker console, click on the docker icon and select the console option.

Unfortunately the Nextcloud docker doesn't include rsync, but you already know that Krusader did it in about 10 seconds.

While in the docker's console, do

cp input_path output_path
e.g. cp  /data/admin/files/testfile.mp4 /data/test/testfile.mp4

If you want to cp a folder, you have to do it like this:

cp -r /data/admin/files/ /data/test/
 

If it takes a lot more than 10 seconds for the same file, probably something is up with the docker engine? Not sure what would cause that.
But this will help you narrow it down.
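
Since rsync isn't in the image, a rough way to put a number on that copy is simply to time it and divide the file size by the elapsed seconds; a minimal sketch from the container console, reusing the example paths above:

# time the copy of the 514 MB test file inside the container
time cp /data/admin/files/testfile.mp4 /data/test/testfile.mp4

# throughput in MB/s is roughly 514 divided by the "real" time reported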

Link to comment

Hello, 

I have an error in the update. I started it from updater.phar.

 

Here are the errors:

PHP Warning:  rmdir(/config/www/nextcloud/apps/comments/composer/composer): Directory not empty in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 772
PHP Warning:  rmdir(/config/www/nextcloud/apps/comments/composer): Directory not empty in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 772
PHP Warning:  rmdir(/config/www/nextcloud/apps/comments/l10n): Directory not empty in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 772
PHP Warning:  rmdir(/config/www/nextcloud/apps/comments/img): Directory not empty in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 772
PHP Warning:  rmdir(/config/www/nextcloud/apps/comments/lib/Listener): Directory not empty in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 772
PHP Warning:  rmdir(/config/www/nextcloud/apps/comments/lib/Collaboration): Directory not empty in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 772
PHP Warning:  rmdir(/config/www/nextcloud/apps/comments/lib/Activity): Directory not empty in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 772
PHP Warning:  rmdir(/config/www/nextcloud/apps/comments/lib/AppInfo): Directory not empty in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 772
PHP Warning:  rmdir(/config/www/nextcloud/apps/comments/lib/Controller): Directory not empty in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 772
PHP Warning:  rmdir(/config/www/nextcloud/apps/comments/lib/Notification): Directory not empty in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 772
PHP Warning:  rmdir(/config/www/nextcloud/apps/comments/lib/Search): Directory not empty in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 772
PHP Warning:  rmdir(/config/www/nextcloud/apps/comments/lib): Directory not empty in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 772
PHP Warning:  rmdir(/config/www/nextcloud/apps/comments/appinfo): Directory not empty in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 772
PHP Warning:  rmdir(/config/www/nextcloud/updater/../apps/comments): Directory not empty in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 775

Here is the output from maintenance:repair:

occ maintenance:repair
 - Repair MySQL collation
     - All tables already have the correct collation -> nothing to do
 - Repair mime types
 - Clean tags and favorites
     - 0 tags of deleted users have been removed.
     - 0 tags for delete files have been removed.
     - 0 tag entries for deleted tags have been removed.
     - 0 tags with no entries have been removed.
 - Repair invalid shares
 - Move .step file of updater to backup location
 - Fix potential broken mount points
     - No mounts updated
 - Add log rotate job
 - Clear frontend caches
     - Image cache cleared
     - SCSS cache cleared
     - JS cache cleared
 - Clear every generated avatar on major updates
 - Add preview background cleanup job
 - Queue a one-time job to cleanup old backups of the updater
 - Cleanup invalid photocache files for carddav
 - Add background job to cleanup login flow v2 tokens
 - Remove potentially over exposing share links
     - No need to remove link shares.
 - Clear access cache of projects
 - Reset generated avatar flag
 - Keep legacy encryption enabled
 - Check encryption key format
 - Remove old dashboard app config data
 - Update name of the stored view
 - Fix component of birthday calendars
     - 3 birthday calendars updated.
 - Regenerating birthday calendars to use new icons and fix old birthday events without year
     - Repair step already executed
 - Fix broken values of calendar objects
    0 [->--------------------------]
 - Registering building of calendar search index as background job
     - Repair step already executed
 - Registering background jobs to update cache for webcal calendars
     - Added 0 background jobs to update webcal calendars
 - Registering building of calendar reminder index as background job
     - Repair step already executed
 - Clean up orphan event and contact data
     - 0 events without a calendar have been cleaned up
     - 0 properties without an events have been cleaned up
     - 0 changes without a calendar have been cleaned up
     - 0 cached events without a calendar subscription have been cleaned up
     - 0 changes without a calendar subscription have been cleaned up
     - 0 contacts without an addressbook have been cleaned up
     - 0 properties without a contact have been cleaned up
     - 0 changes without an addressbook have been cleaned up
 - Remove activity entries of private events
     - Removed 0 activity entries
 - Fix the share type of guest shares when migrating from ownCloud
 - Copy the share password into the dedicated column
 - Set existing shares as accepted
 - Purify and migrate collected mail addresses
    0 [----->----------------------]
 - Insert background jobs for all accounts
    0 [--------->------------------]
 - Make Mail itinerary extractor executable
 - Migrate Mail provisioning config from config.php to the database
     - No old config found
 - Create or update provisioned Mail accounts
     - No Mail provisioning config set
 - Update OAuth token expiration times
 - Create help command
 - Invalidate access cache for projects conversation provider
     - Invalidation not required
 - Switches from default updater server to the customer one if a valid subscription is available
     - Repair step already executed
 - Send an admin notification if monthly report is disabled
 - Add background job to check for backup codes
 - Populating added database structures for workflows

But the website says: "Update in process." What can I do?
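
As a hedged note (the manual steps referred to below aren't spelled out): the "Update in process." banner is usually leftover updater state rather than a broken install, and it can typically be cleared from the container console. A sketch, assuming the LSIO paths already used in this thread:

# finish the upgrade and leave maintenance/update mode
sudo -u abc php /config/www/nextcloud/occ upgrade
sudo -u abc php /config/www/nextcloud/occ maintenance:mode --off

# if the banner persists, the web updater keeps its progress in a .step file
# under <datadir>/updater-<instanceid>/ - removing that file resets the updater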

 

 

////////////////////////////////////////////////////////

I solved the problem. I followed the manual update instructions.

 

Thanks for the manual!

 

Edited by kellekellner
Solved
Link to comment

Need help: I got stuck trying to update to 20.0.2 and had to restore from a backup.

I unpacked the MariaDB and Nextcloud appdata backups and got it up and running again!

 

BUT Nextcloud wants to upgrade to:

[screenshot]

And every time I try to do this I get this:

[screenshot]

And when I push continue update....

[screenshot]

I get the "Go back..." and then I am back to Home page of Nextcloud?

 

I also tried:

[screenshot]

Same thing, no difference.

I'd appreciate any input you might have to rectify this problem.

 

Link to comment

Update: Running the repair:

sudo -u abc php /config/www/nextcloud/occ maintenance:repair
sudo -u abc php updater.phar 

 

Fixed the upgrade... Not sure I want to try upgrading a second time for 20.0.2.

ARGH failed again.....

 

[screenshot]

 

Any input on what to try next?

Edited by casperse
Link to comment
On 11/28/2020 at 12:40 PM, skois said:

So, to enter the Nextcloud docker console, click on the docker icon and select the console option.

Unfortunately the Nextcloud docker doesn't include rsync, but you already know that Krusader did it in about 10 seconds.

While in the docker's console, do

cp input_path output_path
e.g. cp  /data/admin/files/testfile.mp4 /data/test/testfile.mp4

If you want to cp a folder, you have to do it like this:

cp -r /data/admin/files/ /data/test/
 

If it takes a lot more than 10 seconds for the same file, probably something is up with the docker engine? Not sure what would cause that.
But this will help you narrow it down.

I have also followed Spaceinvader One's guide and am having sort of the same issue, and I can't understand why. Copying within the docker console works as expected; a 5 GB file copies in 8-10 seconds.

 

I am getting slow uploads only (10 MB/s); downloads are fine (90-120 MB/s). There are some topics concerning this issue over at Nextcloud's forum, but I am not getting anywhere with the solutions suggested there.

 

https://help.nextcloud.com/t/slow-upload-speed-i-need-advice/60909/14

 

I can't remember this being an issue, looking back on using Nextcloud for almost 2 years. I had some trouble uploading large files and had to make some edits to the .conf file for swag to fix that.

Other than that, nothing has changed.

 

Link to comment

Hi Guys and Gals,

 

I need a little help with Nextcloud locking up at random intervals, anything between 1 minute and 18 hours. It had been running just fine since the beginning of lockdown until about a week ago.

 

The use case and how it's used:

 

I'd say I have 10 machines running the desktop sync app, as my mum uses it at work to sync all the office documents (not massive files and not a lot of files, tbh).

I'd say I have 2, maybe 3 mobile syncs. That's why I started this in the first place: I got fed up with paying for Dropbox and always wanted my own server ;)

In total there's probably about 100 GB of data uploaded to Nextcloud, mostly my own camera roll.

 

So the problem: Nextcloud locks up. When browsing to the site I get a 504 Gateway Timeout. I try to restart Nextcloud and it won't restart; I try to stop Nextcloud and it just keeps the play icon there and won't stop. When stopping the docker service, all the other containers stop but not Nextcloud, and unRAID itself won't reboot without me having to power cycle the machine.

 

Once my machine boots again, unRAID starts, my Windows 10 VM boots, and I log in to the unRAID web UI; all is fine. I navigate to Nextcloud and the page works and it starts syncing as it should. Then, after a random time, it locks up again with exactly the same issue.

 

Things I've tried so far:

Upgraded from 18.0.6 (I think) to 19.0.5; had to do 3 updates to get that far (didn't help).

Removed plugins from Nextcloud; the antivirus one was showing errors in Nextcloud's internal log (didn't help).

The container log shows nothing; everything starts as it should.

 

Nextcloud container log:

 

-------------------------------------
_ ()
| | ___ _ __
| | / __| | | / \
| | \__ \ | | | () |
|_| |___/ |_| \__/


Brought to you by linuxserver.io
-------------------------------------

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 40-config: executing...
[cont-init.d] 40-config: exited 0.
[cont-init.d] 50-install: executing...
[cont-init.d] 50-install: exited 0.
[cont-init.d] 60-memcache: executing...
[cont-init.d] 60-memcache: exited 0.
[cont-init.d] 70-aliases: executing...
[cont-init.d] 70-aliases: exited 1.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.

 

The log at the top right of unRAID doesn't seem to show an issue. I'm not sure where you guys get your diagnostics from; if you let me know, I will happily post it.
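
In case it helps: the diagnostics people usually ask for here are the unRAID ones, not the container log. They can be grabbed from Tools -> Diagnostics in the web GUI, or from the host terminal; a sketch:

# run on the unRAID host terminal (not the container console);
# it writes a dated zip (by default under /boot/logs) that you can attach to a post
diagnostics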

 

I'm at a loss, to be honest. I have other containers running through swag to get to the outside world, but they work fine, so I don't believe it's anything to do with swag; it seems to be specifically the Nextcloud container.

 

Personally I'm thinking maybe nuke the container and start again, but having never had to do this before I'm wary, as I have a lot of data and users within Nextcloud, with encryption keys somewhere, and I'm worried about losing data.

 

Any ideas where to look or what I could check, chaps? I'm clutching at straws here.

 

Thanks in advance

 

James

Link to comment
On 11/4/2020 at 3:20 PM, skois said:

For anyone having the same problem anyway, the fix for me was this.
Just replace the PHP_VER with whatever you have in Settings -> System (bottom of the page).
 


PHP_VER="7.3.24" && \
BUILD_PACKAGES="wget build-base php7-dev" && \

apk add --no-cache --virtual .php-build-dependencies $BUILD_PACKAGES && \
apk add --no-cache --repository https://dl-3.alpinelinux.org/alpine/edge/testing/ gnu-libiconv-dev && \
(mv /usr/bin/gnu-iconv /usr/bin/iconv; mv /usr/include/gnu-libiconv/*.h /usr/include; rm -rf /usr/include/gnu-libiconv) && \
mkdir -p /opt && \
cd /opt && \
wget https://secure.php.net/distributions/php-$PHP_VER.tar.gz && \
tar xzf php-$PHP_VER.tar.gz && \
cd php-$PHP_VER/ext/iconv && \
phpize && \
./configure --with-iconv=/usr && \
make && \
make install && \
mkdir -p /etc/php7/conf.d && \
#next command not needed in LSIO Docker
#echo "extension=iconv.so" >> /etc/php7/conf.d/iconv.ini && \

apk del .php-build-dependencies && \
rm -rf /opt/*

 

 

First, thank you for your fix above - it worked perfectly... then came the next container update... same error, but your fix returns an error where there was none last time. Any idea how to fix this error? Here is the entire interaction with your fix:

 

root@3e69a0cc162f:/# PHP_VER="7.3.25" && \
> BUILD_PACKAGES="wget build-base php7-dev" && \
> 
> apk add --no-cache --virtual .php-build-dependencies $BUILD_PACKAGES && \
> apk add --no-cache --repository https://dl-3.alpinelinux.org/alpine/edge/testing/ gnu-libiconv-dev && \
> (mv /usr/bin/gnu-iconv /usr/bin/iconv; mv /usr/include/gnu-libiconv/*.h /usr/include; rm -rf /usr/include/gnu-libiconv) && \
> mkdir -p /opt && \
> cd /opt && \
> wget https://secure.php.net/distributions/php-$PHP_VER.tar.gz && \
> tar xzf php-$PHP_VER.tar.gz && \
> cd php-$PHP_VER/ext/iconv && \
> phpize && \
> ./configure --with-iconv=/usr && \
> make && \
> make install && \
> mkdir -p /etc/php7/conf.d && \
> #next command not needed in LSIO Docker
> #echo "extension=iconv.so" >> /etc/php7/conf.d/iconv.ini && \
> 
> apk del .php-build-dependencies && \
> rm -rf /opt/*
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/1) Upgrading .php-build-dependencies (20201205.073029 -> 20201205.074158)
OK: 558 MiB in 240 packages
fetch https://dl-3.alpinelinux.org/alpine/edge/testing/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
OK: 558 MiB in 240 packages
mv: cannot stat '/usr/bin/gnu-iconv': No such file or directory
mv: cannot stat '/usr/include/gnu-libiconv/*.h': No such file or directory
--2020-12-05 01:41:59--  https://secure.php.net/distributions/php-7.3.25.tar.gz
Resolving secure.php.net (secure.php.net)... 185.85.0.29
Connecting to secure.php.net (secure.php.net)|185.85.0.29|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://www.php.net/distributions/php-7.3.25.tar.gz [following]
--2020-12-05 01:42:00--  https://www.php.net/distributions/php-7.3.25.tar.gz
Resolving www.php.net (www.php.net)... 185.85.0.29
Connecting to www.php.net (www.php.net)|185.85.0.29|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 19675266 (19M) [application/octet-stream]
Saving to: 'php-7.3.25.tar.gz.1'

php-7.3.25.tar.gz.1               100%[=============================================================>]  18.76M  9.79MB/s    in 1.9s    

2020-12-05 01:42:03 (9.79 MB/s) - 'php-7.3.25.tar.gz.1' saved [19675266/19675266]

Configuring for:
PHP Api Version:         20180731
Zend Module Api No:      20180731
Zend Extension Api No:   320180731
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for a sed that does not truncate output... /bin/sed
checking for cc... cc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether cc accepts -g... yes
checking for cc option to accept ISO C89... none needed
checking how to run the C preprocessor... cc -E
checking for icc... no
checking for suncc... no
checking whether cc understands -c and -o together... yes
checking for system library directory... lib
checking if compiler supports -R... no
checking if compiler supports -Wl,-rpath,... yes
checking build system type... x86_64-pc-linux-musl
checking host system type... x86_64-pc-linux-musl
checking target system type... x86_64-pc-linux-musl
checking for PHP prefix... /usr
checking for PHP includes... -I/usr/include/php7 -I/usr/include/php7/main -I/usr/include/php7/TSRM -I/usr/include/php7/Zend -I/usr/include/php7/ext -I/usr/include/php7/ext/date/lib
checking for PHP extension directory... /usr/lib/php7/modules
checking for PHP installed headers prefix... /usr/include/php7
checking if debug is enabled... no
checking if zts is enabled... no
checking for re2c... no
configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers.
checking for gawk... no
checking for nawk... no
checking for awk... awk
checking if awk is broken... no
checking for iconv support... yes, shared
checking for libiconv in -liconv... yes
checking if iconv is glibc's... no
checking if using GNU libiconv... no
checking if iconv is Konstantin Chuguev's... no
checking if using IBM iconv... no
checking if iconv supports errno... yes
checking if iconv supports //IGNORE... no
checking if your cpp allows macro usage in include lines... yes
checking for ld used by cc... /usr/x86_64-alpine-linux-musl/bin/ld
checking if the linker (/usr/x86_64-alpine-linux-musl/bin/ld) is GNU ld... yes
checking for /usr/x86_64-alpine-linux-musl/bin/ld option to reload object files... -r
checking for BSD-compatible nm... /usr/bin/nm -B
checking whether ln -s works... yes
checking how to recognize dependent libraries... pass_all
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking dlfcn.h usability... yes
checking dlfcn.h presence... yes
checking for dlfcn.h... yes
checking the maximum length of command line arguments... 98304
checking command to parse /usr/bin/nm -B output from cc object... ok
checking for objdir... .libs
checking for ar... ar
checking for ranlib... ranlib
checking for strip... strip
checking if cc supports -fno-rtti -fno-exceptions... no
checking for cc option to produce PIC... -fPIC
checking if cc PIC flag -fPIC works... yes
checking if cc static flag -static works... yes
checking if cc supports -c -o file.o... yes
checking whether the cc linker (/usr/x86_64-alpine-linux-musl/bin/ld -m elf_x86_64) supports shared libraries... yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... no

creating libtool
appending configuration tag "CXX" to libtool
configure: creating ./config.status
config.status: creating config.h
config.status: config.h is unchanged
/bin/sh /opt/php-7.3.25/ext/iconv/libtool --mode=compile cc -I"/usr/include" -DZEND_ENABLE_STATIC_TSRMLS_CACHE=1 -I. -I/opt/php-7.3.25/ext/iconv -DPHP_ATOM_INC -I/opt/php-7.3.25/ext/iconv/include -I/opt/php-7.3.25/ext/iconv/main -I/opt/php-7.3.25/ext/iconv -I/usr/include/php7 -I/usr/include/php7/main -I/usr/include/php7/TSRM -I/usr/include/php7/Zend -I/usr/include/php7/ext -I/usr/include/php7/ext/date/lib  -DHAVE_CONFIG_H  -I/usr/include -g -O2   -c /opt/php-7.3.25/ext/iconv/iconv.c -o iconv.lo 
mkdir .libs
 cc -I/usr/include -DZEND_ENABLE_STATIC_TSRMLS_CACHE=1 -I. -I/opt/php-7.3.25/ext/iconv -DPHP_ATOM_INC -I/opt/php-7.3.25/ext/iconv/include -I/opt/php-7.3.25/ext/iconv/main -I/opt/php-7.3.25/ext/iconv -I/usr/include/php7 -I/usr/include/php7/main -I/usr/include/php7/TSRM -I/usr/include/php7/Zend -I/usr/include/php7/ext -I/usr/include/php7/ext/date/lib -DHAVE_CONFIG_H -I/usr/include -g -O2 -c /opt/php-7.3.25/ext/iconv/iconv.c  -fPIC -DPIC -o .libs/iconv.o
/opt/php-7.3.25/ext/iconv/iconv.c: In function 'zm_startup_miconv':
/opt/php-7.3.25/ext/iconv/iconv.c:287:4: error: '_libiconv_version' undeclared (first use in this function)
  287 |    _libiconv_version >> 8, _libiconv_version & 0xff);
      |    ^~~~~~~~~~~~~~~~~
/opt/php-7.3.25/ext/iconv/iconv.c:287:4: note: each undeclared identifier is reported only once for each function it appears in
/opt/php-7.3.25/ext/iconv/iconv.c: In function '_php_iconv_appendl':
/opt/php-7.3.25/ext/iconv/iconv.c:184:15: warning: implicit declaration of function 'libiconv'; did you mean 'iconv'? [-Wimplicit-function-declaration]
  184 | #define iconv libiconv
      |               ^~~~~~~~
/opt/php-7.3.25/ext/iconv/iconv.c:468:8: note: in expansion of macro 'iconv'
  468 |    if (iconv(cd, (char **)&in_p, &in_left, (char **) &out_p, &out_left) == (size_t)-1) {
      |        ^~~~~
make: *** [Makefile:194: iconv.lo] Error 1
root@3e69a0cc162f:/opt/php-7.3.25/ext/iconv# 

Any feedback/help is greatly appreciated.

Link to comment
3 hours ago, NVrEnough said:

First, thank you for your fix above - it worked perfectly... then came the next container update... same error, but your fix returns an error where there was none last time. Any idea how to fix this error?

Can you try force-updating the docker again and then try using mail without my fix (maybe the new update fixed it and the fix isn't needed)?
If that doesn't work, try applying the fix again and see if the error comes up again. I just checked for updates but it says up to date, and I don't have this error...

Link to comment
On 12/2/2020 at 11:16 AM, azzilla said:

I have also followed Spaceinvader One's guide and am having sort of the same issue, and I can't understand why. Copying within the docker console works as expected; a 5 GB file copies in 8-10 seconds.

 

I am getting slow uploads only (10 MB/s); downloads are fine (90-120 MB/s). There are some topics concerning this issue over at Nextcloud's forum, but I am not getting anywhere with the solutions suggested there.

 

https://help.nextcloud.com/t/slow-upload-speed-i-need-advice/60909/14

 

I can't remember this being an issue, looking back on using Nextcloud for almost 2 years. I had some trouble uploading large files and had to make some edits to the .conf file for swag to fix that.

Other than that, nothing has changed.

 

So, I did some testing, and it looks like the Nextcloud instance is "locked" at 20 Mbps max.
I tried to upload a new 2 GB file from the desktop client, and the max speed I could see in Task Manager was 20 Mbps. At the same time I tried to upload another (different) 2 GB file from the web; the speed bounced between 0 and 20 Mbps on both streams, but the total was always 20 Mbps.
Then I tried to upload from a different device (maybe it was a limit per IP?). When the upload started, the speed was again split, but the total was never more than 20 Mbps.

All these tests were done on the same LAN, through the reverse proxy.

I don't have a solution yet, but this might help some people look for a solution!
 

So, the main problem here is the reverse proxy.
When I tried uploading the same file from the web using the container IP instead of the domain name, it uploaded at 100-200 Mbps (which is again slow considering the 1 Gbps ethernet link), but more reasonable.

Maybe this could be fixed for internal transfers with a NAT rule?
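
A quick way to reproduce that comparison without the desktop client is a WebDAV upload with curl against both the proxied hostname and the container IP. Only a sketch with placeholder host, user, and file; the direct-IP request may also complain about an untrusted domain unless that IP is in trusted_domains:

# upload through the reverse proxy and print the average upload speed (bytes/sec)
curl -u USER:PASSWORD -T bigfile.bin -o /dev/null -w 'upload: %{speed_upload} bytes/sec\n' https://cloud.example.com/remote.php/dav/files/USER/bigfile.bin

# same upload straight to the container IP, bypassing the proxy
# (-k because the container's own certificate is likely self-signed)
curl -k -u USER:PASSWORD -T bigfile.bin -o /dev/null -w 'upload: %{speed_upload} bytes/sec\n' https://192.168.1.50/remote.php/dav/files/USER/bigfile.bin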

Edited by skois
Link to comment
12 hours ago, skois said:

Can you try force-updating the docker again and then try using mail without my fix (maybe the new update fixed it and the fix isn't needed)?
If that doesn't work, try applying the fix again and see if the error comes up again. I just checked for updates but it says up to date, and I don't have this error...

Not sure what happened, but it works now. I left it broken last night and after reading your reply I was getting ready to try again, but... no need. I ran your script twice last night with the same results, but I didn't force update the docker. I appreciate your quick response and your knowledge. Thanks again.

Edited by NVrEnough
grammar
Link to comment

I don't get this. I have tried plenty of settings and tweaks without wanting to nag in this forum, but now I am at a loss for ideas.

The performance of uploading files through the web GUI is soooooo slow: around 30 MB/s, which I deem slow on a gigabit LAN and WAN.

I totally understand that NC isn't perfect OOTB, in the sense that it needs some tweaking to get the best performance, but I have tried enough of the solutions without significant results that I am close to giving up.

 

The share is cache-enabled and, according to DiskSpeed, I have no issues with read/write. My "Downloads" share has similar settings and speeds in NZBGet are around 100 MB/s.

 

Has anyone managed to make any changes or tweaks that perform better than 30 MB/s, or have any suggestions as to what I should focus on?

It's not as if I can't live with 30 MB/s via the GUI, but if higher speeds are possible I would really like to know.

 

What I've tried:

Using PostgreSQL instead of MariaDB (marginally improved CPU load under heavy use)

Enabled OPcache (no measurable difference)

Tuned php-fpm (no measurable difference)

 

What I haven't tried:

Redis backend (see the sketch just below)
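
Since Redis is the untried item above, a sketch of wiring it in from the container console. The occ config keys are the standard Nextcloud ones, while the paths, the abc user, and the Redis host/port are assumptions for this unRAID setup:

# use Redis for the local and file-locking caches
sudo -u abc php /config/www/nextcloud/occ config:system:set memcache.local --value '\OC\Memcache\Redis'
sudo -u abc php /config/www/nextcloud/occ config:system:set memcache.locking --value '\OC\Memcache\Redis'

# point Nextcloud at the Redis container (host and port are placeholders)
sudo -u abc php /config/www/nextcloud/occ config:system:set redis host --value '192.168.1.50'
sudo -u abc php /config/www/nextcloud/occ config:system:set redis port --value 6379 --type integer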

 

NC Documentation links

Link 1

Link 2

 

Linuxbots

Link 1

 

Link to comment
On 11/19/2020 at 11:29 PM, Mystic said:

@skois - I configured the client and it corrected the uploaded files in seconds. I am going to attempt to upload as much as I can prior to maxing out my local drive. I guess I can do my tree's top-level folders and sub-folders one by one... A long process, but maybe more reliable than the web.

 

I will let you know how things go. Keep me posted with your progress...

 

No worries on time... a day, a week; I am just happy someone more clever than me is on the case :)

OK, I made some edits that improved the "Error when assembling chunks" a little.
I edited the file appdata/nextcloud/php/www2.conf, adding the following UNDER the [www] section:

; Tune PHP-FPM https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html#tune-php-fpm

pm = dynamic
pm.max_children = 150
pm.start_servers = 15
pm.min_spare_servers = 10
pm.max_spare_servers = 30

Also on appdata/nextcloud/php/php-local.ini i have the following 

date.timezone = Europe/Athens
upload_max_filesize=16G
memory_limit=8G
max_execution_time=7200
max_input_time=7200
post_max_size=16G
max_file_uploads = 200
default_socket_timeout = 7200


; Enable PHP OPcache https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html#enable-php-opcache
opcache.enable=1
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.memory_consumption=128
opcache.save_comments=1
opcache.revalidate_freq=1


Note that some of the above options are redundant, but I decided to group them together so I know what's going on.

I tried to upload a 1.5 GB file and it completed fine (before, I was getting the error). Trying a 4 GB file now; will edit soon to let you know how it goes.

 

Aaaaand I got the error again :D
 

Edited by skois
Link to comment

I need some help with networking. I have NC behind a reverse proxy (NginxProxyManager). When I access cloud.mydomain.com from the local network and transfer files, the speed is awful because the traffic has to go out to the public side and come back.

How can I set up my pfSense to route cloud.mydomain.com to my local NC address? And if I do that, should I also change the config to NOT overwrite the host to cloud.mydomain.com?
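
One quick check once the split-DNS/host override is in place is to see which address the hostname actually resolves to from a LAN client. A trivial sketch (the hostname is the poster's placeholder; the expected answer would be the NginxProxyManager or Nextcloud LAN IP rather than the WAN IP):

# run from a machine on the LAN after adding the pfSense host override
nslookup cloud.mydomain.com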

 

Link to comment

Hey guys!

 

Little question: I'm using the Password app in NC20 and it tells me that it needs a higher PHP version, above 7.4. How do I check the PHP version and upgrade?
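
Checking the version is straightforward; a sketch, with the container name assumed (the PHP version is also shown at the bottom of Settings -> System, as mentioned earlier in this thread):

# from the unRAID host
docker exec -it nextcloud php -v

Since the PHP build is baked into the LSIO image, upgrading PHP generally means pulling a newer image rather than upgrading it inside the container.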

 

Edit:

 

With today's update to the LSIO Docker image, the problem of not being able to install the Code Server for Collabora Online is suddenly gone. Now it says it cannot connect:

 

"Es konnte keine Verbindung zum Collabora Online-Server hergestellt werden. Dies könnte auf eine fehlende Konfiguration Deines Web-Servers zurückzuführen sein. Für weitere Informationen besuche bitte:Collabora Online mit einem Klick mit Nginx verbinden"

 

Which means in English:

 

"It was not possible to establish a connection to the Collabora Online server. This could be due to a missing configuration of your web server. For further information, please visit: Connect Collabora Online with one click with Nginx"

 

And additionally, there is some odd behaviour with my Collabora installation. I cannot install the integrated Code Server. When I type 

 

sudo -u abc php -d memory_limit=512M occ app:install richdocumentscode

 

nothing happens. The installation in the apps section isn't working either...
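
As a hedged aside: occ usually has to be run from the Nextcloud web root or called by its full path in this container, so a variant worth trying (path and user as used elsewhere in this thread):

sudo -u abc php -d memory_limit=512M /config/www/nextcloud/occ app:install richdocumentscode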

 

 

Thanks in advance friends!

Edited by hundsboog
Update
Link to comment
17 hours ago, skois said:

OK, I made some edits that improved the "Error when assembling chunks" a little.
I edited the file appdata/nextcloud/php/www2.conf, adding the following UNDER the [www] section:


; Tune PHP-FPM https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html#tune-php-fpm

pm = dynamic
pm.max_children = 150
pm.start_servers = 15
pm.min_spare_servers = 10
pm.max_spare_servers = 30

Also on appdata/nextcloud/php/php-local.ini i have the following 


date.timezone = Europe/Athens
upload_max_filesize=16G
memory_limit=8G
max_execution_time=7200
max_input_time=7200
post_max_size=16G
max_file_uploads = 200
default_socket_timeout = 7200


; Enable PHP OPcache https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html#enable-php-opcache
opcache.enable=1
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.memory_consumption=128
opcache.save_comments=1
opcache.revalidate_freq=1


Note that some of the above options are redundant, but I decided to group them together so I know what's going on.

I tried to upload a 1.5 GB file and it completed fine (before, I was getting the error). Trying a 4 GB file now; will edit soon to let you know how it goes.

 

Aaaaand I got the error again :D
 

Have you tried to disable chunking completely? That made a huge change for me.
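
For anyone wondering how to do that: for web uploads this is commonly done through the files app's max_chunk_size setting, where 0 means unlimited chunk size, i.e. uploads are not split. A sketch using the LSIO paths from this thread (the desktop client has its own chunking settings in its nextcloud.cfg, not covered here):

# disable chunked uploads for the web interface (0 = unlimited chunk size)
sudo -u abc php /config/www/nextcloud/occ config:app:set files max_chunk_size --value 0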

Link to comment
