robertoal

Everything posted by robertoal

  1. The following error occurs when pressing Start, Stop, or Restart from the dropdown menu on a docker icon in the dashboard, so using dockers has become very difficult. The same goes for the Docker menu (it seems to use the same dropdowns), AND I cannot delete docker containers anymore because the 'Yes, delete it' button doesn't work anymore. Can someone help me with this very annoying problem? Background info:
     - This behavior began at an arbitrary moment
     - It happened on 6.8.3 stable
     - Updated to 6.9.0 beta25 - same problem
     - Tried Chrome, Firefox, Edge, and Internet Explorer - same problem
  2. Internally (on your home network) you should use http; enabling https is more hassle. From the internet you want https; for this, use the NginxProxyManager docker (reverse proxy).
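     For reference, a rough sketch of running it by hand, assuming the jc21/nginx-proxy-manager image and example appdata paths (on Unraid you would normally install it from Community Applications instead); port 81 is the admin UI, 80/443 are the proxied HTTP/HTTPS ports:
     docker run -d --name nginx-proxy-manager -p 80:80 -p 81:81 -p 443:443 -v /mnt/user/appdata/npm/data:/data -v /mnt/user/appdata/npm/letsencrypt:/etc/letsencrypt jc21/nginx-proxy-manager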
  3. You sir are a king! Thank you VERY much. It works like a charm! @Abigel If you change the port mapping from the default, you have to manually add the port in the browser: x.x.x.x:NEW_PORT. Or: if you edit the docker container and turn on advanced view in the top right corner, you can change the WebUI text box to: http://[IP]:[PORT:80]/ (replace the 80 with the non-SSL container port) or https://[IP]:[PORT:443]/ (replace the 443 with the SSL container port). This makes the right-click 'WebUI' option work again.
  4. Absolutely sure! Can you please advise me on this? I have some bad experiences with changing permissions on the nextcloud folder.. The last times I tried stuff like this, Nextcloud gave me the 'rights' error and after that the internal server error, lol. So the permissions at this moment are, inside the docker container:
     drwxrwx--- 1 www-data root 246 Mar 29 13:31 data
     From the unraid terminal:
     drwxrwx--- 1 sshd root 246 Mar 29 13:31 data/
     If I understand this correctly: the folder on unraid itself (data in this case) does not inherit the permissions given IN the docker container? So: if I set the owner to 'nobody' and the group to 'users', which is the same as the other shares:
     drwxrwxrwx 1 nobody users 0 Mar 14 15:23 SHARE_NAME/
     it should work? Or do I need to change the permissions in the docker container as well?
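     A minimal sketch of checking and changing this from the Unraid terminal, assuming the data share lives at /mnt/user/nextcloud/data; whether nobody:users is really the right owner depends on which numeric UID/GID the container runs as (www-data inside the container maps to whatever numeric ID is stored on disk):
     ls -ln /mnt/user/nextcloud/data                    # show the numeric UID/GID actually on disk
     chown -R nobody:users /mnt/user/nextcloud/data     # only if the container is run as 99:100
     chmod -R u+rwX,g+rwX /mnt/user/nextcloud/data      # make sure owner and group can read/write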
  5. Yes it is x64, and it should work but it doesn't (I did install the apps from the start); no matter for now: it works with the external onlyoffice docker. My paths are set as follows: I have access to /mnt/user/nextcloud (and can create folders/files, so the share is working). I don't have access to /mnt/user/nextcloud/data -> the error is: I do not have permission to access the folder.
  6. @knex666 I had, as mentioned some time ago, the exact same problem; I solved it with an external onlyoffice docker. Reinstalling did nothing for me! Very strange, is it not? I have a follow-up question: I can't access the mounted data folder from a Windows machine (the mounted folder is a share on the unraid server). I thought this had to do with:
     ExtraParams: --user 99:100 --sysctl net.ipv4.ip_unprivileged_port_start=0
     PostArgs: && docker exec -u 0 NAME_OF_THIS_CONTAINER /bin/sh -c 'echo "umask 000" >> /etc/apache2/envvars'
     So I added these to the specific textboxes, but no dice unfortunately. Can you help me debug this problem? I really appreciate the time you invest in this thread!!
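     A couple of hedged checks to see whether those settings actually took effect, assuming the container name used in the PostArgs above:
     docker exec NAME_OF_THIS_CONTAINER id                                  # should report uid=99 gid=100 if --user 99:100 is active
     docker exec NAME_OF_THIS_CONTAINER grep umask /etc/apache2/envvars     # should show the appended 'umask 000' line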
  7. Hi there, I'm positive I'm running your image! But it just didn't work from the start.. no idea what's going on. But: it does work now. Good idea about mounting the php.ini! Will do this. Lastly: now on to getting Talk to work reliably! Thanks for your help.
  8. Thank you for this quick reply! For me onlyoffice didn't work out of the box? Which address should I use in the Nextcloud onlyoffice settings then? Finally: where can I change the PHP settings? Memory is limited to 512 MB now... Very much appreciated, all the work you put into this!!! edit: I think I found it by running php --ini in the console: /usr/local/etc/php/conf.d/memory-limit.ini. Changed it there and it seems to work so far!
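     A minimal sketch of the same change from the host, assuming the container is called nextcloud (the edit lives inside the container, so it is lost when the container is rebuilt unless the file is mapped out as a volume, as mentioned in the later post about mounting php.ini):
     docker exec nextcloud sh -c 'echo "memory_limit = 1G" > /usr/local/etc/php/conf.d/memory-limit.ini'
     docker exec nextcloud php -r 'echo ini_get("memory_limit"), PHP_EOL;'   # verify the new limit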
  9. Found out some limitations.. When not using bridge but a specific IP address for the docker, I get this MariaDB error:
     Failed to connect to the database: An exception occurred in driver: SQLSTATE[HY000] [2002] No such file or directory
     Solution: just use bridge and forward the port like stated on the first page, but use port 80 as the container port instead of 8080. There seems to be a problem with docker-to-docker communication when using a specific IP address (any docker to any docker, not just nextcloud). When trying to edit the config.php file from a samba share in Windows, the file appears to be empty. It isn't: just ssh to the unraid box and edit it there. Question: what is the right port for the internal onlyoffice installation? I now use an extra docker for this..
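     For reference, a rough sketch of the bridge + port-forward setup described above, using the stock nextcloud image name and example ports as assumptions. The SQLSTATE[HY000] [2002] 'No such file or directory' error usually means PHP tried a local MySQL socket, so point Nextcloud at the database via an IP:port it can actually reach rather than 'localhost':
     docker run -d --name nextcloud --network bridge -p 8080:80 nextcloud    # host port 8080 -> container port 80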
  10. @testdasi Thank you for thinking with me here. 1. I'm not really good at programming in general, although I have some experience with Python. I hope I have the time to make a project of this! 2. This is a really good idea, but.. it is almost the same as using the cache as a temp drive, isn't it? So the user has to manually copy the files to another directory when he/she is done with the project. It is not the end of the world, but again: FIFO looks like a very logical solution for this use case (and for the limetech team probably quite easy to implement). 3. I have a 4*800 GB SSD cache array at the moment; trimming in a cache array IS permitted, isn't it?
  11. A little update: I haven't been able to script the FIFO concept yet unfortunately; I'm simply not experienced enough with bash 🙂 So I will leave this feature to be added (hopefully) in the future. Thanks nuhll for bearing with me, and for the solution you mention. The thing with all these solutions is: I want full transparency for the users. They don't need to think about where to move which files, so for now I use the 'move above 80% usage' scheme; although not optimal, it is workable. Is it really that critical if your photos get read from non cache? I mean, they won't be 100s of GB? Yes: the .nef files aren't that large by themselves, 40-50 MB, but especially with multiple users selecting photos for the final selection an HDD is very noticeable. And: I don't want to spin up the drives all the time; with the right caching scheme (FIFO) they should only spin up once a week or something like that, which is good for longevity.
  12. Sorry, didn't read that bit.. (I think he edited his post) find /boot/ -type f -name mover doesn't seem to find a mover script.. any idea where the new one is located?
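     Since the OS is unpacked into RAM at boot, the script probably isn't on the flash drive at all; a hedged way to search the live filesystem instead:
     find /usr/local /usr/sbin /usr/bin -type f -name mover 2>/dev/null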
  13. Yes, which is why rsync copies the files from the cache to /mnt/user0, and subsequently removes the files from the cache drive? Everything located in /mnt/user0 will be shown in /mnt/user, correct? See this post:
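     A minimal sketch of that copy-then-remove idea, assuming a share named Projects (/mnt/user0 is the array-only view of the shares, so writing there bypasses the cache):
     rsync -a --remove-source-files /mnt/cache/Projects/ /mnt/user0/Projects/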
  14. Again, thanks to all of you for the replies!
     @primeval_god You are right, but again: the mover moves ALL files, not just the oldest ones, correct? I searched the forum for FIFO caching, caching, data based caching and so forth. I will try searching for it again but with the script tag added 🙂
     @nuhll I don't think I explained my question well enough. As you say, the mover can be invoked at a set percentage or a set time. But then it moves ->ALL<- of the files from the cache drive to the array, correct? So this means that the moment the 50% (as stated in your example) disk usage is hit, anybody working on a project has their projects moved to the slow array: this is the opposite of what I want! Right now I have the standard raid10 mode with my cache SSDs (4 of them). Just the other day the wrong 2 went bad, which means I lost all my VMs, dockers etc. Especially on critical projects (most of the projects are weddings, so that's pretty much always) I want to be on the safe side.. That seems so overcomplicated and useless -> So no, I don't think it is. Or maybe I'm still missing something here? The only extra thing I want the mover to do, essentially, is: move everything based on date until the threshold is reached?
     @trurl Indeed: I discovered this: it is why there is a /mnt/user0 drive. The /mnt/user/$Share folders aggregate everything from the cache and array drives, I think?
  15. OK. Found the mover script. The first thing to change should be a check whether a threshold value has been hit (disk usage percentage):
     # Only run the script when the cache disk usage threshold is exceeded
     threshold=75
     diskusage=$(df -hl | grep '/mnt/cache' | awk '{ print $5 " " $1 }' | cut -f1 -d '%')
     if [[ "$diskusage" -lt "$threshold" ]]; then
         exit 0
     fi
     In the mover loop, this command:
     find "./$Share" -depth \( \( -type f ! -exec fuser -s {} \; \) -o \( -type d -empty \) \) -print \( -exec rsync -i -dIWRpEAXogt --numeric-ids --inplace {} /mnt/user0/ \; -delete \) -o \( -type f -exec rm -f /mnt/user0/{} \; \)
     has to be changed somehow to reflect the output of this command:
     find /mnt/cache/$Share -type f -print0 | xargs -0 ls -tr
     which lists all files, beginning with the oldest one, and subsequently moves them with the rsync command. After each file there has to be a check whether the disk usage percentage has dropped below the threshold, 75% in this case (on a per-file basis, because some files are like 80 GB), or if this is too slow maybe it is possible to have a counter after which we check:
     diskusage=$(df -hl | grep '/mnt/cache' | awk '{ print $5 " " $1 }' | cut -f1 -d '%')
     if [ "$diskusage" -lt "$threshold" ]; then
         break
     fi
     Any input on how to change the find / rsync command? I'm quite new to bash.. so.. The last thing to add is an rsync command to sync the cache to an HDD share for extra duplication of files. (Just now I had 2 SSDs fail in a 4-drive cache pool.. I sync the pool every day, so nothing was lost there, but it feels a bit safer this way.) Am I on the right track?
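     A rough sketch of what that oldest-first loop could look like, assuming a single pool mounted at /mnt/cache and moving files straight to /mnt/user0; it skips the open-file checks the real mover does, so treat it as a starting point rather than a drop-in replacement:
     #!/bin/bash
     threshold=75   # stop once cache usage drops below this percentage

     cache_usage() {
         df --output=pcent /mnt/cache | tail -1 | tr -dc '0-9'
     }

     # list every file on the cache, oldest modification time first
     find /mnt/cache -type f -printf '%T@ %p\n' | sort -n | cut -d' ' -f2- |
     while read -r file; do
         (( $(cache_usage) < threshold )) && break
         rel=${file#/mnt/cache/}
         # -R with the /./ anchor recreates the share/subfolder structure under /mnt/user0
         rsync -aR --remove-source-files "/mnt/cache/./$rel" /mnt/user0/
     done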
  16. First of all: thank you both for the replies. @Nuhll I know about the caching options, but the problem, as stated, is that the entire share will be located on the cache drive.. I ONLY want the newest projects to be on there. @jonathanm Yes, I know it doesn't work like that, but it sounds strange to me that it doesn't; this isn't such a strange use case, is it? I think freenas and a couple of other solutions do offer this? There are a lot of use cases where this can be beneficial, especially project-based workloads like photography or videography. I love unraid for everything else and would not like to switch, so how can I solve this problem? I tried:
     - Syncing with nextcloud (but also other syncing software), so the cache would be on the PC itself, but this is not the ideal solution. Especially for video projects (not every pc has enough space for a project like that).
     - Making a 'work' share on the cache drive. But this brings a lot of hassle with it; people have to remember copying the contents to the archive directory, and if they do, it takes a lot of time; so they have to ask me for a command line copy.
     So the most obvious way would be to adapt the mover script, so instead of moving the entirety of the cache disk it only moves on a per-directory basis, and it tries to keep an x amount of space free. The max size of a project is 1 TB, so 1 TB has to be reserved for when a new project is copied onto the cache drive (which for the user is just the folder they want to put the project in):
     1. 4 TB cache pool; 3 TB is taken by 'older' projects. User copies new files.
     2. Mover is invoked; it checks to see if the 1 TB threshold has been hit (it has in this case).
     3. Mover moves only the amount of data needed to get under the 1 TB threshold to the HDDs, FIFO style (oldest directories first).
     Is this something that is doable? Some pointers on how I can achieve this? Love to have some input!
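     A minimal sketch of the reservation check in step 2, assuming the pool is mounted at /mnt/cache and 1 TiB should stay free:
     reserve=$((1024 * 1024 * 1024 * 1024))               # 1 TiB in bytes
     free=$(df -B1 --output=avail /mnt/cache | tail -1)
     if (( free < reserve )); then
         echo "under the reservation - start moving the oldest project directories to the array"
     fi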
  17. The server is used for projects of around 60 GB apiece (photos). What I want: the photographer copies the XQD cards from the pc to the unraid server. Once there, unraid needs to do 3 things: 1. In the background, copy the photos to the HDD. 2. If the cache drive is full: remove the oldest project (it will still be on the HDD of course, only the cached copy will be deleted). 3. Keep the newly created photos on the cache drive until they get pushed out by newer projects (FIFO). I think this is called FIFO caching: how can I achieve this? Thanks in advance!
  18. FINALLY! Progress: QPI (QuickPath Interconnect) seemed to be part of the problem. Settings I used: Memory snoop -> Early Snoop; linkop/linkos/mesagen -> Disabled. And I updated to the latest virtio drivers. Still not perfect.. So: anybody? Anything more I can look at? I will post my BIOS settings for other people to learn from.
  19. Some other things I tried: 16. Disabled NUMA altogether 17. Disabled power management altogether 18. Disabled Hyperthreading. These steps combined made it even worse. Now the mouse is lagging almost the entire time. Even keyboard presses don't register in real time. So... bump. Anybody?
  20. System: Unraid 6.5.1, dual Xeon E5-2670, 32 GB memory (non-ECC), Asus Z9PA-D8 motherboard, a 970 and a 660 Ti, 2x 4 TB and 1x 1 TB spinners for the array, 2 TB for movies and TV shows, 1 TB hot spare (will become a 4 TB one), 120 GB SSD for appdata and download cache, 512 GB Samsung NVMe SSD for cache (and storage for the VMs).
     Use case: because I didn't really need this workstation anymore, I figured: why not make it a server? Dockers: Plex, Crashplan Pro, Sonarr, transmission, etc. VMs: 2 VMs for gaming, and I want them to run Lightroom and some Premiere Pro.
     Problems on the 970 VM:
     1. Massive latency, which I can feel when moving the mouse and typing on the keyboard.
     2. GPU not functioning properly: if you move a window you can see the screen building up block by block, as if the GPU can't keep up.
     3. Stuttering audio with dropouts (probably related to #1).
     4. Disk speed only half of the rated speed (NVMe Samsung cache drive).
     5. Program startup is painfully slow.
     Solutions I tried on the 970 VM:
     1. Because of the USB latency I did a PCI passthrough of the entire controller. This seemed to help with the mouse and keyboard being laggy.
     2. Ethernet latency (LatencyMon told me about this one); this is solved 'somewhat' by passing through one of the gigabit ethernet ports.
     3. Setting BIOS C-states to disabled (no difference).
     4. Setting all performance options ON in the BIOS, so no power saving (no difference).
     5. Pinning CPU cores - isolcpus=8-15,24-31 (didn't seem to make ANY difference).
     6. Set emulator cores (didn't seem to make ANY difference).
     7. Set everything to MSI instead of IRQ with the MSI tool (no difference).
     8. Disabled memory ballooning (no difference).
     9. Did the 'vfio-pci.ids=10de:13c2,10de:0fbb' trick in the Syslinux config file (no difference).
     10. Tried different memory sizes, 8 GB to 24 GB (no difference).
     11. Tried different numbers of cores, from 4 to 16 (no difference).
     12. I only use 1 NUMA node, and with the correct core groups (some difference; I assigned cores from both NUMA nodes and this made it worse).
     13. I installed the newest virtio drivers (no difference).
     14. While trying all this I only used this one VM. NO OTHER VM WAS ON WHILE TESTING.
     15. I used a USB DAC instead of the NVIDIA HDMI audio - stuttering and ticks are less, but still there.
     Second VM: the VM with a 660 Ti. I don't know if this is related, but when installing the drivers for the 660 Ti the ENTIRE system crashes. So... I'm at the end of my wits here. Is there anything else I could try? Maybe some mad BIOS setting I overlooked? Any help will be much appreciated!
     unraiddata-diagnostics-20180511-0958.zip
     Win10 met passtrough GPU-USB-ETHERNET.xml
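     For reference, a sketch of what the append line referred to in items 5 and 9 can look like in syslinux.cfg on the flash drive, using the PCI IDs and core range from this post (adjust to your own hardware):
     label Unraid OS
       kernel /bzimage
       append vfio-pci.ids=10de:13c2,10de:0fbb isolcpus=8-15,24-31 initrd=/bzroot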
  21. @swiguy: did you fix your problem? I have the same processor and I think the same chipset, only dual CPU. And I'm having the exact same problem: MASSIVE latency in LatencyMon, especially on the GPU. I did the exact same optimizations as well. So: did you find a solution to your problem?
  22. Squid to the rescue! Crashplan had backed up the old one, so it was a simple matter of restoring the libvirt image file. All is well again. Thanks again!
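     For anyone finding this later, a minimal sketch of such a restore, assuming the default libvirt image location on Unraid and a placeholder backup path (stop the VM service in Settings before copying):
     cp /path/to/backup/libvirt.img /mnt/user/system/libvirt/libvirt.img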
  23. So unfortunately the disk on which the VMs were installed crashed. Fortunately, I have a backup - but how can I import the old VM profile? The win10 VM needed very specific settings to boot and I can't really remember what those were. Thanks in advance.
  24. Ok. Somehow it was unable to find sdh.. on which the data was located. Thank god I make backups :-) Now I just removed all partitions and let it recreate them. I think it works now, although it still says 'no balance found'. Is this alright?
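     A hedged way to check, assuming this is a btrfs cache pool mounted at /mnt/cache ('No balance found' normally just means no balance operation is running at the moment):
     btrfs balance status /mnt/cache      # reports that message when nothing is running
     btrfs filesystem show /mnt/cache     # confirm all pool devices are present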