RAINMAN

Members
  • Posts: 179
  • Joined
  • Last visited
Everything posted by RAINMAN

  1. To update anything in grafana.ini you can map a volume to it, or you can use more environment variables. See http://docs.grafana.org/installation/configuration/; basically you use the format GF_<SectionName>_<KeyName> = some value, just like you did for the plugins. All good Atribe, thanks for putting out this docker. I've been playing with it quite a bit and getting some cool graphs. I made a shell script for my DD-WRT router and a PowerShell script for my Windows machines so far.
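As a sketch of that naming rule (the section and key names below are illustrative examples, not settings from this setup), a grafana.ini entry maps to an environment variable by uppercasing the section and key and joining them with GF_:

```shell
# Map a grafana.ini [section] + key to its GF_ environment variable name.
# Grafana uppercases both parts: [security] admin_password -> GF_SECURITY_ADMIN_PASSWORD
gf_env_name() {
  section=$(echo "$1" | tr '[:lower:]' '[:upper:]')
  key=$(echo "$2" | tr '[:lower:]' '[:upper:]')
  echo "GF_${section}_${key}"
}

gf_env_name security admin_password   # -> GF_SECURITY_ADMIN_PASSWORD
gf_env_name server root_url           # -> GF_SERVER_ROOT_URL
```

You would then pass the resulting variable to the container, e.g. `docker run -e GF_SERVER_ROOT_URL=... grafana/grafana`.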
  2. After looking at this closer, you can have it install plugins via the CLI by creating an environment variable with the key GF_INSTALL_PLUGINS and setting the value to the names of the plugins you want to install. Example value for 2 plugins:
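For example (the plugin IDs below are illustrative, not the ones from the original post), GF_INSTALL_PLUGINS takes a comma-separated list of plugin names, and the container splits on commas and installs each entry with grafana-cli. A rough sketch of how the value is consumed:

```shell
# Illustrative value: two plugin IDs separated by a comma (no spaces).
GF_INSTALL_PLUGINS="grafana-clock-panel,grafana-piechart-panel"

# The container entrypoint splits the list on commas and installs each
# plugin; here we only print the commands instead of running them.
echo "$GF_INSTALL_PLUGINS" | tr ',' '\n' | while read -r plugin; do
  echo "grafana-cli plugins install $plugin"
done
```

On the container itself you would just set the variable, e.g. `-e "GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-piechart-panel"`.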
  3. Something seems wrong with the grafana docker. The plugins directory doesn't seem to work if I drop files in there. If I attach to the docker and run the CLI command, it adds the plugin properly, but it doesn't show in the plugins directory externally; it does within the docker, yet the mapping seems to be right. When I update the docker it destroys the plugins, as expected since the mapping seems broken. Second, how do you update the settings (grafana.ini)? This is not exposed??
  4. Reading the various links, it's a PHP bug beyond our control. Nextcloud should take the lead on this; I don't think workarounds like that are something we should be looking at as a rule. Ya, I agree with that. I was doing a bit more digging: ownCloud is working using LDAP in the same AD environment, and the main difference I see is related to an iconv problem, I think. In the Alpine image the PHP extension for iconv is undefined:
     iconv
     iconv support => enabled
     iconv implementation => unknown
     iconv library version => unknown
     Directive => Local Value => Master Value
     iconv.input_encoding => no value => no value
     iconv.internal_encoding => no value => no value
     iconv.output_encoding => no value => no value
     The implementation is unknown. From what I can find, the way to resolve this is to use the libiconv extension instead. This seems to be an issue on Alpine. Is it possible to build using this instead of iconv? No, because it's not just an Alpine issue; the bug is in many implementations of PHP from 2008 onwards, including PHP 7. It is up to Nextcloud to resolve this; if it were "fixed" and Nextcloud then made the necessary changes, the "fix" would have been a waste of time and effort. Ok, whatever. I just updated my ownCloud docker to Nextcloud and it's working fine, so I'll just use that. I was just reporting a bug, but if no one wants to look into it, that's fine. It's not PHP, as it's the same version in both my dockers, and it's not ownCloud, as I now have it running fine in another docker. The only difference is the base distro and the iconv module showing up as unknown instead of 2.19 as it does in the other docker.
  5. Reading the various links, it's a PHP bug beyond our control. Nextcloud should take the lead on this; I don't think workarounds like that are something we should be looking at as a rule. Ya, I agree with that. I was doing a bit more digging: ownCloud is working using LDAP in the same AD environment, and the main difference I see is related to an iconv problem, I think. In the Alpine image the PHP extension for iconv is undefined:
     iconv
     iconv support => enabled
     iconv implementation => unknown
     iconv library version => unknown
     Directive => Local Value => Master Value
     iconv.input_encoding => no value => no value
     iconv.internal_encoding => no value => no value
     iconv.output_encoding => no value => no value
     The implementation is unknown. From what I can find, the way to resolve this is to use the libiconv extension instead. This seems to be an issue on Alpine. Is it possible to build using this instead of iconv? Edit: Both dockers are running PHP 5.6.24, so I don't think it's a pure PHP issue.
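A quick way to see which iconv implementation a given image's PHP build reports is to grep the `php -i` output. The snippet below filters a captured sample matching the dump above, rather than calling php directly, just to show the line to look for; on a glibc-based image this line reports glibc (or a libiconv version) instead of unknown:

```shell
# Sample of the phpinfo() iconv section from the Alpine-based image
# (as captured above); on a live image you would run:
#   php -i | grep 'iconv implementation'
phpinfo_sample='iconv support => enabled
iconv implementation => unknown
iconv library version => unknown'

echo "$phpinfo_sample" | grep 'iconv implementation'
# prints: iconv implementation => unknown
```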
  6. Getting an iconv error using the LDAP module. There is a bug report on the Nextcloud website (https://github.com/nextcloud/server/issues/272), but it points to a PHP bug and a specific way to set up docker. I don't know if this information (https://github.com/docker-library/php/issues/240) can be implemented in our docker so it works properly?
  7. I set up LDAP integration with my domain. I logged out and was logging in with a domain user, and now I am getting a 500 error. It also seems to have rebooted the docker when I attempted the login.
     [cont-init.d] executing container initialization scripts...
     [cont-init.d] 10-adduser: executing...
     -------------------------------------
     Brought to you by linuxserver.io
     We do accept donations at: https://www.linuxserver.io/donations
     -------------------------------------
     GID/UID
     -------------------------------------
     User uid: 99
     User gid: 100
     -------------------------------------
     [cont-init.d] 10-adduser: exited 0.
     [cont-init.d] 20-config: executing...
     [cont-init.d] 20-config: exited 0.
     [cont-init.d] 30-keygen: executing...
     using keys found in /config/keys
     [cont-init.d] 30-keygen: exited 0.
     [cont-init.d] 40-config: executing...
     [cont-init.d] 40-config: exited 0.
     [cont-init.d] 50-install: executing...
     [cont-init.d] 50-install: exited 0.
     [cont-init.d] 60-memcache: executing...
     [cont-init.d] 60-memcache: exited 0.
     [cont-init.d] done.
     [services.d] starting services
     [services.d] done.
I'm getting all of these PHP errors in the log file:
     2016/08/07 09:59:57 [error] 288#0: *284 FastCGI sent in stderr: "PHP message: PHP Fatal error: Call to a member function getFileInfo() on null in /config/www/nextcloud/lib/private/files/filesystem.php on line 874" while reading response header from upstream, client: 192.168.254.1, server: _, request: "GET /index.php/apps/files/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "cloud.stevensononthe.net"
     (the same fatal error repeats at 10:00:04, 10:00:23, 10:00:48, 10:01:30, 10:01:32, 10:01:54, 10:01:57, 10:02:03, and 10:03:33)
     I'm not sure how to resolve this now, as the page just gives me a 500 error. Edit: If I clear the browser cache I can see the login page again. Edit2: After clearing the cache it's working fine. Seems a non-issue now. Move along. hehe
  8. When you open the integrity control page, it will start a background job every 3 minutes which is used to update the build and export status of each disk. This is unrelated to the find command. You can check if the background job has issues by running it in the CLI: /etc/cron.daily/exportrotate -q Any suggestions why 2 of my disks need to be rebuilt every day? Possibly I have too many files for inotify? I have a few million files (due to Plex appdata daily backups). I am pretty sure I figured this out with a lot of testing. I'm running an rsnapshot sync each night to back up my appdata folder to my array. This is not getting picked up by the plugin. Why, though, I am not sure. This is run directly on unRAID via cron; it's not within a plugin. Maybe that's it? I'm going to try excluding this folder tonight and see if it fixes the unbuilt disk. This solved it. It seems that anything transferred with rsnapshot sync is not getting picked up by the plugin.
  9. When you open the integrity control page, it will start a background job every 3 minutes which is used to update the build and export status of each disk. This is unrelated to the find command. You can check if the background job has issues by running it in the CLI: /etc/cron.daily/exportrotate -q Any suggestions why 2 of my disks need to be rebuilt every day? Possibly I have too many files for inotify? I have a few million files (due to Plex appdata daily backups). I am pretty sure I figured this out with a lot of testing. I'm running an rsnapshot sync each night to back up my appdata folder to my array. This is not getting picked up by the plugin. Why, though, I am not sure. This is run directly on unRAID via cron; it's not within a plugin. Maybe that's it? I'm going to try excluding this folder tonight and see if it fixes the unbuilt disk.
  10. When you open the integrity control page, it will start a background job every 3 minutes which is used to update the build and export status of each disk. This is unrelated to the find command. You can check if the background job has issues by running it in the CLI: /etc/cron.daily/exportrotate -q After some testing, this was exactly it. I had the page open, so it was automatically refreshing every 3 mins. Great to know. Any suggestions why 2 of my disks need to be rebuilt every day? Possibly I have too many files for inotify? I have a few million files (due to Plex appdata daily backups).
  11. The hashes are stored in the extended attributes of a file and should stay up to date automatically when protection is enabled. Export to a file is optional and a manual action; it does not affect the working of the plugin. The status shows when an existing export file gets outdated, but the user decides when to update. The disk verification tasks table defines how and when verification of disks takes place. See the online help for more information. I suppose the name of the folder/file isn't really **PRIVATE** (asterisks in a folder/file name are not allowed). The message "no export of file" happens when no hash key value is stored in the extended attributes of the file. The usual approach is to rebuild. Are you sure you are using XFS as the file system on disk 6, or have you changed the hashing method from one to another midway? All my disks are XFS. I haven't changed the hashing method. I can rebuild the hashes on this disk and see if that fixes it. My main concern now is the FIND feature. This is using a significant portion of my CPU when I am not wanting to do anything with finding duplicates.
  12. I'm not sure this is working right for me. I have 6 disks and built and exported all of them. A few days later I noticed disk 5 or 6 (I forget which) was not up to date on the build and export. I recreated them, fine. Now last night I had a scheduled check at 3am. It started disks 3 and 4 only. Why not 1, 2, 5 and 6? And then I also noticed that 5 and 6 are not up to date again. What gives? I thought this was supposed to keep itself up to date as files are copied to the array. Anyone help me get this set up right? Or what can I check to see why it's not working? Also, when I do an export of one where the build is up to date but the export is not, I get a lot of error messages:
     /mnt/disk6/TV/**PRIVATE**
     Mar 30 16:16:28 FILESERVER bunker: error: no export of file: /mnt/disk6/TV/**PRIVATE**
     (the same "no export of file" error repeats for several more files)
     One more thing. Now that I updated the plugin, the find command keeps spiking my CPU usage for 2-3 mins every 2-3 mins. What is this doing? After removing the plugin this stopped; installing it again creates the same behaviour. Any way to go back to a version without the find feature?
  13. Sorry I missed this request earlier. Currently, when notifications are enabled in the settings of File Integrity, it will only produce a message when something unusual is found; in other words, checks without any corruptions (the normal condition) are not reported. I can add a log entry + notification telling the start/stop times. Please do. It was supposed to run last night and, as far as I can tell, it didn't run, but there is nothing in the log to indicate anything either way.
  14. Also seems like my issue. I thought it had to do with VMs, but I suppose it would be anything. http://lime-technology.com/forum/index.php?topic=46904.msg448431#msg448431
  15. I managed to solve this myself after some deep diving into how Samba and Active Directory work. Basically, none of the group policies set in Server 2012 will affect the Linux box; I removed all the ones I was trying. What needs to be done is enable guest access via the Samba configuration. In the console I added (nano /boot/config/smb-extra.conf):
     map to guest = Bad User
     usershare allow guests = yes
     guest ok = yes
     guest account = user
     Then restart Samba:
     /etc/rc.d/rc.samba restart
     When setting permissions for each folder, adjust "Everyone" depending on whether you want guests to have access or not. It's a bit annoying that I can't just add the "Guest" account to the groups I created, but this is functional at least. I also noticed that this affects the top-level share, but all the files and folders within a share have the owner nobody, so if I give guests read-only access to the top-level share they get full access to all files below. After I finish setting top-level permissions I will have to change the ownership of all files/folders to my domain admin. I would have thought unRAID would have set that when it joined the domain?? Is this a bug that it didn't change these permissions from nobody to what I set in "AD initial owner"? I'm not sure this is the best process, but does anyone have suggestions for a better/easier way? Edit: I changed guest account = user from guest account = nobody because nobody already had RW permissions on all files and I couldn't find an easy way to remove this. The user account had RO permissions only, from when I used it before Active Directory.
  16. Thanks for your help. I'm sure others have done it. I can't be the only one that wants non-domain users connecting to their unRAID shared folders.
  17. I had manually entered it in my DNS server. I have no problems resolving fileserver -> 192.168.254.3
  18. I tried creating a Group Policy for the fileserver, then rebooted the fileserver, but it didn't make any difference that I can see.
  19. I added unRAID to my domain to handle permissions. Some of the groups I created had the user "Guest" in them. I thought this would give users who are not part of the domain read-only access, but as soon as I type in \\tower I get a prompt to enter a username/password. If I type anything in there that is not part of the domain, it doesn't allow a connection; if I type a domain account, it works fine. I was hoping I could grant non-domain users limited access. Is this possible? Here is an example of my permissions. This is what happens if I try to access that folder with a PC that is not on the domain. If I enter DOMAIN\user it works, but that's beside the point, because I want all guests on the network to access some of the folders.
  20. If I've been reading correctly, these boards will not work with these CPUs because they're v1. Am I incorrect? The blurb on their site only mentions v2, but v1 and v2 chips share the same socket. Assuming I have the model number correct, Intel lists the E5-2670 v1 as compatible with this board: http://ark.intel.com/products/56333/#@compatibility EDIT: I tried their shopping cart, and it doesn't look like they ship outside of the US... Not sure if a manual order will be different... I contacted them earlier. They will ship to Canada, but it's not cheap; USPS was like 65 USD.
  21. I am trying to set up a new Server 2012 VM on my unRAID box, and since then I keep getting complete networking dropouts. Everything from the shares to the webgui to the dockers and VMs, I cannot communicate with. This is happening multiple times daily. Prior to trying to set up the VM I had no issues at all. I hooked a physical monitor and keyboard to it to grab the syslog; I can't see anything in it to indicate a problem. This is when it dropped out and then I attached a USB keyboard; prior to this, nothing in the log. I then did some checking around at the interfaces and didn't see anything outstanding. In fact, best I can tell, it was all exactly the same as it is after I restarted networking. Restarting networking fixed the issue, but I really don't know the root cause. Anyone know of something I can check? Edit: I removed the nerd pack and a few other plugins. I also noticed this happens most often when I am using Server Manager from a Windows 10 PC connecting to the VM. I don't know if that has anything to do with it, as it is not *every time*. Edit2: It did it again, so the plugins I removed had no effect. This time it stalled out trying to add the VM to my domain. fileserver-diagnostics-20160225-1456.zip
  22. Ahh yes, disabling that did clear it up. Now it idles about 2-5%. Thanks for the help.
  23. I just noticed that I have an odd occurrence of a find command spiking my CPU every few seconds. I turned off all my dockers (except cAdvisor) and all my drives are idle except the cache drive, yet the CPU keeps spiking. It's not the end of the world, but I was hoping I could have it idle a lot lower than it is. If I take a day's average, the system is using 18% CPU. I would expect a lot lower at idle? Am I wrong in thinking this? What is causing this find command to keep running? Any suggestions?