greg_gorrell

Members
  • Posts

    167
  • Joined

  • Last visited

About greg_gorrell

  • Birthday 07/02/1989

  • Gender
    Male
  • URL
    gstech.solutions
  • Location
    Bradford, PA

  1. No, pihole is on the same untagged, native 10.0.0.0/24 VLAN as the Unraid eth0 interface. There are no firewall rules or other networking issues at play either. I just tested on my second Unraid server, say at 10.0.0.3. I created the VLAN bridges exactly as I did on the first server and I can successfully ping the pihole. This leads me to believe that it isn't normal behavior and there is a configuration issue with the first Unraid box.
  2. Hey all, I was hoping someone would be able to explain some behavior that seems a little odd to me. I am currently on the latest production release of Unraid, and this particular server runs a pfsense VM and a Manjaro VM. The pfsense VM is, say, 10.0.0.1 and has a dual-port NIC passed through, connected on the WAN side to my modem and on the LAN side to a managed L3 Cisco switch. The other day, I created some VLANs on my network to segment some traffic like most do. In Unraid, I set up the VLANs as well, each having its own br0.vlan00 interface, and I moved the Dockers which are exposed to the internet onto their own VLAN. I have a pihole running at, say, 10.0.0.80 which currently provides DNS for the whole network. Before creating the VLANs, my Unraid server and all the Docker containers would resolve DNS through the pihole. After creating the VLANs, though, nothing on the Unraid box can reach the pihole. Keep in mind, although I have created VLANs, I have not moved the pihole yet and both Unraid and the pihole are on the same 10.0.0.0/24 LAN network; I have only added additional br0.vlan00 interfaces. Is there a reason that I am unable to even ping the pihole IP, either from the Unraid host at 10.0.0.2 or from the Dockers or VMs utilizing br0? After moving the Dockers to the "DMZ" VLAN, obviously a different subnet, they are able to resolve requests from the pihole and ping it as well. Perhaps this is more a Linux behavior than an Unraid one, but I have not encountered it before as this is my first foray into VLANs on a Linux box, so could someone confirm this is typical? Thanks in advance!
  3. What a moron, I skimmed the whole thread before posting and I still missed that. Sorry guys! Edit: I wouldn't say it is "ignored," just not reflected in the GUI in my case.
  4. I began testing this build on my HP ML350 Gen8 in hopes the temperature values would be fixed in the GUI with the smartctl changes I noticed in the code. Unfortunately this hasn't done anything to resolve the problem of the default "Automatic" setting not pulling the SMART data (incorrect syntax error), and when set manually I am still not getting the temperature data on the Main or Dashboard tabs. I have also noticed a new problem in this build that wasn't in 6.8.x: if I go to an individual disk and set the SMART controller manually, after clicking "Apply" and reloading the page, the SMART data will update and reflect the change, but the GUI still shows "default." Just playing around with these settings, I have noticed they can be somewhat buggy as well. I have hit Apply and refreshed the page on two occasions now only to have the settings revert to the default. Perhaps someone could try to reproduce this error.
  5. Does anyone know why I would be unable to access Heimdall all of a sudden? It worked fine for a year, then over the past week, since we had a power outage, every time I try to log in I get a 419 page informing me that my session expired. I just access this container inside my network via IP, so no reverse proxying or DNS issues. I also tried deleting the keys and still had no luck. What might be causing this and how can I force a new session?
  6. Yes, I was getting all SMART data when running manually. Looking at the files in /var/local/emhttp/smart, it is clear that the underlying command hard-coded into the Dynamix webgui is not running properly, nor is it affected by the settings entered either per disk or globally. See my last post though; I explored the code a little last night and believe this is something currently being resolved that I would expect to see in the next beta release. Lots of lines were removed and only one added, and I am not even sure how to interpret it correctly: if (file_exists("$file") && exec("grep -Pom1 '^SMART.*: \K[A-Z]+' ".escapeshellarg($file)." |tr -d '\n' 2>/dev/null", $ssa) && in_array("$ssa",$failed)) { as referenced in this commit on July 12: https://github.com/limetech/webgui/commit/6f8507e5474e9b77fef836ee7379a1bee25a7a5b
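For anyone curious what that one added line actually does, you can try the grep by hand against a sample .ssa file. The health line below is the standard line smartctl prints; the /tmp path is just for the demo:

```shell
# Fake .ssa file containing the health line smartctl normally emits
printf 'SMART overall-health self-assessment test result: PASSED\n' > /tmp/sample.ssa

# Same extraction as the committed code: \K discards everything up to
# the last ": ", and -o prints only the run of capitals after it
grep -Pom1 '^SMART.*: \K[A-Z]+' /tmp/sample.ssa
# → PASSED
```

So the in_array("$ssa",$failed) check that follows is presumably watching for a value like FAILED rather than PASSED.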
  7. I just spent the whole evening trying to figure out what is going on here. After playing around in the Global Disk Settings, trying various SMART controller types and testing, I think I have some answers. I set the SMART controller to HP cciss globally and it wouldn't work. I tried a few others, eventually landing on "SAT" (SCSI/ATA Translation, as it turns out). To my surprise it returned all of the data I was looking for. Every one of the fields is populated and temperature data works, although as in OP's case, it doesn't transfer to the Main page. As mentioned before, the Dynamix webGUI pulls the data from the /var/local/emhttp/disks.ini file. You would think this information might be related to the data in the files contained in the /var/local/emhttp/smart/ directory, which seem to query the SMART data from the disk, but it appears that the files in the smart folder are not connected in any way. In my case, the directory contains a basic text file for each disk with what would be the smartctl output for that device, as well as a file of the same name with a .ssa extension. No matter what SMART controller I select in the disk settings, the information in these files does not change and just reports the following: After doing some more searching, none of these files have anything to do with the Dashboard or Main pages. It seems that others have had issues with the Areca controllers, where it has been stated that the SMART reporting on the Dashboard and Main pages is hard-coded in emhttp and the parameters are not user-definable. I checked out the webgui code and I want to say they are currently working on a fix for this, as there was a commit last month removing the smartctl command from the monitor script. The extent of my PHP knowledge is reading a couple chapters of PHP in 24 Hours 20 years ago when I was in middle school, so I could be full of bs here.
I just put way too much time into this last night when I could have been implementing a script to alert me to issues via another method. Hopefully some devs can chime in on what is going on, or anyone here who is familiar with the codebase and wants to check it out. It definitely seems like something too simple to just leave broken, especially when that's kind of an important feature for this OS.
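To illustrate where the Main/Dashboard values appear to come from, here is a rough sketch of pulling temperatures out of a disks.ini-style file. Note the excerpt is hypothetical — I have not verified the exact field names Unraid writes, only that the GUI reads this file:

```shell
# Hypothetical disks.ini excerpt; the real file lives at
# /var/local/emhttp/disks.ini and the field names here are assumed
cat > /tmp/disks.ini <<'EOF'
["parity"]
name="parity"
temp="34"
["disk1"]
name="disk1"
temp="36"
EOF

# Print each disk's temperature field (the value between the quotes)
awk -F'"' '/^temp=/ {print $2}' /tmp/disks.ini
# → 34 and 36, one per line
```

If a controller type makes smartctl itself fail, a file like this would simply never get a usable temp value, which would match the blank Main page.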
  8. I just picked up an ML350P Gen8 and am going through the same issue currently. I put the P420i controller into HBA mode after getting all the firmware up to date, and have just been doing some testing before I try to migrate everything over. I have a hodgepodge of SATA drives installed, different brands, sizes, etc., and haven't noticed any issues with the fan speeds since running Unraid, so I am taking that as a good sign. Unraid lists no temps or SMART data for any of the drives, but I am able to retrieve it via the smartctl command as you mention. One of my ideas was to use iLO and set an alarm for this, but of course there is no easy way to install the agent that reports this data into Unraid, although I am considering trying to convert the rpm into a txz if there is no easy way to obtain the data in the GUI. If you are still working on this, I would be glad to exchange notes here and see if we can't figure out a way to solve this. It's so strange to me that Unraid can be so polished in many aspects and fall flat in others, especially when it's paid software and this mature. Regardless, I am going to try some things tonight and will share what I come up with if noteworthy.
  9. That is odd; in Chrome it does not work either and I simply get ERR_SSL_PROTOCOL_ERROR. My configuration is pretty much the same, although some of the directives are defined in the ssl.conf and proxy.conf files. Just to verify, I removed the proxy-conf file I created for mediawiki and added the config you shared above. I now get the exact same results in Firefox and Chrome, though without the ability to connect via IP internally anymore. Any thoughts there? Could it be an issue with letsencrypt and/or the cert maybe?
  10. I am using this container on Unraid behind the Linuxserver.io letsencrypt container. I see that you recommended that in your documentation, which is very good I might add. I have learned a lot from your notes, so thanks for that. I am still new to Nginx though, and am having some issues getting it to work properly with mediawiki. I have tried using the dokuwiki proxy config in the letsencrypt container and changing the proto, IP, and port as needed, but am still having no luck. Currently, I am able to access the mediawiki container via IP internally, but when attempting to use the domain name I end up with an error: SSL received a record that exceeded the maximum permissible length. Error code: SSL_ERROR_RX_RECORD_TOO_LONG Can you share a configuration that works please? I would assume I am just directing it to the container IP:PORT with proxy_pass, but I can't seem to figure out the issue. I will note that I have a password enabled as well, just in case it is relevant. Thanks!
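In case it helps anyone following along, this is the shape of proxy-conf I would expect to work, modeled on the dokuwiki one shipped with the letsencrypt container. The server_name and upstream IP:PORT are placeholders for your own setup, not verified values:

```nginx
# mediawiki.subdomain.conf — sketch only; adjust names and IPs to your setup
server {
    listen 443 ssl;
    server_name wiki.*;

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        # plain http here, since the mediawiki container itself is not doing TLS
        proxy_pass http://10.0.0.100:80;
    }
}
```

For what it's worth, SSL_ERROR_RX_RECORD_TOO_LONG usually means plain HTTP is being answered on a port the browser expects TLS on — for example a `listen 443;` missing the `ssl` flag, or the domain resolving straight to the backend's HTTP port instead of the proxy.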
  11. I came here with the exact same problem after noticing that my /books folder had no content and the database and books were being written to the /appdata path. If you would literally read the last page of this thread (this page), you would find the answer just as I did and wouldn't have to ask the same question. Searching and reading the threads helps keep the forums from being cluttered with questions that have already been answered. As stated before, when you run the wizard upon setup, make sure you have the /disk/books path set correctly. Your content is being written to the image because the path for the library was changed via the wizard. As mentioned above, reading the posts in this thread should clue you into this.
  12. Well, I tried both, but it was going in and editing existing entries and adding the http/https to the beginning of the link. It didn't work until I removed the container and appdata and reinstalled. Only then was I able to use http for each link and have it work. Thanks for the reply!
  13. Anyone know why every item I create on the page ends up with a hyperlink of "10.0.0.2:8443/10.0.0.100:80" instead of simply the address of the container or URI the item is for (10.0.0.100:80 in this instance)? I have tried completely wiping the container and various ways of entering the correct information, as well as searching for a file to edit in the configs, with no luck.
  14. I think development has ceased on this product. I was using malvarez00's docker, and while I didn't have the logs filling up, three different times in the past month I would try to log in and it would hang on starting database services. I found no issues in the logs, and each solution I tried ended in a different problem. That support thread was way worse than this one, and no one was even checking it. I attempted to install this Docker and it will not even let me log in. I fire it up for the first time and create a local account, then it boots me to the login screen. Once I enter my creds, it hangs. Unfortunately I think our investment in these cameras was a bad idea, and with no more support for the software, Ubiquiti has really lost my respect. I was starting to turn clients onto them for the ease of use, great price point, and amazing community, but I have to say I will be searching for alternatives now. What a joke.
  15. Anyone still monitoring this thread? I upgraded to an SSD for my cache drive and migrated all Dockers over, with my only issue occurring with this one. When booting, it was hanging on the "Starting Database Services" screen; now it only loads up to the screen "Error Starting Software Update Service, Read Operation to Server 127.0.0.1:7441 failed on database av." I assumed there was an issue with port 7441 so I added that, and still no luck. If this were a normal UniFi NVR I could SSH in and run some commands to manually update the database. Unfortunately I am not familiar enough with Docker to get these same commands to run. Any suggestions without wiping and starting fresh? Below is the most relevant info I can find in the logs:
1580054336.913 2020-01-26 10:58:56.913/EST: ERROR Error starting service: Read operation to server 127.0.0.1:7441 failed on database av in main
com.mongodb.MongoException$Network: Read operation to server 127.0.0.1:7441 failed on database av
    at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:302) ~[mongo-java-driver-2.13.2.jar:?]
    at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:273) ~[mongo-java-driver-2.13.2.jar:?]
    at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:84) ~[mongo-java-driver-2.13.2.jar:?]
    at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:66) ~[mongo-java-driver-2.13.2.jar:?]
    at com.mongodb.DBCursor._check(DBCursor.java:498) ~[mongo-java-driver-2.13.2.jar:?]
    at com.mongodb.DBCursor._hasNext(DBCursor.java:621) ~[mongo-java-driver-2.13.2.jar:?]
    at com.mongodb.DBCursor._fill(DBCursor.java:726) ~[mongo-java-driver-2.13.2.jar:?]
    at com.mongodb.DBCursor.toArray(DBCursor.java:763) ~[mongo-java-driver-2.13.2.jar:?]
    at org.mongojack.DBCursor.toArray(DBCursor.java:404) ~[mongojack-2.5.1.jar:?]
    at org.mongojack.DBCursor.toArray(DBCursor.java:389) ~[mongojack-2.5.1.jar:?]
    at com.ubnt.common.super.new.D.o00000(Unknown Source) ~[airvision.jar:?]
    at com.ubnt.common.super.new.D.o00000(Unknown Source) ~[airvision.jar:?]
    at com.ubnt.common.super.new.D.new(Unknown Source) ~[airvision.jar:?]
    at com.ubnt.common.super.new.D.new(Unknown Source) ~[airvision.jar:?]
    at com.ubnt.airvision.data.AbstractManager.findAll(Unknown Source) ~[airvision.jar:?]
    at com.ubnt.airvision.service.OoOO.A.Ö00000(Unknown Source) ~[airvision.jar:?]
    at com.ubnt.airvision.service.update.UpdateService.new.super(Unknown Source) ~[airvision.jar:?]
    at com.ubnt.airvision.service.update.UpdateService.Ó00000(Unknown Source) ~[airvision.jar:?]
    at com.ubnt.airvision.service.D.Ò00000(Unknown Source) ~[airvision.jar:?]
    at com.ubnt.airvision.service.D.Ó00000(Unknown Source) [airvision.jar:?]
    at com.ubnt.airvision.Main.o00000(Unknown Source) [airvision.jar:?]
    at com.ubnt.airvision.Main.start(Unknown Source) [airvision.jar:?]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_181]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_181]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
    at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243) [commons-daemon-1.0.15.jar:1.0.15]
Caused by: com.fasterxml.jackson.databind.exc.InvalidFormatException: Can not construct instance of com.ubnt.airvision.data.Camera$Platform from String value 'GEN4L': value not one of declared Enum instance names: [GEN1, GEN2, GEN3L, GEN3LM, AIRVISION, AIRCAM, UVC]
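The "Caused by" line at the bottom looks like the real problem: the database contains a camera whose platform string, GEN4L, is newer than anything the bundled airvision.jar declares, so deserialization dies before the update service can start. A toy membership check shows the same failure mode (the list is copied from the exception message):

```shell
# Platform names the jar declares, per the exception message above
known="GEN1 GEN2 GEN3L GEN3LM AIRVISION AIRCAM UVC"
reported="GEN4L"   # what the camera record in the database actually says

# Mirror the enum lookup that Jackson performs when it throws
case " $known " in
  *" $reported "*) echo "platform recognized" ;;
  *)               echo "unrecognized platform: $reported" ;;
esac
# → unrecognized platform: GEN4L
```

If that reading is right, no amount of port or database fiddling would help; the software would need to be updated (or the offending camera record removed) before the service can come up.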