OverlordQ Posted July 31, 2023 Removed the /tmp/fix.common.problems/scanRunning file, which was dated 4 days ago, and then the page load started working again.
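For anyone hitting the same stuck page, here is a sketch of the fix OverlordQ describes, with an age check added so a scan that is genuinely still running isn't killed. Only the path comes from the post above; the one-hour threshold is my own assumption.

```shell
# Remove FCP's scanRunning marker only if it is stale (older than an hour).
LOCK=/tmp/fix.common.problems/scanRunning
if [ -f "$LOCK" ] && [ -n "$(find "$LOCK" -mmin +60)" ]; then
    rm -f "$LOCK"
    echo "removed stale scan lock"
fi
```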
ljm42 Posted July 31, 2023 On 7/29/2023 at 7:07 PM, Alyred said: Quick question: Any reason that FCP still tells me that I have multiple NICs on the same IP network? One of the cards is passed through to a VM. While I've got that warning set to "ignored", it's a valid configuration. Please upload your diagnostics.zip (from Tools -> Diagnostics). Any NIC passed through to a VM would not be visible to FCP, so there must be something else triggering the notification.
Eurotimmy Posted August 2, 2023 One very minor thing: when clicking the "MONITOR WARNING / ERROR" button in the ignored list, the pop-up has the word "Read" spelled "Readd". Thank you for the great plugin!
BRiT Posted August 2, 2023 Isn't that for "Re-add", as in add back in?
tyme Posted August 2, 2023 Hello, I have this plugin installed on my Unraid server running version 6.11.1. It is being used as a NAS/file server. I usually don't check whether there are problems on my server as long as I can access its files from a Windows computer. I recently checked and noticed I'm getting a "possible hack attempt on Jul 14". I don't have this server in a DMZ; it is on the same network as all my other servers. As far as I know, I have not set it up to be accessible from outside my network. I have attached my syslog. I just recently took some of the steps listed at https://unraid.net/blog/unraid-server-security-best-practices. Is there anything else I should do? Any advice will be greatly appreciated. Thanks. syslog.txt
SixFive7 Posted August 7, 2023 Hi there, new user running Unraid on an AMD Ryzen 9 5950X. When I install FCP and let it run, I always get this error:
FCP notification: Machine Check Events detected on your server
Output of mcelog:
root@unraid:~# mcelog
mcelog: ERROR: AMD Processor family 25: mcelog does not support this processor. Please use the edac_mce_amd module instead.
CPU is unsupported
Hourly repeats in the syslog:
Aug 7 15:47:04 unraid root: Fix Common Problems: Error: Machine Check Events detected on your server
Aug 7 15:47:04 unraid root: mcelog: ERROR: AMD Processor family 25: mcelog does not support this processor. Please use the edac_mce_amd module instead.
Aug 7 15:47:04 unraid root: CPU is unsupported
However, I'm unsure whether there actually is an error, as running mcelog never yields any more detail. I know the line "Please use the edac_mce_amd module instead" is harmless, but is FCP tripping over the line after it? Or is there really something wrong with my (otherwise very stable) hardware?
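In case it helps others on Ryzen 5000: a hedged sketch of checking for machine-check events without mcelog, using the edac_mce_amd decoder that the error message itself points to. The module name comes straight from that message; the grep pattern is my assumption about how decoded events typically appear.

```shell
# Load the AMD MCE decoder the mcelog message recommends (no-op if built in).
modprobe edac_mce_amd 2>/dev/null || true
# Genuine machine-check events are decoded into the kernel ring buffer;
# if only the fallback line prints, there were no real MCEs logged.
dmesg | grep -iE 'mce|machine check' || echo "no decoded MCE lines found"
```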
DevanteWeary Posted August 9, 2023 Two questions: 1) What does the permissions fix do? What permission problems arise that need fixing? (New to Unraid.) 2) Can we get an option to exclude update notifications? I'd like to set hourly checks, but that spams me with "new version" notifications until I finally update my containers. Thank you for this tool!
Glassed Silver Posted August 15, 2023 @Squid I'm really sorry to ping you, but I haven't found any comment from you on the docker update notification spam situation so far (maybe I didn't use the search correctly or used the wrong terms). Is this an issue you're aware of, and can we expect a fix sometime soon-ish? I don't think FCP really needs to check any of my docker containers: I auto-update a lot of them and manually update some all on my own. If we could exclude that check, it would be very helpful, especially as it would reduce the processing needed. Getting daily notifications about a dozen or so containers having updates available is very unhelpful, since it buries any actual errors; I sometimes swipe away all of my Pushover notifications in bulk when I don't have time for minor stuff like that. In any case, thank you so much for this wonderful plugin!
sjrahn Posted August 17, 2023 I am assuming the write cache checking is done using hdparm. Would it be possible to use smartctl instead, so that SAS drives can also be checked? I'm not sure what the equivalent sdparm command is, if there even is one, but the command below works for both SATA and SAS drives:
smartctl -g wcache /dev/...
I ran into write issues over the last couple of days and finally narrowed it down to the fact that only 3 of my 6 SAS drives had writeback cache enabled by default. I had no idea. It's a simple addition to my write-cache-enabling script, but I had just assumed they were all enabled, since this plugin didn't report otherwise.
From the man page for smartctl:
-g NAME, --get=NAME, -s NAME[,VALUE], --set=NAME[,VALUE]
wcache[,on|off] - [ATA] Gets/sets the volatile write cache feature (if supported). The write cache is usually enabled by default.
wcache[,on|off] - [SCSI] Gets/sets the 'Write Cache Enable' (WCE) bit (if supported). The write cache is usually enabled by default.
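A rough sketch of the check sjrahn is suggesting. The /dev/sd? glob and the output parsing are my assumptions, so adjust for your controller layout.

```shell
# Report the volatile write cache state of every disk via smartctl,
# which works for both SATA and SAS (unlike hdparm).
for dev in /dev/sd?; do
    [ -e "$dev" ] || continue
    state=$(smartctl -g wcache "$dev" | grep -i 'write cache')
    echo "$dev: ${state:-state unknown}"
done
```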
HHUBS Posted August 19, 2023 Yesterday it was working, but today it's just stuck on this. Anyone having the same experience?
lync Posted August 21, 2023 diagnostics-20230821-1607.zip
diehardbattery Posted August 24, 2023 Not sure where to look... nice number of percent full though, lol... diagnostics-20230824-0926.zip
lance-tek Posted August 29, 2023 I tried to run a diagnostic report and it seems to have filled my /var/log directory. I'm not Linux-smart, but it appears to be enumerating every file on my server, and the report/download has been running all day so far. Is this how it should work? Is there anything I can do to get the report to complete instead of filling up the directory and then bombing out? (I was running the report to troubleshoot a different problem where my array seems to just go offline and require a restart.)
itimpi Posted August 29, 2023 That is the logic trying to anonymize the diagnostics. You should be able to get it to complete by switching off the anonymization, although then you need to be more cautious about posting the diagnostics publicly, and should probably only PM them to selected helpers.
razzellu Posted September 4, 2023 "/var/log is getting full (currently 100% used). Either your server has an extremely long uptime, or your syslog could potentially be getting spammed with error messages. A reboot of your server will at least temporarily solve this problem, but ideally you should seek assistance in the forums and post your..." More Information
Looking online I found this terminal command:
find /var/log/ -printf '%s %p\n' | sort -nr | head -10
The results were:
72884224 /var/log/syslog.1
58486784 /var/log/nginx/error.log.1
2449408 /var/log/unraid-api/stdout.log
122880 /var/log/samba/log.rpcd_lsad
77359 /var/log/dmesg
61440 /var/log/samba/log.samba-dcerpcd
24576 /var/log/libvirt/libvirtd.log
15496 /var/log/docker.log
10291 /var/log/libvirt/qemu/Windows 11.log
6912 /var/log/wtmp
razz-diagnostics-20230903-2219.zip
Edited April 5 by razzellu: after much trial and error, the issue turned out to be keeping my unRAID page open on two Windows machines.
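As a stopgap (my suggestion, not an official FCP fix), the two biggest files from the listing above can be truncated in place to reclaim /var/log space without a reboot. This does not fix whatever is spamming the log; that still needs to be tracked down.

```shell
# Empty the largest offenders in place; the paths are the top two
# entries from the find output above.
truncate -s 0 /var/log/syslog.1
truncate -s 0 /var/log/nginx/error.log.1
```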
atakan Posted September 4, 2023 How can I fix this? "Invalid folder cache contained within /mnt. Generally speaking, most times when other folders get created within /mnt it is a result of an improperly configured application. This error may or may not cause issues for you." More Information unraid-server-diagnostics-20230904-1010.zip
thepix Posted September 7, 2023 Hi there, can anybody help me figure out why I'm getting this error? By all accounts I shouldn't really have this problem at only 2 months of uptime. I've attached the diagnostics. babylon-diagnostics-20230907-2106.zip
Squid Posted September 7, 2023 You have a docker container which is continually restarting. It should be pretty easy to figure out which one: go to Docker, switch to advanced view to see the uptimes, and go from there to figure out why.
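A command-line alternative to the advanced-view tip, as a sketch: the --format placeholders are standard docker CLI, though which container is looping is of course server-specific.

```shell
# List every container with its status; a restart-looping one shows
# "Restarting (...)" or a very short uptime in the Status column.
docker ps -a --format '{{.Names}}\t{{.Status}}'
```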
thepix Posted September 7, 2023 32 minutes ago, Squid said: You have a docker container which is continually restarting. It should be pretty easy to figure out which one by going to Docker and switching to advanced view to see the uptime and then going from there to figure out why I seem to have a couple of candidates, "plextraktsync" and "plex-image-cleanup". I kinda need plextraktsync, so I'll stop the other for now.
couzin2000 Posted September 10, 2023 I'm not sure how to go about fixing things here, so I'll explain what happened and maybe you can help me solve it. I initially installed 2 SSDs as "cache" drives, configured as a two-drive BTRFS pool. But there were issues: BTRFS wasn't very reliable at the beginning, there were some write errors, and I was also very unfamiliar with this sort of setup. So when error messages started to show, I reacted oddly and removed one of the drives from the BTRFS pool. Instead, I created a new pool called Docker and installed the removed drive in that pool, formatted XFS. The remaining drive in Cache was untouched. I then assigned appdata to go to Docker rather than leave it where it was. In my mind this was all logical: use Cache for caching and Docker for all things Docker-related. Fast-forward a year, and now I run into issues with a Minecraft server manager Docker app (Crafty-4): error messages saying that certain files have to be written to BTRFS but the folders are read-only. This is not part of my intended setup. I recently installed "Fix Common Problems", and here are the two related errors I'm not sure how to address:
- Unable to write to Docker image. Docker image either full or corrupted. Investigate here: [DOCKER SETTINGS]
- Share appdata set to use pool Docker, but files / folders exist on the cache pool.
I'm not sure how to address the first, because on the Docker page my BTRFS scrub error summary says "no errors found", and the drive is far from full. For the second, I *am* willing to correct the issue, but I want to make sure I have copies of everything first, and I'm not sure how to create a backup of all this. sebunraid1-diagnostics-20230909-2330.zip If anyone is up for it, please sound off here and let me know how I could proceed.
suritech Posted September 11, 2023 "Download of appfeed failed. Community Applications requires your server to have internet access. The most common cause of this failure is a failure to resolve DNS addresses. You can try to reset your modem and router to fix this issue, or set static DNS addresses (Settings - Network Settings) of 208.67.222.222 and 208.67.220.220 and try again. Alternatively, there is also a chance that the server handling the application feed is temporarily down. See also this post for more information. Last JSON error recorded: Syntax error" I've just installed a new cache, moved stuff back over, and am trying to download apps. My DNS was set to Cloudflare's, but Community Apps wouldn't load. Then I changed my DNS to 8.8.8.8, and it worked for about 2-3 minutes before going back to the same issue. Then I changed my DNS to 8.8.4.4; it worked for another 2-3 minutes, then the same issue again. I'm losing my mind! Under Fix Common Problems I've now even got the "Unable to communicate with GitHub.com" error (try changing DNS to 8.8.8.8 / 8.8.4.4). The IP is static; no firewalls, VMs, Pi-holes, etc.
yogy Posted September 11, 2023 On 8/19/2023 at 11:38 AM, HHUBS said: Yesterday, it was working then today it just stuck on this. Anyone having the same experience? I have the exact same issue. I'm on 6.12.4, and no update of the app is available.
xokia Posted September 11, 2023 I go to click install and get nothing. If I go to installed plugins, it does not show up. What am I missing? *Edit:* Got it to work; for some reason the USB drive had switched to read-only. Now I have it installed. So this will keep a log after a crash? Where does it save the log file?
yogy Posted September 12, 2023 14 hours ago, yogy said: I have an exact same issue. I'm on 6.12.4. No update of the app available. And today the issue is suddenly gone. Strange.
Squid Posted September 12, 2023 20 hours ago, xokia said: So this will keep a log after a crash? Where does it save the log file? For logs after a crash you have to enable the syslog server (mirror to flash).