Koenig

Everything posted by Koenig

  1. I never found a solution to this, so I lived with it until today, when I updated the "unraid-api-plugin"; I guess that was the culprit somehow. Now everything is snappy again and I'm happy!
  2. Will try. The thing is, it ran just beautifully until I had a power outage (I wrote "crash" earlier, but it was actually a power outage). After it rebooted it started to go slow; there's a 15-25 second delay after I click anything or change tab, and sometimes the browser (Chrome) opens a popup saying the website has become unresponsive and asks if I wish to wait or do something else. VMs and Dockers work just fine anyway; it's just annoying when it is that slow, you know...
  3. Hi! My server's web UI became very slow after a crash and I can't figure out why; I have tried rebooting it several times with no luck. It's been like this for a couple of weeks. Everything else works, so due to lack of time it has just been left running anyway, until today, when I also noticed that it won't respond to me manually invoking the mover (via the button), which I think might be related. It might also be two separate issues. Anyway, I attached my diagnostics and hope this great community can figure out what my problem is. unraid-diagnostics-20231023-1706.zip
  4. It will eventually get updated, but probably not until there's a 6.12.1 stable. If waiting is the only course of action, so be it; it's not like I will forget the issue anyway, as there's plenty to remind me in the log.
  5. Unfortunately, changing this so the setting says "ToK" on both servers didn't help; lines like the ones below continue to appear frequently:
     May 4 02:28:55 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:1182285
     May 4 02:54:55 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:1291108
     May 4 02:57:00 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:1299810
     May 4 02:58:02 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:1304178
     May 4 03:17:17 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:1384761
  6. Yeah, the screenshot is from an Unraid server running 6.11.5; that is the one that contains the share that I'm mounting on the one running 6.12.0-rc5, which is the one that is throwing CIFS lines in the log. Attaching diagnostics from both servers: "unraid" runs RC5, "nas" runs 6.11.5 (stable). A curious thing here is that "nas" (stable) also has a mounted SMB share from "unraid" but is not throwing those errors into the log, while "unraid" (RC5) has a mounted share and throws a lot of those lines in the log. EDIT: Fixed the quote. nas-diagnostics-20230503-2019.zip unraid-diagnostics-20230503-2019.zip
  7. Yeah, sorry about that. The NFS share was an attempt to get rid of the messages in the log, but I couldn't get write permissions on the NFS share (not being familiar with how Linux network shares work), so it is not being used. This is not how it has been set up in the long run, just the last couple of days, and it did not lessen or increase the amount of those messages in the log. But I agree with you, it is not meant to be that way; I have been using the SMB share for years though.
  8. It is an SMB share from an Unraid server running the latest stable.
  9. I'm still getting a lot of this in my log:
     May 3 07:40:54 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:3047123
     May 3 07:56:07 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:3110621
     May 3 08:03:55 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:3143452
     May 3 08:23:18 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:3224409
     May 3 08:34:51 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:3272533
     May 3 08:35:23 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:3274724
     Any idea what could be causing it? unraid-diagnostics-20230503-1011.zip
  10. Both servers are on 24/7; the NAS has an uptime of 44 days as of now (had a power outage back then, otherwise it would have been up since the last OS upgrade). I have the NAS set as master already, but the line with "os level = 255" I have not, so I will add it. They both act as client and server: on "NAS" I have a share "Backups" which is then mounted on the other server, just named "Unraid", and on "Unraid" I have a share "ISOs" which is then mounted on "NAS". I thought it a good way of sharing the storage of ISOs for VM deployment. I have had this setup for at least 3 years now (it should be 4 in just a month or so), but I have never before seen the log filling up with that message. I cannot be sure, but I would bet good money on it not being so even in RC2, given the time in between and the fact that I do check the logs intermittently.
  11. I have all clients on my network set to use DHCP, and then I manage all IPs and networking from my pfSense router. They have had the same IPs since the day they first connected to my network. Again, I do not know if this is the best way to go about it, but I started doing it this way long before I got my first Unraid server and it has been working well; as I mentioned earlier, though, if this is a bad way to do it I'm open to suggestions.
  12. You mean it is not set to share? I don't want to share it from the server where I use Unassigned Devices to mount it; it is just supposed to be mounted so I can map it in the Docker container "duplicati". On the other server (the backup server, "NAS") it is set to share on the network, and from there it is visible on the network. Perhaps there's a better way of doing this, but this way has worked for at least 3 years now; I'm open to suggestions if there's a better way to go about it. EDIT: Both servers are Unraid, but the "backup" server is on the latest stable version; I usually wait a good while before updating that one. EDIT2: Attached a fresh diagnostics. EDIT3: Attached a diagnostics from "NAS" as well. unraid-diagnostics-20230419-1014.zip nas-diagnostics-20230419-1021.zip
  13. Might very well be, but I've not seen them before, that I can remember... But now they are very frequent. I run "Duplicati" as a Docker container and back up some shares and appdata (different shares on different days) to another Unraid server; I have been doing that for at least a couple of years and can't remember something like this filling up the log.
  14. I'm getting a lot of these:
      Apr 18 15:35:49 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4468347
      Apr 18 15:36:20 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4492680
      Apr 18 15:43:07 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4647123
      Apr 18 15:48:56 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4751101
      Apr 18 15:58:03 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4894247
      Apr 18 15:58:34 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4915435
      Apr 18 16:03:48 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4989147
      Apr 18 16:04:50 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5040925
      Apr 18 16:05:22 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5059100
      Apr 18 16:09:37 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5093655
      Apr 18 16:10:09 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5115018
      Apr 18 16:10:40 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5139303
      Apr 18 16:11:11 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5166674
      Apr 18 16:15:55 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5216088
      Apr 18 16:16:58 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5256386
      Apr 18 16:22:48 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5362106
      Apr 18 16:23:51 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5404597
      Apr 18 16:24:23 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5429570
      Apr 18 16:29:43 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5500655
      Apr 18 16:31:17 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5564601
      Apr 18 16:35:34 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5608888
      Apr 18 16:36:37 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5651233
      Apr 18 16:37:08 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5675616
      Apr 18 16:37:39 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5690590
      Apr 18 16:43:00 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close interrupted close
      Apr 18 16:43:00 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close cancelled mid failed rc:-9
      This has been happening since upgrading to RC3 and transferring files between my servers. unraid-diagnostics-20230418-1720.zip
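      As an aside, one quick way to gauge how often these CIFS messages occur is to tally them per share from the syslog. A minimal sketch in Python; the regex is based only on the lines quoted above, and the log path is an assumption:

      ```python
      import re
      from collections import Counter

      # Matches kernel CIFS lines like:
      #   Apr 18 15:35:49 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4468347
      CIFS_RE = re.compile(r"CIFS: VFS: (\\\\\S+) Close unmatched open for MID:(\d+)")

      def tally_cifs(lines):
          """Count 'Close unmatched open' messages per UNC share path."""
          counts = Counter()
          for line in lines:
              m = CIFS_RE.search(line)
              if m:
                  counts[m.group(1)] += 1
          return counts

      # Usage sketch (path is hypothetical; on Unraid the syslog location may differ):
      # with open("/var/log/syslog") as f:
      #     for share, n in tally_cifs(f).items():
      #         print(share, n)
      ```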
  15. A question on that: does "amd_pstate=passive" not work without ACS override enabled? I don't have it enabled, and if I were to enable it, it would probably mess up my hardware passthroughs. I tried to Google it but couldn't really find any definitive answer, so I'm going to ask you, as you seem to know about this: I have an AMD 3970X; would I benefit at all if I added "amd_pstate=passive" to syslinux.cfg?
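      For reference, adding a kernel parameter on Unraid means appending it to the "append" line of the boot stanza in syslinux.cfg (editable under Main -> Flash -> Syslinux configuration). A sketch of what the edited stanza might look like, assuming the stock Unraid layout; check it against your own file before rebooting:

      ```
      label Unraid OS
        menu default
        kernel /bzimage
        append amd_pstate=passive initrd=/bzroot
      ```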
  16. Me being a Linux noob, how do I do this in Unraid?
  17. I haven't tried it, but shouldn't it be possible to use the mover to avoid doing a complete wipe? Something like this: change the share setting --> move the files to the array --> take the array offline --> reformat the cache --> change the share setting back --> move the files back to the cache.
  18. Thank you! Now I have my VM tab back.
  19. I ran into the problem with all VMs gone from the VM tab, but I can still see and edit them on the dashboard, all but one: when I try to edit my macOS VM, the edit page is just blank. I cannot remove that VM either, which I could with some of the others. I ran the update check before I updated, and it said I had one incompatible plugin, gpu-statistic, which I then removed before updating. What would be my next step to getting my VM tab back? unraid-diagnostics-20230322-0535.zip
  20. Hi! Is it possible to change the flash drive while the server is online? I just got a dreaded message that a key is missing and that the flash drive is probably corrupt, but the array and everything is working as it should for now. So is there a way to change the flash drive while the system is still "live"?
  21. No, that diagnostics was the second thing I did with the server after the reboot. (The first thing was to start the VM "Daily-driver" via my mobile phone; that VM is my daily-use computer.)
  22. Yes, I was aware of this and was thinking of mentioning it; I just hadn't rebooted due to an ongoing parity check. The parity check was at 96% or something like that, so I "forced" it, and I have now rebooted. Guess what... it now works to create the diagnostics via "Tools". EDIT: I would like to give a big thanks to you guys; this community is amazing!! unraid-diagnostics-20220727-2024.zip
  23. Via the web terminal it creates a diagnostics file, but if I use the "tool" it doesn't create a file. The file ending in 35 is the one created via the web terminal; the one ending in 41 just doesn't get created. unraid-diagnostics-20220727-1935.zip