MadMatt337
  1. Here is what I had; it looks like mine was likely the same issue as above, since in my case it is the API log that filled up as well (a sketch of the command that produces a listing like this follows the last post below).
         0     /var/log/pwfail
         72M   /var/log/unraid-api
         0     /var/log/preclear
         0     /var/log/swtpm/libvirt/qemu
         0     /var/log/swtpm/libvirt
         0     /var/log/swtpm
         0     /var/log/samba/cores/rpcd_winreg
         0     /var/log/samba/cores/rpcd_classic
         0     /var/log/samba/cores/rpcd_lsad
         0     /var/log/samba/cores/samba-dcerpcd
         0     /var/log/samba/cores/winbindd
         0     /var/log/samba/cores/smbd
         0     /var/log/samba/cores
         3.6M  /var/log/samba
         0     /var/log/plugins
         0     /var/log/pkgtools/removed_uninstall_scripts
         4.0K  /var/log/pkgtools/removed_scripts
         24K   /var/log/pkgtools/removed_packages
         28K   /var/log/pkgtools
         0     /var/log/nginx
         0     /var/log/nfsd
         0     /var/log/libvirt/qemu
         0     /var/log/libvirt/ch
         0     /var/log/libvirt
         77M   /var/log
  2. Hi, I did a little bit of digging myself, but nothing stood out to me as out of the norm that would be filling up my log. Everything is working as it should, and I have not had this issue in the past several years running pretty much this same setup. Nothing has changed since the last restart other than updating to 6.12.6 23 days ago, which matches my current uptime; I have had uptimes of 4 months or more before and never had this come up. Can anyone shed some insight into what I should be looking for? Diagnostics attached. jarvis-diagnostics-20240113-2035.zip
  3. I noticed today for the first time that one of my cache drives filled up before my nightly move, and although I had the Mover Tuning setting "Move All from Cache-yes shares pool percentage:" set to 80%, it did not trigger the move automatically. Is there a known glitch with this? Maybe it is related to the change in how pools are handled, since the "cache: yes" terminology is no longer used as such?
  4. Not sure if it is just me, as I don't see any reference to it in previous posts, but I have noticed my dockers are no longer automatically updating after backup. As far as I can tell this started happening after the 2023.08.16 update. The log shows that the dockers are all up to date, but in this particular log I have 5 different dockers with updates available. backup.log
  5. I agree with C4RBON regarding temps; even the cheap add-on heatsinks make a huge difference, and IMO heat is one of the biggest killers of any SSD outside of pure write endurance. That is, if your motherboard does not already have heatsinks for the NVMe drives. I have 3 NVMe drives in mine and all have been fine for over 2 years now: 2 are WD SN750s and one is just a WD SN550, which actually sees the most traffic as my media cache drive (300TB written, 200TB read), but the temps stay low (the highest I have seen is 42C) and I have had 0 issues. Two sit under the motherboard's built-in heatsinks and one uses an aftermarket heatsink; temps are pretty similar. I have heard good things about the new WD Red SN700 drives as well: still decent speeds, but 2000TBW on a 1TB drive compared to the 600TBW of most conventional drives. Probably my next drive when my media one dies.
  6. Odd, I have been using the RCs right from 6.10 RC2 on and have never had a permission issue, with no modifications to the base script. I did not upgrade from 6.9.2 or anything lower, though; I did a fresh install of 6.10 RC2 and have been updating since then. That being said, I did not let the script create any of my folders within the mergerfs folder; I made them myself after the script had everything mounted. Maybe that is the difference in my case?
  7. Lots of different resources out there if you do a quick Google search. https://wiki.lime-technology.com/UnRAID_Manual_6#Physical_to_Virtual_Machine_Conversion_Process http://kmwoley.com/blog/convert-a-windows-installation-into-a-unraid-kvm-virtual-machine/
  8. I would add what exactly you do with your main computer: gaming? Competitive gaming? Video/photo editing? Etc. I have my server running with a 12600K, 3 1TB NVMe drives, and 6 4TB HDDs. I use it for storage, Plex and the supporting arr's, plus a dedicated Windows 10 VM that runs 100% of the time and that my wife uses 8-10 hours a day for work, mainly for basic computing tasks (Excel, Word, browsing, video conferencing, email, etc.). The VM only gets 4 cores / 8 threads and 12GB of RAM, one NVMe dedicated to it, a PCIe USB card passed through (as I could not pass through any of the motherboard's controllers due to IOMMU groupings), and a little GT 1030 passed through for video. I have not done any performance testing, but she has not noticed any day-to-day performance difference from when she was using my bare-metal 5900X/RTX 3080 gaming PC, which is obviously a drastically more powerful machine. I also played around with the VM a little before handing it over to her for work, and it felt snappy and responsive in all the day-to-day tasks I put it through during testing. So I would say it greatly depends on the use case for the VM and how much of the available resources you let it use. But from what I have heard there are people out there running full gaming VMs off an Unraid server with minimal performance differences; I have not tried this myself.
  9. You could go this route; I ran it for a while when I wanted to keep an eye on things, and it worked very well for me.
  10. Just a thought I had for a possible visual addition to this fantastic plugin: highlighting the row when hovering over a script, to make it easier to identify which script you are on when you are over at the far right, in the log section for example. Kind of like what is implemented in the new Dynamix File Manager plugin, or how rows are highlighted in the Main and Shares tabs of Unraid 6.10.0-rc4.
  11. Yeah, I really like the highlighting on other pages like Shares and Main, but on the Dashboard it looks pretty odd and feels clumsy to me.
  12. I don't do a ton of transcoding, as most of my users are set up and able to direct play everything, but I have tested 3 simultaneous 1080p-to-720p and 1080p-to-1080p downgrades and it did not seem to stress the CPU too much; I was allowing Plex full CPU access with no pinning. I did not test any software transcoding of 4K, as I do not do/allow any 4K transcoding. I am only running a 12600K. I just have /dev/dri removed from the Plex docker settings and HW transcoding turned off in Plex itself. I have i915 blacklisted in the config file (a sketch of that config follows the last post below), and I still have Intel GPU TOP installed as well as GPU Statistics; they are not causing me any stability issues as long as the iGPU is not being used by Plex (or anything, for that matter), so I have not felt the need to remove them.
  13. This is what I am running currently, and my system has been 100% stable and running well for a couple of months now, running Plex (HW transcoding disabled), 10 other dockers for downloads, backup, etc., and a Windows 10 VM with an Nvidia GPU passed through as well as a PCIe USB controller; the VM runs full time and gets used daily as my wife's work PC for now. No complaints.
  14. I ran a quick test tonight after upgrading to 6.10.0-rc3, and I was able to lock up the server on 3 separate occasions when transcoding via Plex. This time around I was actually able to see something in my syslog (see below), unlike previously. I am assuming this is the same issue, unless I have something else going on that is causing it? I am running a 12600K.
  15. I just figured I would post here as well, since I noticed I am getting this same message flooding my syslog running 6.10.0-RC2. It appears to happen only when my Windows 10 VM is running (hard to tell, as it is nearly always running), but I can test this further if needed. I did have the VM down for a backup on Feb 24th from about 19:40 to 20:59, and the errors disappear during that timeframe, so I am pretty confident in the relation. I attached my logs for reference in case they are helpful. This is not causing me any issues as far as I can tell; the system has been working well and is stable. Should I be worried about doing anything at this time to get rid of these errors, or just ignore them for now? jarvis-diagnostics-20220302-1019.zip
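A note on the listing in post 1 above: it reads like plain du output for /var/log. A minimal sketch of an equivalent check follows; the exact command and flags originally used are an assumption.

    # Show per-directory usage under /var/log, smallest to largest,
    # so whichever directory is filling the log space stands out at the bottom.
    du -h /var/log | sort -h

On a stock Unraid install /var/log lives on a small RAM-backed tmpfs, so a single runaway log directory (like the 72M unraid-api entry in that listing) can fill it quickly.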
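On the i915 blacklisting mentioned in post 12: one common way to do this on recent Unraid releases is a modprobe.d file on the flash drive, which persists across reboots. A minimal sketch, assuming the stock /boot/config/modprobe.d location (the poster's exact config file is not shown):

    # /boot/config/modprobe.d/i915.conf
    # Prevent the i915 iGPU driver from loading at boot; a reboot is needed for it to take effect.
    blacklist i915

Deleting the file and rebooting re-enables the iGPU if hardware transcoding is wanted again later.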