MadMatt337

Members
  • Posts: 39
  • Joined

Posts posted by MadMatt337

  1. 6 hours ago, JorgeB said:

    Post the output of

     

    du -h /var/log

     

     

    Here is what I had. It looks like mine was the same issue as above, seeing as it is the unraid-api log that filled up as well.

     

    0       /var/log/pwfail
    72M     /var/log/unraid-api
    0       /var/log/preclear
    0       /var/log/swtpm/libvirt/qemu
    0       /var/log/swtpm/libvirt
    0       /var/log/swtpm
    0       /var/log/samba/cores/rpcd_winreg
    0       /var/log/samba/cores/rpcd_classic
    0       /var/log/samba/cores/rpcd_lsad
    0       /var/log/samba/cores/samba-dcerpcd
    0       /var/log/samba/cores/winbindd
    0       /var/log/samba/cores/smbd
    0       /var/log/samba/cores
    3.6M    /var/log/samba
    0       /var/log/plugins
    0       /var/log/pkgtools/removed_uninstall_scripts
    4.0K    /var/log/pkgtools/removed_scripts
    24K     /var/log/pkgtools/removed_packages
    28K     /var/log/pkgtools
    0       /var/log/nginx
    0       /var/log/nfsd
    0       /var/log/libvirt/qemu
    0       /var/log/libvirt/ch
    0       /var/log/libvirt
    77M     /var/log
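
    For anyone landing here with the same full log, a rough sketch of how the offender can be found and emptied from the console (the *.log glob is an assumption about how the unraid-api names its files, so check the directory contents first):

    # list the largest offenders under /var/log, biggest last
    du -sh /var/log/* | sort -h

    # empty the oversized API logs in place without deleting the files
    truncate -s 0 /var/log/unraid-api/*.log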

     

  2. Hi,

     

    I did a little digging myself, but nothing stood out as out of the ordinary that would be filling up my log. Everything is working as it should, and I have not had this issue in the several years I have been running essentially this same setup. Nothing has changed since the last restart other than updating to 6.12.6 23 days ago, which matches my current uptime; I have had uptimes of 4 months or more before and never had this come up.

     

    Can anyone shed some light on what I should be looking for? Diagnostics attached.

    jarvis-diagnostics-20240113-2035.zip

  3. I noticed today, for the first time, that one of my cache drives filled up before my nightly move. Although I had the Mover Tuning setting "Move All from Cache-yes shares pool percentage:" set to 80%, it did not trigger the move automatically. Is there a current glitch with this? Perhaps it is related to the change in how cache pools are configured, since shares no longer use the "cache-yes" terminology as such?

  4. Not sure if it is just me, as I don't see any reference to it in previous posts, but I have noticed my Docker containers are no longer automatically updating after backup. As far as I can tell, this started happening after the 2023.08.16 update. The logs show that the containers are all up to date, but in this particular log there are 5 different containers that actually have updates available.

    Screenshot_20230825_115257_Chrome.jpg

    backup.log

  5. I agree with C4RBON in regard to temps; even the cheap add-on heatsinks make a huge difference, and IMO heat is one of the biggest killers of any SSD outside of pure write endurance. That is, assuming your motherboard does not already have heatsinks for the NVMe drives. I have 3 NVMe drives in mine and all have been fine for over 2 years now. Two are WD SN750s and one is just a WD SN550, which actually sees the most traffic as my media cache drive (300TB written, 200TB read), but the temps stay low (the highest I have seen is 42C) and I have had zero issues. Two use the motherboard's built-in heatsinks and one uses an aftermarket heatsink; temps are pretty similar.

     

    I have heard good things about the new WD Red SN700 drives as well: still decent speeds, but 2,000 TBW on a 1TB drive compared to the 600 TBW of most conventional drives. It will probably be my next drive when my media one dies.
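
    If you want to keep an eye on your own NVMe temps from the console, a spot check like this works on most setups (the /dev/nvme0 device name is just an example; list yours with ls /dev/nvme*):

    # report the current drive temperature from the NVMe SMART data
    smartctl -a /dev/nvme0 | grep -i temperature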

  6. 11 hours ago, Bolagnaise said:

    @DZMM I @ you in this thread but for awareness of everyone else using RC, seems to be a permissions error with rc builds as LT updated how the permissions are managed from 6.9.2

     

     

    Odd, I have been using the RCs right from 6.10 RC2 onward and have never had a permission issue, with no modifications to the base script. I did not upgrade from 6.9.2 or anything lower, though; I did a fresh install of 6.10 RC2 and have been updating since then. That being said, I did not let the script create any of my folders within the mergerfs folder; I made them myself after the script had everything mounted. Maybe that is the difference in my case?

  7. 17 minutes ago, root_is_my_friend said:

    Is there an easy onboarding solution to move a harddrive with Windows 10 or 11 to a VM? That would be a great feature for beginners.

     

    Lots of different resources out there if you do a quick Google search. 

     

    https://wiki.lime-technology.com/UnRAID_Manual_6#Physical_to_Virtual_Machine_Conversion_Process

     

    http://kmwoley.com/blog/convert-a-windows-installation-into-a-unraid-kvm-virtual-machine/

     

     

  8. I would add what exactly you do with your main computer. Gaming? Competitive gaming? Video/photo editing? Etc.

     

    I have my server running with a 12600K, three 1TB NVMe drives, and six 4TB HDDs. I use it for storage, Plex and the supporting arrs, plus a dedicated Windows 10 VM that runs 100% of the time; my wife uses it 8-10 hours a day for work, mainly for basic computing tasks (Excel, Word, browsing, video conferencing, email, etc.). I only have the VM using 4 cores/8 threads and 12GB of RAM, with one NVMe dedicated to it, a PCIe USB card passed through (as I could not pass through any of the motherboard's controllers due to IOMMU groupings), and a little GT 1030 passed through for video. I have not done any formal performance testing, but she has not noticed any day-to-day performance difference from when she was using my bare-metal 5900X/RTX 3080 gaming PC, which is obviously drastically more powerful on paper. I also played around with the VM a little before handing it over to her for work, and it felt snappy and responsive in all the day-to-day tasks I put it through.

     

    So I would say it greatly depends on the use case for the VM and how much of the available resources you let it use. There are people out there running full gaming VMs off an Unraid server with minimal performance difference, from what I have heard; I have not tried or done this myself.

  9. Just a thought I had for a possible visual addition to this fantastic plugin: highlight a script's row when hovering over it, to make it easier to identify which script you are on when you are over at the far right, in the log section for example. Kind of like what is implemented in the new Dynamix File Manager plugin, or how rows are highlighted in the Main and Shares tabs of Unraid 6.10.0-rc4.

  10. 21 hours ago, tombunraider said:

     

    This sounds like the setup I'll need to go with!  The system should be powerful enough to do transcoding in CPU.

     

    I've forgotten how to disable Plex HW transcoding. Is it as simple as removing the passthrough /dev/dri in Docker settings, or did you change settings in Plex itself?

     

    Finally, how does your /boot/config/modprobe.d/i915.conf look (force_probe or blacklist), and do you have the Intel GPU TOP plugin installed or uninstalled?

    I don't do a ton of transcoding, as most of my users are set up to direct play everything, but I have tested 3 simultaneous 1080p-to-720p and 1080p-to-1080p downgrades and it did not seem to stress the CPU too much. I was allowing Plex full CPU access with no pinning. I did not test any software transcoding with 4K, as I do not do/allow any 4K transcoding. I am only running a 12600K.

     

    I just have /dev/dri removed from the Plex Docker settings and HW transcoding turned off in Plex itself.

     

    I have i915 blacklisted in the config file, and I do still have Intel GPU TOP installed as well as GPU Statistics. They are not causing me any stability issues as long as the iGPU is not being used by Plex (or anything else, for that matter), so I have not felt the need to remove them.
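
    For reference, the blacklist file itself is just a one-line modprobe directive; mine is roughly the following (using the path from the quoted post):

    # /boot/config/modprobe.d/i915.conf
    blacklist i915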

  11. 23 hours ago, accelaptd said:

    If I'm happy to wait on the iGPU P/T are there any other catches or active issues running Alder Lake or specifically an i5-12600K/i7-12600K? I've never been on the cutting edge of hardware before, but I'm also really keen to get the new system up and running.

    1 hour ago, dimes007 said:

    I'm in your camp.   I came to the conclusion that afaict, other than GPU issues, we'll be fine.   My intent is to run RC3, pass through a discrete GPU and let the CPU transcode plex until the support is fully there on 12600k.

     

    This is what I am running currently, and my system has been 100% stable and running well for a couple of months now, running Plex (HW transcoding disabled), 10 other Docker containers for downloads, backup, etc., and a Windows 10 VM with an Nvidia GPU and a PCIe USB controller passed through, which runs full time and is used daily as my wife's work PC for now. No complaints.

  12. I figured I would post here as well, since I just noticed I am getting this same message flooding my syslog on 6.10.0-RC2. It appears to happen only when my Windows 10 VM is running (hard to tell, as it is nearly always running), but I can test this further if needed. I did have the VM down for a backup on Feb 24th from roughly 19:40 to 20:59, and the errors disappear during that timeframe, so I am pretty confident in the correlation. I have attached my logs for reference in case they are helpful.

     

    This is not causing me any issues as far as I can tell, system has been working well and is stable.

     

    Should I be worried about doing anything at this time to get rid of these errors or just ignore them for now?

    jarvis-diagnostics-20220302-1019.zip

  13. I have the non-XL version of the Meshify case for my server right now. I have only ever had 6 drives in it, though, all mounted up front directly behind the fans. I have 3 Noctua 140mm fans up front and the 2 Fractal fans out back as exhaust (one in the factory rear position and one exhausting out the top). My processor (12600K) is air cooled by a Noctua NH-D15 pushing rearward out the back of the case. During heavy disk use, like a parity rebuild, the worst I have ever seen the drives get is around 33-34C. They typically sit spun up and idle around 28-29C. They are spaced out up front with an empty slot between each of them, so there is lots of airflow right now; with more drives and less spacing I am sure the temps will go up a couple of degrees, but not drastically.

     

    Side note: I also don't run the fans flat out. They are controlled by drive temps and never get spun hard, with the drives only reaching 33C or so. I am sure I could keep the drives even cooler if I let the fans go full bore.

    20211222_125445.jpg

  14. On 2/23/2022 at 3:16 AM, do_ren said:

    I heard 12th gen processors don’t work well with unraid, looking at the forum.


    The 10920X looked good for me in combination with the motherboard and the  amount of lanes and the pcie slots.

     

     

    The only real issues that I know of currently are a few NICs not being recognized depending on the motherboard, an issue with random shutdowns with the Intel i915 module if using the iGPU (not limited to 12th gen), and currently no HDR tone mapping when HW transcoding with Plex on the iGPU. Mine has been running solid for a few months now, with zero issues with the hardware or anything related to the 12th gen processor. I am using it for Plex, all my storage, a dedicated Windows 10 VM with a separate GPU and PCIe USB controller passed through, and a fair number of other small containers for downloads, backups, etc. I am sure a lot of these small quirks will be ironed out in a reasonable time (probably when 6.10 is released as stable), if they even affect your desired setup. I personally don't do any 4K transcoding, or much transcoding at all, from my Plex server, so for now I just have HW transcoding turned off (the iGPU is not passed to the Plex container and sits inactive) and have never had a crash, and the NIC on my motherboard (ASRock Steel Legend Z690) works fine running 6.10-RC2.

     

    Do you plan on adding many more PCIe devices in the future beyond what you have listed? You would also be able to eliminate that Thunderbolt card with most Z690 boards, as they typically have USB Type-C 3.2 Gen 2 right on them (assuming that is what you were going for). It seems like there would be more than enough PCIe lane headroom for your use case, plus PCIe 5.0 and Gen 4 M.2 slots. I would also consider going for more RAM if running both the gaming and streaming VMs alongside Unraid with Plex and such all at the same time.

  15. 6 hours ago, SimonF said:

    For any one that is interested I was running a stress test this morning on a Win10 vm to show clock speeds for a question that had be posted by titus1 in another thread.

     

    This was all p-cores allocated to the VM.

     

    [screenshot]

     

    With all p-cores and all bar one e-core

     

    [screenshot]

     

    and idle

     

    [screenshot]

    Mind me asking what CPU cooler you were running during this test?

  16. Not trying to nitpick or anything, just curious: why the 10920X on the Intel side? If it were me, at that price point and for the use case you are describing, I would consider the 12900K (or the KF if you don't want/need the iGPU). It would probably be my pick right now personally in the price range you are talking about, and I am running a 5900X with a 3080 in my dedicated gaming rig right now and a 12600K in my server. The 5900X will be the last of its run on that socket/chipset, as would the 10920X be (the 11 series was just a waste of time, IMO). At least with the 12900K you are on a new socket/chipset that should allow for future processor upgrades down the road without a new motherboard, should you feel the need. I would just stick with DDR4 when selecting a board; DDR5 is still super pricey and hard to get, and the performance gains of DDR5 currently seem pretty negligible compared to the price difference.

     

    That's my 2c; take it for what it's worth.

  17. Just a thought for a possible future addition to the Auto Fan Control plugin, if it has not already been suggested and is even possible to implement. I really like the plugin and it works great, but the option to set different thresholds for different drives would be a nice feature. My thinking is that NVMe drives obviously run much warmer than the HDDs; while excluding them does work, it would be nice to set a higher threshold for those drives so they can still be monitored and have the fan speeds ramp up should they warm up under heavier loads, without the fans running excessively fast as they would if the NVMe drives were kept under the same threshold as the HDDs.

  18. 1 hour ago, Jharris1984 said:

    Hi all,

     

    I just went through the steps to setup clone, google drive, etc. per the directions on this forum however I have a question. My mount_rclone folder is currently using local data, a fair amount in comparison to total. I was wondering what could be causing this or what I could check? Everything in this folder is uploaded to the cloud already.

     

    If anyone asks I’m currently running this machine off a 240gb SSD as a test. My main server (little over 150tb) has unfortunately become a hindrance physically and I need to down size it’s size into something smaller. Thinking a node 304 which can only accommodate six drives and I have 14 currently so I’m really hoping to make this work until I can use my server rack again sometime down the road.

     

    Thanks in advance.

    158188E3-3DC6-48BA-B45B-FD6C90475154.png

    My first guess would be the cache: what do you have set as the cache size in your mount script? I believe this is set to 400GB in the original script, which would obviously be a little high for your test rig with only a 250GB drive in place.

     

    RcloneCacheMaxSize="XXXX" # Maximum size of rclone cache
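
    For a drive that small you would want the cache capped much lower, something along these lines (the exact figure is only an illustration, not a recommendation):

    RcloneCacheMaxSize="100G" # Maximum size of rclone cache, sized down for a small test SSD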

  19. Not sure if it is a bug on my end or what, and please excuse my lack of knowledge on the subject, as I am quite new to this.

     

    I have a script that runs hourly. If the script is still running and the scheduler starts it again, I lose the "running" text and the "abort script" button. I have not been able to watch this closely enough to confirm whether it happens on the first overlapping run or only after 2-3 runs when the script has been going for an extended period, but it definitely disappears eventually, and both the logs and a ps command show the script still running correctly.

     

    Not a huge deal in most instances, but it would be nice to see at a glance whether it is still running and to have the option to abort it quickly if I am working on something and need to reboot. I am running the latest version of User Scripts (2021.11.28) on 6.10.0-rc2.

     

    Edit: just for reference, the script in my scenario is just an rclone move script.

  20. I have been trying to get Authelia set up but encountered this in my logs when trying to start it. I have gone back through my config a few times but can't seem to find where I went wrong. Any thoughts?

     

     

    Quote

    time="2022-01-24T02:19:40-06:00" level=info msg="Authelia v4.33.2 is starting"
    time="2022-01-24T02:19:40-06:00" level=info msg="Log severity set to debug"
    time="2022-01-24T02:19:40-06:00" level=info msg="Storage schema is being checked for updates"
    time="2022-01-24T02:19:40-06:00" level=info msg="Storage schema is already up to date"
    time="2022-01-24T02:19:40-06:00" level=debug msg="Notifier SMTP client attempting connection to smtp.gmail.com:587"
    time="2022-01-24T02:19:41-06:00" level=debug msg="Notifier SMTP client connected successfully"
    time="2022-01-24T02:19:41-06:00" level=debug msg="Notifier SMTP server supports STARTTLS (disableVerifyCert: false, ServerName: smtp.gmail.com), attempting"
    time="2022-01-24T02:19:41-06:00" level=debug msg="Notifier SMTP STARTTLS completed without error"
    time="2022-01-24T02:19:41-06:00" level=debug msg="Notifier SMTP server supports authentication with the following mechanisms: LOGIN PLAIN XOAUTH2 PLAIN-CLIENTTOKEN OAUTHBEARER XOAUTH"
    time="2022-01-24T02:19:41-06:00" level=debug msg="Notifier SMTP client attempting AUTH PLAIN with server"
    time="2022-01-24T02:19:41-06:00" level=debug msg="Notifier SMTP client authenticated successfully with the server"
    time="2022-01-24T02:19:41-06:00" level=fatal msg="Error initializing listener: listen tcp 192.168.1.12:9091: bind: cannot assign requested address" stack="github.com/authelia/authelia/v4/internal/server/server.go:183 Start\ngithub.com/authelia/authelia/v4/internal/commands/root.go:79 cmdRootRun\ngithub.com/spf13/[email protected]/command.go:860 (*Command).execute\ngithub.com/spf13/[email protected]/command.go:974 (*Command).ExecuteC\ngithub.com/spf13/[email protected]/command.go:902 (*Command).Execute\ngithub.com/authelia/authelia/v4/cmd/authelia/main.go:10 main\nruntime/proc.go:255 main\nruntime/asm_amd64.s:1581 goexit"

     

  21. Take a quick read through these 2 threads, but the Coles Notes version is that HDR tone mapping does not yet work with the 12th gen iGPUs. I recently tested this myself and was able to HW transcode a 4K HEVC/H.265 file correctly on my setup (12600K) with HDR tone mapping disabled, but not with it enabled. There have also been some stability issues reported with the i915 module (not just for the 12 series); I have only experienced that crash once with my setup so far, when trying to download via Plex (transcoding down to a smaller file size for offline mobile use). I honestly don't transcode much, as nearly all of my content is direct streamed, so I have disabled HW transcoding for now until this issue is resolved. Seeing as you are currently running okay with your setup, maybe this won't be an issue for you, but I figured it was worth noting just in case.

     

     

  22. On 1/20/2022 at 10:36 PM, SP410 said:

    Hey @MadMatt337, apologies if you have done this already.  I haven't found.  Would you mind posting you IOMMU Groups for the Z690 Steel Legend.  That board looks like a great deal right now.  

    No problem. These are the IOMMU groups with no ACS overrides; if you want to see them with overrides I can get those later tonight, I just can't restart right now.

    Screenshot 2022-01-22 110055.png
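
    If anyone prefers a text version over a screenshot, the same listing can be pulled from a terminal with a generic loop like the one below (not Unraid-specific, just standard sysfs plus lspci):

    # print every IOMMU group and the PCI devices assigned to it
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo "    $(lspci -nns "${d##*/}")"
        done
    done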

  23. 4 hours ago, wgstarks said:

    This would probably be better posted in Feature Requests (if it’s not already). 
     

    It is possible to do this with the User Scripts plugin.

    https://forums.unraid.net/topic/100202-latest-super-easy-method-for-automated-flash-zip-backup/

     

    Sorry if this was the wrong section to post this in; going by the header description it seemed relevant. Thanks for the link though, I have not seen that one before and will definitely be implementing it to make things easier.

     

    Quote

    FEEDBACK

    Want to give us some feedback about existing My Servers features?  What about ideas for future features?  Post here and share your insights with us!

     

  24. Just thought I would post here about a feature I think would be helpful in future updates, if possible. It would be nice to have more than one backup on file; I recently had an instance where an issue was not noticed before a new backup was automatically created, and I was unable to roll back to a previous backup.