Everything posted by JonathanM

  1. Then the TDP governor wouldn't be in use anyway. It will only kick in when the CPU is running hard for a sustained period, such as... So your overall consumption probably wouldn't show a detectable difference whether you set the limit or not. The only thing you would notice is that your transcodes would take noticeably longer with a limit set.
  2. If you want to keep the peak draw low, for heat dissipation, then keep the processor throttled as you described. If you want to conserve energy overall, then allow the processor to work as hard as it can so it completes the tasks sooner. The CPU is only a portion of the system power draw, and if the processor is throttled back it will take longer to accomplish the work given, thus keeping all the other parts of the system at full power for a longer period. As an example, imagine a 4K transcode of a large file. For the sake of the example, let's assume that forcing the processor to a low TDP makes the transcode take twice as long. That means the RAM, drives, motherboard, etc. are all kept at high power feeding the CPU the data it's working on for twice as long, instead of allowing everything to go back to a low power state much sooner. Processor TDP is a parameter for specifying how much heat is allowed to be produced over time, typically a concern for laptops and sealed systems that can only dissipate a small amount of heat compared to a desktop with large fans and plenty of space. It is NOT primarily a measure of total energy efficiency. The amount of work a processor can accomplish with a given amount of energy is largely determined by the layout and design of the CPU.
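The transcode example above can be put into rough numbers. This is a sketch with entirely hypothetical wattages and times, just to show how a lower peak draw can still mean more total energy consumed; real figures vary a lot by system.

```python
# Hypothetical numbers to illustrate the tradeoff; real figures vary by system.
BASE_WATTS = 60        # RAM, drives, motherboard, fans, etc. (assumed)
CPU_FULL_WATTS = 120   # unthrottled CPU under load (assumed)
CPU_CAPPED_WATTS = 60  # same CPU with a low TDP limit (assumed)

FULL_SECONDS = 600     # transcode time unthrottled (assumed)
CAPPED_SECONDS = 1200  # same transcode throttled: twice as long (assumed)

# Energy = power x time; watt-seconds / 3600 = watt-hours
full_wh = (BASE_WATTS + CPU_FULL_WATTS) * FULL_SECONDS / 3600
capped_wh = (BASE_WATTS + CPU_CAPPED_WATTS) * CAPPED_SECONDS / 3600

print(full_wh)    # 30.0 Wh at full speed
print(capped_wh)  # 40.0 Wh throttled: lower peak draw, more total energy
```

With these made-up numbers the capped run draws half the CPU power but burns a third more total energy, because the base system load is held high for twice as long.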
  3. That's normal without a GPU. Add a video card and use that if needed.
  4. VM performance can be VERY hardware specific; it's quite possible to get poor results from a build that looks better on paper. I recommend copying a working build part for part if possible.
  5. For the best overall energy use, don't limit TDP. You will indeed limit max draw, at the expense of keeping the rest of the system at full power longer while it waits on the crippled CPU to finish the task.
  6. Man, I wish I could afford a 100Gbps connection. The jump to 10Gbps was pricey enough. Seriously though, it's just telling you that the virtio connection isn't limited the normal way; it's all done with emulation, so the network communication is limited by the rest of the hardware, not the virtual ethernet connection. It's kinda funny reading your title of "only" though.🤣
  7. Non-bootable should work; we are trying to get the best possible blank slate for the USB creator tool to work on.
  8. Try preparing the currently licensed stick using RUFUS. https://forums.unraid.net/topic/79732-unable-to-boot-to-usb/?do=findComment&comment=740522
  9. Works with binhex' version with a tweak. In the user script, replace /app/ with /usr/local/bin/ in the 5 copy commands that clearly say DO NOT EDIT. 🤣 If you wanted to get fancy, you could find . -name webui to locate the correct destination automagically.
  10. Yep. Parity is the XOR (the even-parity sum) of the bits across all the drives at a specific address. If the parity drive were smaller, it wouldn't have a spot to keep track of that bit.
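That per-address XOR can be sketched in a few lines. This is a toy byte-wide illustration (real parity works bit-for-bit across entire drives), with made-up data values:

```python
# Toy sketch of XOR parity across data drives at one address.
# Byte-wide here; real single-parity works bit-for-bit across whole drives.
data_drives = [0b1011_0010, 0b0110_1100, 0b1111_0000]  # same address, 3 drives

parity = 0
for byte in data_drives:
    parity ^= byte          # parity = XOR of all drives' bits at that address

# Rebuild a failed drive: XOR parity with the surviving drives.
# Works because XOR is its own inverse.
rebuilt = parity ^ data_drives[1] ^ data_drives[2]
assert rebuilt == data_drives[0]
```

This is also why the parity drive must be at least as large as the largest data drive: an address that exists only on a bigger data drive would have no parity bit covering it.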
  11. No. Data drives may not be larger than either parity drive. https://wiki.unraid.net/Parity
  12. This sounds familiar; it's possible the partition starting sector is wrong with the new controller, and rebuilding the drives one at a time would fix it. This depends on parity having been valid before the drives were moved, and nothing modifying the drives in the meantime.
  13. This is controlled by the motherboard. Some boards have the option to change the primary card, some don't.
  14. Stock is RAID1. You manually rebalanced?
  15. Don't do that without being mentally prepared to undo it. Switching could break the apps.
  16. Search for YoutubeDL-Material in the Apps tab. https://forums.unraid.net/topic/87798-support-selfhostersnets-template-repository/ https://github.com/Tzahi12345/YoutubeDL-Material
  17. Make sure you clean up after the mover. Having duplicate files and paths under /mnt/diskX/appdata and /mnt/cache/appdata won't end well if you are referencing /mnt/user/appdata anywhere. You didn't make it clear whether you were moving ahead with reformatting, or trying to revert to a working system with the existing pool.
  18. Theoretically forcing things to be mapped to a specific drive instead of a user share shouldn't cause major issues, but like you said, hidden consequences. Since you have CA Backup already doing its scheduled thing, personally I'd use the procedures already set forth in CA Backup's disaster recovery and use your daily backup to restore appdata after formatting the drive. Revert the appdata settings back to cache:only if you have /mnt/cache mapped, as setting it to the default cache:prefer could end up with files on an array disk under some circumstances. Before you start the docker and VM services, do an audit of all the shares involved and make sure the data is all where it needs to be; domains and system should properly move back with the mover after setting them to cache:prefer.
  19. Pretty sure CA Backup will handle this correctly, perhaps @Squid will pop in and confirm. The only reason I have any doubt is because I've never seen the mover method fail, so like you I'm concerned about why. My supposition is that the containers in question were mapped to /mnt/cache/appdata instead of /mnt/user/appdata like they should have been.
  20. https://forums.lime-technology.com/topic/61211-plugin-ca-appdata-backup-restore-v2/ It's not as elegant as the mover solution, but it should work to back up /mnt/user/appdata
  21. You certainly can cancel the parity check if you wish, but be aware that if you have a drive failure while parity is out of sync, the rebuilt drive will have bit errors. Often that results in data loss.
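The bit-error risk is easy to see in a toy XOR-parity sketch. Assume (hypothetically) a data drive changed after parity was last computed, so parity is stale; the values here are made up purely for illustration:

```python
# Toy byte-wide sketch of why rebuilding from out-of-sync parity corrupts data.
d0, d1 = 0b1010_1010, 0b0101_0101
parity = d0 ^ d1            # parity computed while everything was in sync

d1 = 0b0101_0111            # d1 later changes, but parity is never updated

rebuilt_d0 = parity ^ d1    # drive 0 fails and is rebuilt from stale parity
assert rebuilt_d0 != d0     # the rebuilt data has bit errors
```

Every bit position where parity disagrees with the real drives comes back flipped in the rebuilt drive, which is exactly the silent corruption an up-to-date parity check is meant to prevent.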
  22. Easy first test: use a plain surge protector instead of the UPS and see how it behaves. Gut feeling says PSU, since normally PSUs can handle a remarkable amount of input voltage fluctuation without sagging the output, at least if they are running with a decent amount of reserve overhead. To have a system be that touchy leads me to think that the PSU or the motherboard power circuits are misbehaving. There is a decent amount of smoothing and filtering done on the motherboard as well, so that's a possible candidate besides the UPS and PSU.
  23. With those enclosures the SATA power connector can be the weak point. If you can, try to connect both enclosures with 1 connector from each lead from the PSU, so the two power leads on each enclosure are fed from different strands of the PSU. Ideally, you would want 4 separate leads, but your PSU may not have enough leads separated out. Possibly use one high quality 4pin to SATA power adapter on each enclosure to allow more wire capacity from the PSU.