aburgesser

Everything posted by aburgesser

  1. Ditto. I'm starting to wonder if I'm shadow banned. I can't even get an acknowledgement, much less answers to my questions on the forums. Very disappointing when the community was one of the hyped features 5 years ago. It's not hard stuff either: things like "is this behavior expected" or whether a proposed solution is worth trying.
  2. No one publishes idle draw specs. The best you can do is see what people claim as idle power for different systems and/or apply rules of thumb. TDP would estimate max draw if it weren't gamed so much; neither Intel nor AMD is being reasonable with that spec anymore. The conventional wisdom is to slap an i3 on an ITX board if you want a power-efficient Unraid box. There are more exotic solutions, but they will compromise on cost/convenience/features.
  3. Problem is you need the boot loader to be able to pull the GUID. LT has been resistant to calls for alternate binding (or even boot migration) for a while now. I would use the internal storage as a cache pool. If size is a concern, don't use it to cache the array shares, but instead host the appdata and system shares.
  4. As always, you need to declare your use case. A 6700 can be quite a decent workhorse. If you can settle for the older Quick Sync, it will even transcode H264 and H265 (8-bit). I still think a 6300 is a fairly good Unraid CPU due to the ECC support. Just remember to drop the overclock for stability. 16 GB of RAM is serviceable until you want to run more than a bare-bones VM. There are enough M.2 slots to run mirrored cache pools, which is a plus for redundancy. I've happily commissioned less capable systems myself.
  5. I was mostly asking about SMB behavior, as less technically inclined users of my servers may execute a move between shares using SMB. It hammers the network, but moving files via Explorer is their workflow, and getting other users to change behavior is typically not viable. I will probably continue to use Krusader; I find it a faster workflow than the Dynamix File Manager. I'll just need to be mindful of this behavior. I'll probably do cache to disk transfers to avoid the issue (see the sketch below) unless Mover or Fix Common Problems is extended to handle files orphaned on a cache. Edit: SMB behavior is a true move. Explorer interprets the two shares as not the same mount and defaults to a copy for drag and drop.
  6. Is it intended for Fix Common Problems to address files orphaned on a cache from a move between shares (https://docs.unraid.net/unraid-os/manual/troubleshooting/#mover-is-not-moving-files)? "Check for files stored within a cache pool that isn't allowed within a share's settings" implies it is, but it does not seem to catch such problems on my server even when I set the plugin to always spin up disks for checks.
  7. Thanks itimpi. This behavior is a perfect example of what is documented. I now vaguely recall seeing advice to copy/delete rather than move, but it looks like the tutorials I watch still use move operations in Krusader. Doh! I guess I will need to be more mindful of this going forward. I wish there were some kind of mover invocation/behavior that would resolve files stranded on a cache pool. Yes, I could temporarily change the cache settings of the destination, but a share can only have one cache pool, so it fails as a general solution and requires further follow-up for full resolution. Will I also see this behavior if I do the move via SMB?
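     What I mean by a cache to disk transfer is a move done at the disk level rather than through the user shares, so nothing gets stranded on the wrong pool. The paths below are placeholders; adjust the share, pool, and disk names for your own system before trying anything like this:
        # placeholder paths: move a file from the source share's cache pool
        # straight onto a disk included in the destination share
        mkdir -p /mnt/disk1/DestShare/some
        mv /mnt/cache/SourceShare/some/file.mkv /mnt/disk1/DestShare/some/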
  8. FYI, if you want to get really cheeky:
  9. Haswell can support 10 series cards just fine. Looking at AoE2, it seems like it didn't have specific GPU requirements so any GPU you can pass through should be fine. I would troubleshoot your pass-through problems first. You will never get going if you can't get the GPU to the guest OS.
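     As a starting point, I would confirm the card ends up in its own IOMMU group (Unraid lists the groups under Tools > System Devices, if I remember right). From the console, something along these lines should print every device grouped by IOMMU group:
        # list PCI devices by IOMMU group (run from the Unraid console)
        for d in /sys/kernel/iommu_groups/*/devices/*; do
            g=${d#*/iommu_groups/}; g=${g%%/*}
            echo -n "IOMMU group $g: "
            lspci -nns "${d##*/}"
        done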
  10. E5 systems are known to be moderately power hungry at idle, and newer hardware will typically be more power efficient. Honestly, the jump from Sandy Bridge to Skylake will only save 10-30 W at idle, looking at references I see. Dropping the dGPU is an easy win, though. The 6500 is a die shrink and you will lose 2 cores migrating to it. It also offers (an old version of) Quick Sync for transcoding. Dropping the dGPU and using the iGPU for live encoding would further reduce power needs. Keep in mind that Nvidia hardware encoding was typically better quality than Intel's for a given year, so you may notice a drop in quality. In addition to losing 8 threads, you will also need to drop ECC moving to the 6500. You may notice less horsepower if you do migrate. Honestly, if you are married to Skylake, I would give a look at finding a 6300. That gen was weird since the i3 cannibalized the low end Xeon market.
  11. The critical question is whether he wants to do live transcoding with Plex. Live transcoding is best done with some kind of hardware support (which is still improving every generation). A newer system also leaves more resources to handle scope creep. However, new rack mount systems get pricey fast, especially if you wind up with a SAS backplane. The described workload is NAS + Plex hosting. If they don't need live transcodes, anything from the past decade and a half can do that, and a second hand server starts looking real appealing.
  12. Need more specifics:
     - Is this just a Plex host or is it also responsible for storage?
     - How much storage?
     - Do you need live transcoding?
     Regardless, this topic comes up regularly, so please do a search first. A good low-effort starting place is an Intel i3 ITX system with no expansion cards:
     - Intel iGPUs have decent hardware transcoding
     - ITX boards have less real estate to power
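     For rough context on that 10-30 W: a 20 W idle reduction running 24/7 works out to about 20 W x 8,760 h = 175 kWh per year, so it's worth pricing that against your local electricity rate (and the cost of the new parts) before committing to the migration.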
  13. I was hoping for at least a negative response. It seems like this could be addressed by recompiling a custom kernel, but I am unfamiliar with the process (though tech proficient in general). Can anyone advise for/against such action? I would like this feature so I can mount an SD card formatted on the Steam Deck as an unassigned device. If someone knows of an alternate workflow that would realize this I would love to hear it.
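     For anyone who lands here with the same problem, the quickest way I know of to confirm the flag is missing is to grep the running kernel's config, assuming it is exposed (if /proc/config.gz isn't there, the config file shipped with the kernel source should show the same thing):
        # check whether the running kernel was built with CONFIG_UNICODE
        zcat /proc/config.gz | grep CONFIG_UNICODE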
  14. I apologize for being on a lagging version. I have not allocated the down time to update yet.
     Conditions:
     - Source share includes Disk 2 and has prefer cache set
     - Destination share includes Disk 1 with no cache
     - Use Krusader to transfer file(s) from the source share to the destination share
     Observed Behavior:
     - Files are available in the destination share
     - Files are located in the source share's cache (along the expected path for the new share)
     - Files are absent from the disks included in the destination share
     - Mover does not appear to move the files from cache to Disk 1. Double checked with a manual mover invocation through the scheduler.
     Expected Behavior:
     Moved files in this scenario should eventually wind up on the included disks of the destination share. I see several options to realize this:
     - When files are moved to a share that has "cache: no" set, files will be expunged from the cache (once they are on an included disk).
     - Alternatively, mover should review all cache pools for each share for these kinds of remnants.
     Related behavior (hypothesis only, no observation): What happens when a share with files in cache drops the cache pool?
     finarfin-diagnostics-20230926-2222.zip
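     In the meantime, my thinking for a manual cleanup is to work at the disk level: list what is stranded on the cache under the destination share's path and shuttle it onto an included disk. Share, pool, and disk names below are placeholders, and I'd sanity check the paths before trusting this on anything important:
        # placeholder names: see what the destination share still has sitting on the cache pool
        find /mnt/cache/DestShare -type f
        # then relocate it to a disk included in that share
        rsync -av --remove-source-files /mnt/cache/DestShare/ /mnt/disk1/DestShare/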
  15. I can confirm RGauld's findings. I also had Web GUI responsiveness issues that were resolved by (just) removing "Unraid Connect". I find myself wishing there was a disable plugin option now. I will probably keep it off this server since it is stuck behind a double NAT anyway. I don't see the value of the plugin if I can't remote connect.
  16. It is not currently possible to mount an ext4-formatted volume with the casefold feature. This appears to be because the CONFIG_UNICODE flag is not enabled in the kernel configuration.
  17. First off, apologies if this has been addressed. When I finally found an error to search for that appeared unique, I got no hits in the thread. I would like to mount a micro SD card formatted by the Steam Deck. This would give me an easy way to read/write to it from my primary, Windows workstation. This use case rules out a reformat as a solution. When I try to mount the partition, the plugin processes for a little bit, but when it completes the partition is not mounted and the mount button is still available. Turning on the Udev log I see the following:
     Jun 3 11:21:15 finarfin unassigned.devices: Partition found with the following attributes: serial_short='201606271856', device='/dev/sdj1', serial='Generic-_USB3.0_CRW-SD_201606271856-0:2', uuid='0a93b999-b1fa-4f02-ad29-230f6bb55249', part='1', disk='/dev/sdj', label='Generic- USB3.0_CRW-SD', disk_label='', fstype='ext4', mountpoint='/mnt/disks/Generic-_USB3.0_CRW-SD', luks='', pass_through='', mounted='', not_unmounted='', pool='', disable_mount='', target='', size='0', used='0', avail='0', owner='user', read_only='', shared='1', command='', user_command='', command_bg='', prog_name='', logfile=''
     Jun 3 11:21:15 finarfin unassigned.devices: Mounting partition 'sdj1' at mountpoint '/mnt/disks/Generic-_USB3.0_CRW-SD'...
     Jun 3 11:21:15 finarfin unassigned.devices: Mount cmd: /sbin/mount -t 'ext4' -o rw,noatime,nodiratime,nodev,nosuid '/dev/sdj1' '/mnt/disks/Generic-_USB3.0_CRW-SD'
     Jun 3 11:21:15 finarfin kernel: EXT4-fs (sdj1): Filesystem with casefold feature cannot be mounted without CONFIG_UNICODE
     Jun 3 11:21:18 finarfin unassigned.devices: Mount of 'sdj1' failed: 'mount: /mnt/disks/Generic-_USB3.0_CRW-SD: wrong fs type, bad option, bad superblock on /dev/sdj1, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call.'
     Jun 3 11:21:18 finarfin unassigned.devices: Partition 'Generic- USB3.0_CRW-SD' cannot be mounted.
     It looks like the casefold feature is the root cause of the failure, but I am not familiar with it. A casual Google search suggests that CONFIG_UNICODE is a kernel flag that needs to be enabled. Can someone confirm or correct my analysis? Do I need to open an Unraid feature request to address this?
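     For completeness, the feature flag itself should be visible on the card's superblock with something like the following (device name taken from the log above; double check it against your own system before running anything):
        # confirm the ext4 partition really has the casefold feature set
        dumpe2fs -h /dev/sdj1 | grep -i 'features'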
  18. Sweating dGPU efficiency for an E5 system is putting the cart before the horse, no? If you want the best possible dGPU HW transcoding, pick the cheapest SKU of the latest gen of your preferred flavor. HW transcoding improves in quality every generation, but does not improve with a bigger chip, and the smallest chip with the support will probably be the most power efficient. If you want to prioritize power efficiency over quality, do some research or switch to an iGPU. If you already transcode your library for space savings, you may not need HW transcoding at all; I find I really need to crush the stream bitrate before I trigger transcoding on my server. Alternatively, an E5 should have enough power to real-time encode a pair of streams on its own.
  19. Energy efficiency is often at odds with raw power, and it is affected by several factors:
     - Workload: all else being equal, doing more will take more power, especially long-running/frequent workloads that stop the race to idle.
     - Architecture: newer process nodes tend to have better efficiency metrics.
     - Peripherals: any PCB not contributing to your use case detracts from efficiency. This applies to motherboards (ITX > mATX > ATX, typically). The number and type of drives affect this too. Also, don't forget to run the server headless.
     - Downtime: spin downs and scheduled sleep cycles will reduce power needs at the cost of availability.
     If you want a lazy route to a powerful server that is fairly efficient:
     - Workstation ITX motherboard with an Intel i3/Xeon CPU (iGPU for HW transcoding) of a recent architecture
     - No expansion cards
     - 16-32 GB RAM
     - 1 NVMe cache drive, 1 parity drive, 2-4 array drives; array drives should be 5400 RPM class
     If you want to take it a step further, you need some uncommon components. There is a community somewhere on reddit that tracks low power Unraid builds.
  20. I would appreciate a second opinion before I throw together some hardware... My father recently passed. While he was ill, his NAS fell into disrepair (the RAID has some bad SAS disks; it's unclear ATM if the array is lost, and I want an offload target ready before I attempt data recovery). I am going to scavenge some of his unused hardware (with some new hard drives) to make a more appropriately sized, Unraid-based NAS/light server for my mother that I am confident administering and troubleshooting remotely. I would appreciate any perspective on the pros/cons of the different hardware I may have overlooked. The motherboard options are listed below (all have suitable processors, but I would need to boot them up to review CPU info).
     The new server will run:
     - Array: 4 sealed WD enterprise 4 TB drives. Will repurpose an old SSD for cache. Considering 1-2 new 6+ TB drives for parity.
     - SMB shares
     - Backup target
     - Plex for streaming to PC and media sticks on local wireless (no live transcode or external hosting for now)
     - BD+DVD MakeMKV to Handbrake auto rip + (CPU) transcode
     I am thinking the Haswell motherboard is the better pick. It should run leaner, faster, support more drives, and has USB3. The only pro of the Xeon motherboards is ECC memory. Is there a scenario where that can outweigh the cons that come with those systems? Thank you for your time and consideration.
     ASUS Maximus VI GENE:
     - Neutral: probably has an i7-4770K (stock clock); 8 SATA, 1 M.2
     - Pro: better expected power efficiency than the Xeon systems; DDR3 RAM (at least 16 GB, probably 32); most onboard SATA ports; USB3; an i7-4770 typically produces 3.5x the benchmark score of an X5000 processor
     - Con: no ECC
     Supermicro X7DAL-E:
     - Neutral: 2 LGA 771 Xeon processors (X5000 chipset); 6 SATA
     - Pro: FB-DIMMs have ECC (IIRC); 24 GB RAM
     - Con: power efficiency (65 nm + FB-DIMMs); Supermicro branch population is weird (2 channels only have 1 DIMM each); no onboard USB3
     Intel S5000PSL:
     - Neutral: 2 LGA 771 Xeon processors; 6 SATA
     - Pro: FB-DIMMs have ECC (IIRC); 32 GB RAM
     - Con: power efficiency (65 nm + FB-DIMMs); no onboard USB3
     Buy a new motherboard:
     - Neutral: I have a spare 2124G
     - Pro: checks all the boxes
     - Con: workstation boards for the 2124G are harder to find these days; it would probably be $500 for board and RAM, and the delay would adversely impact the schedule I have.
  21. Hello, I've recently added a Windows VM to my server so I can use it to help process some Avisynth scripts I'm working through. I've also got a nice pipeline where the VM outputs the intermediate files to the Handbrake docker's watch folder, which re-encodes them with my desired settings and discards the intermediates before I pile up too many hours of lossless HD video. My frustration is that Handbrake appears to be starving the VM of CPU resources. This renders the VM very laggy and unresponsive (though happily it has neither caused any errors nor starved the Handbrake pipeline). I would like to deprioritize the Handbrake container below the VM to restore some of the VM's UI responsiveness. I was able to make Handbrake play nicely with Plex via "cpu-shares", but that appears to only apply with respect to other dockers. I reject the idea of CPU pinning/isolation to address this because I would like Handbrake to use all available cycles when nothing else is running. Is setting Handbrake's niceness the proper answer? How would that interact with cpu-shares? Thanks for your consideration. I apologize if this was addressed elsewhere, but a search for unraid+"cpu-shares"+niceness gave few results. I also wasn't sure if this belonged in a more specific forum given it bridges a few areas.
     Unraid 6.9.2
     jlesage/handbrake
     Windows 10 VM
  22. Are you still open to feedback about the severity of the self update check? I really did not need the adrenaline rush of seeing an error just because the app is out of date. I don't think the plugin should consider itself any different from other plugins for out-of-date notices. Over-escalating errors is a good way to get that issue type set to ignored. I find this unfortunate because now I will only notice FCP is out of date when I go to update another plugin. In my opinion, the only time an out-of-date app should result in an error is if it either leaves a known vulnerability open or has a known data risk. Both are more nuanced to identify than I expect the app to be able to handle.
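     To make the question concrete, the kind of thing I'm imagining is renicing the container's encode process from the Unraid host, something like the untested sketch below (this assumes the worker process is HandBrakeCLI, and I don't know yet how it would interact with cpu-shares):
        # untested: lower the scheduling priority of the container's encoder from the host
        for pid in $(pgrep -f HandBrakeCLI); do
            renice -n 10 -p "$pid"
        done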
  23. 120mm fans will give better performance metrics in every category save space. It's roughly twice the fan area using 120 mm over 80 mm. Always seek to fit the largest fans your hardpoints allow.