jit-010101

Everything posted by jit-010101

  1. Had a quick glance at the code base. Honestly not sure if I want to do it - C++ and Qt aren't my strongest suits, and they come with a lot of ugliness in that specific code base (many external dependencies, originally based on Qt5, which is already out of support). Speaking from the low-level side, the customization features are also tailored to the Raspberry Pi bootloader - it should be U-Boot like on most SBCs, as far as I remember. But then again this topic interests me quite a bit (in terms of learning), so I decided to create something from scratch in Rust (yes): https://github.com/thiscantbeserious/usb-creator-rs The upside is I charge 0.00 - if you want to help with the planning and design, feel free to contribute via feature requests. If it turns out useful / works as intended, I'll gladly rebrand / customize it for Unraid too. But no promises - I was initially planning to keep quiet about it, but here we are ... no pressure!
  2. Yes, absolutely - I did a quick search but didn't find the topic with the requirements.
  3. > assuming open to internet Unraid was and is never meant to be exposed to the Internet directly. You should always honor such design decisions and work with them - say, run Unraid in a VM on Proxmox in such cases. If you need internet exposure, you can use a pull solution like Syncthing; if you want to expose services, there are much better options based on Kubernetes/Portainer on hardened server OSes with LTS kernels, which can run in a separate VM and just use the NAS storage as a mount. Docker containers are still updatable there as well ... so the individual containers never fall behind. It sounds a lot more like you want to use it for edge cases the easy way - that's not going to work out too well. Can't you simply downgrade to the last known stable version and call it a day? Like outlined above: Unraid was never designed to be exposed to the internet directly. There's a ton of equipment out there with far older kernels hidden behind proprietary firmware, with much bigger holes than that.
  4. I'm honestly not sure - maybe someone from the team can tell, but if you're using pools and not the array, it might not even use shfs. 🤔 I personally never encountered such issues myself - I'm "just" using it to host Paperless, Nextcloud (58k pictures/videos), Home Assistant, Node-RED, Calibre and things like that ... and never had an issue. But this is all on a classic array with individual BTRFS drives (SSDs only) of different sizes ... I've yet to try any media-streaming use cases myself ...
  5. Eh well, that explains it - server hardware, maybe a not-so-power-efficient PSU, all combined with 12 drives, probably none of them in spin-down ... there is a LOT of potential to save energy here, hehe (just not with FreeNAS). I remember there was an issue related to *arr apps with shfs (the underlying technology used as the base for Unraid's merged folders), so I'd highly recommend testing the trial to the fullest so you can run into any issues up front and get an idea of what you might need to do.
  6. Right now on mobile, but my second backup server is running Unraid too and is 100% ZFS on spinning rust. I'm not using RaidZ though - individual ZFS drives in the array with a parity drive. That gives me the ability to do snapshots and lets the drives spin down - especially when combined with the Folder Caching plugin (which reduces reads from the HDDs by caching recently/often used folders into RAM). A ZFS single-drive array has the downside of no automatic on-the-fly bitrot repair, and I wish RaidZ support were already there - especially with multiple arrays that would be even more powerful; as I understand it, that's planned for later. Right now ZFS does have some downsides, e.g. if you manually create folders within your shares via the command line, this is not handled automatically (yet) and can lead to weird problems / data loss. All that in mind: what the heck is wasting away those 200W? Both my servers use less than 60W together ... (a rough snapshot sketch follows below)
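     Since the snapshot angle comes up a lot: a minimal sketch of what per-disk snapshots can look like from the Unraid shell, assuming an array disk formatted as ZFS exposes a pool named disk1 with a dataset per share (pool/dataset names here are assumptions - adjust to your own layout):
     ```bash
     # Confirm the actual pool/dataset layout first (names below are assumptions)
     zfs list

     # Take a dated snapshot of one share's dataset on one array disk
     zfs snapshot disk1/backups@$(date +%Y-%m-%d)

     # List snapshots and, if ever needed, roll back to one
     zfs list -t snapshot disk1/backups
     # zfs rollback disk1/backups@2024-01-01
     ```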
  7. I had two Plus licenses - I just upgraded one to Pro. I can totally understand the change to the licensing model and will still purchase additional licenses as needed - subscription fees included - and will still recommend it to anyone looking for a server/NAS/media OS. There needs to be a steady income stream for you guys to keep funding development - otherwise at some point there will be no more versions. I'm 100% sold and satisfied that Unraid is a much better general-purpose home server system, for my use case, than any of the other options I've seen and used so far (like, for example, DSM - or much more complex setups with Proxmox/k8s, or any of the countless server distros I've been through over the years).
  8. IcyBox uses SATA controllers connected to crappy USB controllers that are specifically troublesome on Linux - that's also why they don't support Linux. I have one too, and it's simply unfixable by hardware design. Send it back if you can; otherwise sell it and use plain and simple SATA, or eSATA if you can't. If you really need to use USB, look into TerraMaster, because they support Linux officially or are at least using hardware that works properly without any bigger bugs (from the countless reviews I've seen, not from my own experience). Then again, I won't recommend USB for your array unless you really can't avoid it ... as a backup target or pool it might be fine ...
  9. It could be your NVMe:
     Jan 10 10:23:37 Tower kernel: BTRFS info (device nvme0n1p1): relocating block group 2234621362176 flags
     Jan 4 16:18:00 Tower kernel: BTRFS info (device nvme0n1p1): device deleted: missing
     But there's also a SEGFAULT and OOM @ Plex in your syslog above ... Seeing this is an N5105, maybe it's related to a stuck C-state or a BIOS setting making it unstable. Is this a NAS board perhaps, or an Odroid H2/H3? Before looking into the board/CPU, I'd try to properly rule out the typical culprits first - memory above all. I would personally run the memory test for at least 48 hours to be sure it's really not missing anything ... some of these are really hard to track.
     Jan 3 00:20:01 Tower kernel: Modules linked in: xt_nat xt_tcpudp veth xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype br_netfilter xfs md_mod zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) tcp_diag inet_diag ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs af_packet 8021q garp mrp bridge stp llc r8169 realtek i915 x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm drm_buddy i2c_algo_bit ttm drm_display_helper drm_kms_helper crct10dif_pclmul processor_thermal_device_pci_legacy processor_thermal_device drm crc32_pclmul mei_pxp crc32c_intel ghash_clmulni_intel sha512_ssse3 processor_thermal_rfim processor_thermal_mbox sha256_ssse3 sha1_ssse3 mei_hdcp intel_rapl_msr wmi_bmof aesni_intel intel_gtt processor_thermal_rapl i2c_i801 nvme crypto_simd intel_rapl_common cryptd agpgart int340x_thermal_zone intel_cstate i2c_smbus ahci mei_me
     Jan 3 00:20:01 Tower kernel: i2c_core libahci intel_soc_dts_iosf nvme_core syscopyarea mei sysfillrect sysimgblt tpm_crb iosf_mbi thermal video fb_sys_fops fan tpm_tis tpm_tis_core wmi backlight tpm intel_pmc_core acpi_pad button unix [last unloaded: realtek]
     Jan 3 00:20:01 Tower kernel: ---[ end trace 0000000000000000 ]---
     Jan 3 00:20:01 Tower kernel: RIP: 0010:unlink_anon_vmas+0x67/0x137
     Jan 3 00:20:01 Tower kernel: Code: 6b 08 48 89 ef 49 8b 75 00 e8 42 e5 ff ff 49 8d 75 50 48 89 df 48 89 c5 e8 7c 42 fe ff 49 8b 45 50 48 85 c0 75 0a 49 8b 45 48 <48> ff 48 38 eb 29 48 8b 43 18 48 89 df 48 8b 53 10 48 89 42 08 48
     Jan 3 00:20:01 Tower kernel: RSP: 0018:ffffc9000e6e7ce8 EFLAGS: 00010246
     Jan 3 00:20:01 Tower kernel: RAX: ffff0081049f1068 RBX: ffff8881e4f9a3c0 RCX: ffff8881e4f9a3e0
     Jan 3 00:20:01 Tower kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
     Jan 3 00:20:01 Tower kernel: RBP: ffff8881049f1068 R08: 000000000000000c R09: ffff88822e6d1800
     Jan 3 00:20:01 Tower kernel: R10: 0000000000000001 R11: ffff88822e6d1808 R12: ffff88821f5ea990
     Jan 3 00:20:01 Tower kernel: R13: ffff8881049f1068 R14: ffff88821f5ea9c8 R15: dead000000000100
     Jan 3 00:20:01 Tower kernel: FS: 0000000000000000(0000) GS:ffff88856c080000(0000) knlGS:0000000000000000
     Jan 3 00:20:01 Tower kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Jan 3 00:20:01 Tower kernel: CR2: 000014dc800f31e0 CR3: 0000000195136000 CR4: 0000000000350ee0
     Jan 3 00:20:01 Tower kernel: note: startCustom.php[14117] exited with preempt_count 1
     Jan 3 00:20:01 Tower kernel: Fixing recursive fault but reboot is needed!
     In the end it can also be the power supply - that's the third culprit I'd look at ... (a few quick shell checks for the NVMe/BTRFS side are sketched below)
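     A rough sketch of how I'd start ruling out the NVMe and the BTRFS cache pool from the shell - device and mount-point names below are assumptions, adjust to your system:
     ```bash
     # NVMe health: media errors, available spare, temperature
     smartctl -a /dev/nvme0

     # Per-device read/write/corruption error counters on the BTRFS cache pool
     btrfs device stats /mnt/cache

     # Run a scrub and wait for it, printing per-device stats when done
     btrfs scrub start -Bd /mnt/cache
     ```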
  10. If I go back across the posts here, shfs seems likely to be only the layer it ends up crashing in. See this report here: https://github.com/libfuse/libfuse/issues/589#issuecomment-857484504 libfuse has basically been in life-support mode since 2021. This could likely hit all *nix & BSD systems using a FUSE filesystem, depending on the edge case they trigger. Interestingly, mergerfs is mentioned in the same issue: https://github.com/trapexit/mergerfs/issues/965 I think I encountered something similar on Debian 9 with OMV, SnapRAID + MergerFS myself, which was much more serious in terms of file loss (which is why I moved to Unraid). So be careful if you try to use MergerFS/SnapRAID (similar issues). There are so many layers that can possibly cause these issues, and it could very well be caused by shfs itself too, given that open-source development of it was discontinued 20 years ago (heh). The good thing with Unraid is that you can easily downgrade - and stay on a version that remains stable for you. You don't have to upgrade. Things like this will happen with all distros. Just trying to apply some common sense here; I very much respect your decision / move too and also see the need for Unraid to evolve more towards ZoL and possible replacement layers ... I personally went the Synology -> Kubernetes + distributed FS route before going back to OMV and, in the end, Unraid ... Enterprise solutions flagged as stable are so full of issues that I consider this all a breeze in comparison (yeah, I know how that sounds - it might very well be a user issue too 😅)
  11. Unbalanced is something I honestly wouldn't recommend if you value your files (sorry, just being honest). If you need a multi-threaded move that's much faster than rsync, use rclone copy ... you can install rclone via the Nerd Tools plugin. Afterwards run an rsync pass for the proper permissions and be done with it (a rough sketch follows below). Terminal only though, nothing point-and-click. In addition to that: Seafile - and anything similar - is something you shouldn't run on spinning rust unless you know what you're doing. You will need to control the block/pack size and adjust it accordingly, because any HDD will struggle with that number of files. (I have far more files, by the way, and never had that issue on either of my two Unraid machines so far.) You shouldn't move files to /mnt/user/[share] anyway unless you know what you're doing - it's much smarter to target the individual disks yourself; that way you also avoid the overhead. So /mnt/disk[n]/sharename ... If you're using ZFS, you might want to make sure the pool/dataset is actually created on the individual disk before doing so and that it's not just a plain folder ... if there are still troubles, well, then this might be a bug. --------- On topic: besides this, it all seems like symptoms, not the root cause. Trying to fix an error you have no root cause for is like searching for a needle in a haystack (even more so if you don't have access to the hardware). So if moving to Ubuntu helped you - so be it. I personally (almost) lost a ton of stuff from two workstations with enterprise hardware running Ubuntu LTS in the last two years due to botched updates - good luck with that. But then again those were powered by much more modern hardware, so not comparable. Then again, in the last 4 years I've also had several issues with Ubuntu Servers and their auto-updates on legacy hardware ... If I were to recommend anything, it would be something like openSUSE Tumbleweed - or a more "trustable" stateless distro. Good luck with your venture, and don't let the negative emotions eat away at you.
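     To make the rclone-then-rsync idea concrete, a minimal sketch assuming hypothetical source/target paths on individual disks (adjust paths, transfer count, and flags to your setup):
     ```bash
     # Fast multi-threaded copy straight onto one target disk (bypassing /mnt/user)
     rclone copy /mnt/disk1/share /mnt/disk2/share --transfers 8 --progress

     # Follow-up rsync pass to carry over permissions, ownership, and timestamps
     rsync -aHAX /mnt/disk1/share/ /mnt/disk2/share/
     ```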
  12. You'll have to turn off Turbo too (as mentioned in the powertop readme), and probably also limit the PL1/PL2 power states - keeping the PL1 time window within 32 seconds. The exact values you'll have to find yourself (I think mine are set to 5000 and 8000). If it doesn't reach C8, it might be related to your NVMe drive ... have you tried removing that yet? (A quick way to check where you actually land is sketched below.)
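     For checking what you actually reach, a small sketch from the shell, assuming an Intel platform with the intel_rapl driver loaded (the sysfs paths can differ per board; values shown are just what Linux reports, not recommendations):
     ```bash
     # Generate a one-minute powertop report; its "Idle stats" section shows package C-state residency
     powertop --html=powertop.html --time=60

     # Current PL1/PL2 limits as Linux sees them (microwatts) and the PL1 time window (microseconds)
     grep . /sys/class/powercap/intel-rapl:0/constraint_*_power_limit_uw
     grep . /sys/class/powercap/intel-rapl:0/constraint_0_time_window_us
     ```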
  13. Those consumption levels seem quite fine - 20W for 4x HDD and 2x SSD is quite good. I reach 15.8W with a lot of tuning and C8 on an N5105 board with the same number of drives. I'd say the difference is that you use 3 RAM sticks, maybe C6 vs C8 - and obviously the drives themselves. If you remove everything but one RAM stick, you might see lower consumption - have you also enabled GEAR2 for the memory? That could give you a little more in power savings too ...
  14. @scy01d @Marshsr Here you go ... no guide or disclaimer for now on how to reach C8 (I've started from scratch 6 times now and always reach it again). I did add a quick readme.txt to the root directory of the USB stick. You should unpack this to a cleanly FAT32-formatted USB stick, then simply reboot and it should flash everything ... Be sure to READ THE README first! Edit: ffs, tons of misspellings in the readme ... not going to fix that. Edit #2: If you're scared of the contents and/or have your own EFI boot files ready, all you basically need is the /EFI/R208_Mod.bin file. jit-010101_20231122_bkhd_nas_5105_bios_mod.zip Edit #3: Here's my current build and consumption with this:
  15. Backup server:
      Fractal Node 304 case - fans replaced with 2x Noctua NF-B9 Redux 92mm/3-pin at the front and 1x Noctua NF-A14 140mm/4-pin/PWM at the back, connected to the PWM header on the mainboard
      Mini-ITX BKHD-N5105-NAS board with 6 on-board SATA ports (ASM1166), with modded BIOS
      32GB (2x16GB) Crucial RAM kit CT2K16G4SFD8266 - DDR4 2666MHz CL19
      200W Inter-Tech 88882190 PicoPSU + 12V/120W power supply (no-name)
      Drive temps are down to <25 °C across the board now after the NF-B9 upgrade - the NF-B9 Redux work lovely in this case ...
      Array - spin-down after 15 minutes, mover enabled every 8h + Mover Tuning plugin to move files > 2 days old:
      2x 18TB Seagate X20/18TB - ST18000NM003D
      1x 10TB WD white-label @ 5400RPM - WDC_WD100EZAZ
      1x 8TB >SMR hell< Seagate Archive - ST8000DM004
      ZFS mirror - cache + system (Docker + VM):
      1x 500GB NVMe - Samsung_SSD_970_EVO_500GB_S466NX0M914394B
      1x 512GB SATA SSD - Crucial_CT512MX100SSD1_14300CB78BC7
      ~15.8-16.8W currently over the day with drives spun down, semi-idle - goes up to ~20W with a single disk (18TB) not spun down, and averages ~45-48W under load with multiple disks, with very short peaks to about 60W ... (there is also staggered spin-up if you enable the SATA options and set spin-down to just one drive, but I haven't verified that lately - might be interesting for PicoPSU users ...)
      Hourly backups are stored via restic to the SSD cache - running restic-rest-server.
      Keep in mind that's only possible with a modded BIOS that unlocks all the options needed to enable GEAR2, enable ASPM, ... and make it properly go to C8. Apparently there's also an ECC support option (which is seemingly enabled by default - maybe I accidentally did that); that makes no sense, but I don't have any SO-DIMM UDIMM ECC RAM myself, so I never tried whether it actually works ... I doubt it does.
      Write/read speeds average around ~200MB/s.
      I'll share the BIOS later in a separate thread going into the specific settings ... so that others might be able to reproduce it.
      Edit - BIOS shared here for the time being:
  16. I'm a little bit confused - wasn't the proposal to disable secondary storage? Or otherwise just swap them - which would mean that as soon as the mover is triggered, it would still move files it isn't supposed to (just in the other direction). Does the ignore file list support regexp/wildcards? The extension list doesn't ... I already tried that. Isn't that unworkable if you want to always read from both primary and secondary storage? Say you have some movies stored on SSD and some on HDD - and you want to keep them there. That's why disabling the mover per share would be useful, is all I'm saying to begin with ... because with the current options of this plugin you can only get so far with parameters and options, as far as I can see - but not cover the very simple use case of disabling the move for everything in this one share (no matter what you call it, to be honest).
  17. Because that disables the shfs layering too (/mnt/user/[share]), so you'd have to choose which disk to write to manually, for example, and lose read access via that file path entirely - e.g. in Docker containers. (At least that's how I understand Unraid works - correct me if I'm wrong, but please read to the end even if that part is not completely accurate; I've been using Unraid for a few months at best, so my insight is a bit limited.) Better said: it does not disable the path completely, but it then points at the primary storage only, not giving you the option to build on the concept Unraid invented with your own logic (with your proposal you'd basically have to write to the specific disks manually - and they're not part of /mnt/user/[share], so if you mounted that into a Docker container you could not access all movies, for example). That's where the second part comes into play -> custom mover logic -> not completely disabling it, but basically letting us run a custom shell script on the move trigger, similar to how you likely already do it for the "files older than" mover, which - as I understand it - triggers a custom script you already ship (?). Use cases could be: write to SSD as primary storage, then a custom mover script moves only some files to HDD. The same can be inverted - primary storage on HDD, with a custom mover script moving only some files to SSD, for example movies that get streamed frequently (e.g. kids watching a specific movie x times a week). This could be used for frequently accessed files in general; since atime is not enabled by default, people could implement their own custom move logic, be it enabling atime on their ZFS datasets or using their own tooling to decide how/which files are frequently accessed. I proposed it like this specifically because atime is not enabled by default in Unraid. (A rough sketch of what such a hook could look like follows below.)
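     Purely to illustrate the idea, a hypothetical per-share mover hook - nothing like this exists in the plugin today; the share name, paths, and the "promote anything touched in the last 7 days" rule are all made up, and mtime stands in for atime since atime is off by default:
     ```bash
     #!/bin/bash
     # Hypothetical hook: would be called instead of the normal mover for the "movies" share.
     # Promotes recently touched movies from the HDD array disk to the SSD pool.
     SRC="/mnt/disk1/movies"    # array disk holding the share (assumption)
     DST="/mnt/cache/movies"    # SSD pool path for the same share (assumption)

     find "$SRC" -type f -mtime -7 -print0 | while IFS= read -r -d '' f; do
         rel="${f#$SRC/}"
         mkdir -p "$DST/$(dirname "$rel")"
         rsync -a --remove-source-files "$f" "$DST/$rel"
     done
     ```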
  18. No, I understood that correctly - but this would automatically move every single file from the array to the cache, which is not what I'd like to do, since it doesn't fit my use case. Same goes for disabling secondary storage - because that's actually really useful if you want to manually move some files but not everything. This is also why neither of your approaches would really work in this case. Right now you can disable the mover globally; being able to do so on a per-share level too would be massively helpful, is all I'm saying. Pair that with an option to run a script when the mover is triggered for that share and it would be even more useful.
  19. Not possible - obviously that's the first thing I tried and looked for. The share storage settings should be left untouched, and it makes no sense to disable them, because they serve as the very base layer the mover works on.
  20. Is there a way to stop/disable the mover completely just for a specific share - similar to how you can disable it globally? Or, even better, to execute a custom script instead - or both combined?
  21. I remember reading about that issue with the exact same PSU somewhere - basically, he solved it by unscrewing the front-USB PCB and cutting off the left half, which is unpopulated by default. I have seen pictures of it ... just can't remember where ... maybe it helps to first try removing it and see if the PSU fits before you go that far ... Edit: Hmm, no - after checking your build thread, it seems that was an SFX-L PSU, not a full-sized ATX. I think I was referring to the SF450/750 SFX series from Corsair instead.
  22. I use the Node 304 right now, and while a lot of the points are true, the fans it comes with aren't that loud, to be honest. They don't have the highest static pressure either, but I'm still using the front fans. I just replaced the back one with a Noctua PWM and that's more than sufficient for me ... the front fans run at the lowest (or was it mid?) speed 100% of the time and I hear nothing ... ---- Ever since I switched to a PicoPSU, 50% of the case feels like a waste of space, to be honest, so I'm thinking about putting lots of SSDs in there. Drive and CPU temps sit around ~32-35 °C, really almost never going over 40 °C even in the summer ... airflow is quite OK in that regard (albeit not perfect). However, your point about fitting it on a shelf is 100% spot on - it's really long. Too long for what it could be optimized for - similar to the N2. The N2 seems to have perfected that form factor - almost ... just barely, if only it weren't missing that one drive slot. They could've saved another few centimeters simply with the front design or how they integrated the PSU - which is also why I was planning to get an N2/N3, but budget-wise I don't have the money to spend right now, and from a practical side I don't really know if it makes that much sense, given that the Node 304 offers more potential space for additional SSDs ... I wonder why Lian Li no longer releases any good NAS cases - like the PC-M25, which could fit an mATX motherboard just fine ... there are lots of cases nowadays built for overclockers - that's for sure.
  23. Yep - towards Firefox or Edge, at least as a second browser to fall back to. Yeah, well, it could - WireGuard is included with Unraid via Settings -> VPN Manager ... if you want it even easier, there is also a Tailscale plugin, which is basically a peer-to-peer WireGuard network spanning across your "tailscaled" devices ... I find the latter really useful, for example when I have to take care of maintenance on my parents' devices.
  24. This sounds like a classic RTL8125B load issue - they are known for random disconnects and packet corruption ... otherwise I see no reason why SMB/NFS should really crash. That's just a wild guess though ... but nothing unusual with these NICs (which also come with C-state sleep issues on various kernels). I'd personally try to rule that out first by using another, less problematic NIC, especially if you have lots and lots of transfers coming in (a quick way to check which NIC/driver you're on is sketched below). Otherwise you've left out some important specs that would let one really judge what might be at fault here. For example, the cache drive(s) might also be a bottleneck, depending on what you use, or the RAM ... but it could also be an issue with the BTRFS storage itself ... Or are you perhaps using something like an Odroid H3 here with an onboard NIC and no PCIe ports? I know why I'm staying away from these really tempting boards ... going as low as 1.3W standby, but once you throw some load at them there are lots of issues to discover. Case in point: I had the Odroid H2 in the past and had power issues which fried two expensive enterprise SSDs and the 5V rail on the board. Go straight with a 10GbE SFP Intel or an old 1GbE RJ45 Intel - 2.5GbE NICs are all potential issue monsters, both from Intel and Realtek.
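     A quick, hedged way to confirm which NIC/driver is actually in play and whether it's dropping packets under load - the interface name eth0 is an assumption, check ip link for yours:
     ```bash
     # Identify the onboard NIC and the kernel driver bound to it
     lspci -nnk | grep -iA3 ethernet

     # Driver and firmware version of the interface in use
     ethtool -i eth0

     # Error and drop counters - numbers rising under load point at the NIC
     ethtool -S eth0 | grep -iE 'err|drop|crc'
     ```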