jit-010101


  1. Had a quick glance into the code base. Honestly not sure if I want to do it - C++ and Qt aren't my strongest suits, and they come with a lot of ugliness in that specific code base (many external dependencies, originally based on Qt5, which is already out of support). Speaking from a low level, the customization features are also tailored to the Raspberry Pi bootloader - it should be U-Boot, as on most SBCs, as far as I remember. But then again this topic interests me quite a bit (in terms of learning), so I decided to create something from scratch in Rust (yes): https://github.com/thiscantbeserious/usb-creator-rs Upside is I charge 0.00 - if you want to help with the planning and design, feel free to contribute via feature requests. If it turns out useful / works as intended, then I'll gladly rebrand / customize it for Unraid too. But no promises - I was initially planning to keep quiet about it, but here we are ... no pressure!
  2. Yes, absolutely - did a quick search but didn't find the topic for the requirements.
  3. > assuming open to internet Unraid was and is never meant to be exposed to the Internet directly. You should always honor such design decisions - and work with them. Say, run Unraid in a VM, like on Proxmox, in such cases. If you need internet exposure you can use a pull solution like Syncthing - and if you want to expose services, you have much better solutions based on Kubernetes/Portainer on hardened server OSes with LTS kernels, which can run in a separate VM and just use the NAS storage as a mount. Docker containers stay updatable there as well ... so the individual containers never fall behind. It all sounds a lot more like you want to use it in edge cases the easy way - that's not going to work out too well. Can't you simply downgrade to the last known stable version and call it a day? Like outlined above: Unraid was never designed to be exposed to the internet directly. There is a shit ton of equipment out there with far older kernels hidden behind proprietary firmware, with much bigger holes than that.
  4. I'm honestly not sure - maybe someone from the team can tell, but if you're using pools and not the array, it might not even use shfs. 🤔 I personally never encountered such issues myself - I'm "just" using it to host Paperless, Nextcloud (58k pictures/videos), Home Assistant, Node-RED, Calibre and stuff like that ... and never had an issue. But this is all on a classic array with individual BTRFS drives (SSDs only) of different sizes ... I've yet to use any form of media-streaming use case myself ...
  5. Eh well, that explains it - server hardware, maybe a not-so-power-efficient PSU, all combined with 12 drives, probably none of them in spin-down ... there is a LOT of potential to save energy here, hehe (just not with FreeNAS). I remember there was an issue related to *arr with shfs (the underlying technology used as a base for Unraid's merged folders), so I'd highly recommend you test the trial to the fullest, to be able to face any issues and get an idea of what you might need to do.
  6. Right now on mobile, but my second backup server is running on Unraid too and is 100% ZFS on spinning rust. But I'm not using RaidZ - individual ZFS drives in the array with a parity drive. That gives me the ability to do snapshots and lets the drives spin down - especially when combined with the Folder Caching plugin (which reduces reads from the HDDs by caching recently/often used folders into RAM). A single-drive ZFS array has the downside of no automated on-the-fly bitrot protection, and I wish RaidZ support were already there - especially with multiple arrays that would be even more powerful; as I understand it, that's planned for later. Right now ZFS does have some downsides, like if you manually create folders within your shares via the command line - this is not handled automatically (yet) and can lead to weird problems / data loss. All that in mind: what the heck is wasting away those 200W? Both my servers use less than 60W together ...
  7. I had two Plus licenses - just upgraded one to Pro. I can totally understand the change to the licensing model and will still purchase additional licenses as needed - with subscription fees - and will still recommend it to anyone looking for a server/NAS/media OS. There needs to be a steady income flow for you guys to continue funding development - otherwise there will be no more versions at some point in time. I'm 100% sold and satisfied that Unraid is a much better general-use home server system for my use case than any of the other options I've so far seen and used (like, for example, DSM - or heck, much more complex setups like Proxmox/k8s/or any of the countless server distros I've been through over the years).
  8. IcyBox uses SATA controllers connected via crappy USB controllers that are specifically troublesome on Linux - that's also why they don't support Linux. I have one too, and it's simply unfixable by hardware design. Send it back if you can, otherwise sell it and use plain and simple SATA; if you can't, eSATA; or if you really need to use USB - look into TerraMaster, because they support Linux officially, or are at least using hardware that works properly without any bigger bugs (from the countless reviews I've seen, not from my own experience). Then again, I won't recommend USB for your array unless you really can't avoid it ... as a backup target or pool it might be fine ...
  9. It could be your NVMe:

Jan 10 10:23:37 Tower kernel: BTRFS info (device nvme0n1p1): relocating block group 2234621362176 flags
Jan 4 16:18:00 Tower kernel: BTRFS info (device nvme0n1p1): device deleted: missing

But there's also a SEGFAULT and OOM @ Plex in your syslog above ... Seeing this is an N5105, maybe it's related to a stuck C-state, or a BIOS setting making this unstable. Is this a NAS board perhaps, or an Odroid H2/H3? Before looking into the board/CPU, I'd try to really eliminate the typical culprits properly - memory first. I would personally run the memory test for at least 48 hours to make sure it's really not missing anything ... some of them are really hard to track.

Jan 3 00:20:01 Tower kernel: Modules linked in: xt_nat xt_tcpudp veth xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype br_netfilter xfs md_mod zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) tcp_diag inet_diag ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs af_packet 8021q garp mrp bridge stp llc r8169 realtek i915 x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm drm_buddy i2c_algo_bit ttm drm_display_helper drm_kms_helper crct10dif_pclmul processor_thermal_device_pci_legacy processor_thermal_device drm crc32_pclmul mei_pxp crc32c_intel ghash_clmulni_intel sha512_ssse3 processor_thermal_rfim processor_thermal_mbox sha256_ssse3 sha1_ssse3 mei_hdcp intel_rapl_msr wmi_bmof aesni_intel intel_gtt processor_thermal_rapl i2c_i801 nvme crypto_simd intel_rapl_common cryptd agpgart int340x_thermal_zone intel_cstate i2c_smbus ahci mei_me
Jan 3 00:20:01 Tower kernel: i2c_core libahci intel_soc_dts_iosf nvme_core syscopyarea mei sysfillrect sysimgblt tpm_crb iosf_mbi thermal video fb_sys_fops fan tpm_tis tpm_tis_core wmi backlight tpm intel_pmc_core acpi_pad button unix [last unloaded: realtek]
Jan 3 00:20:01 Tower kernel: ---[ end trace 0000000000000000 ]---
Jan 3 00:20:01 Tower kernel: RIP: 0010:unlink_anon_vmas+0x67/0x137
Jan 3 00:20:01 Tower kernel: Code: 6b 08 48 89 ef 49 8b 75 00 e8 42 e5 ff ff 49 8d 75 50 48 89 df 48 89 c5 e8 7c 42 fe ff 49 8b 45 50 48 85 c0 75 0a 49 8b 45 48 <48> ff 48 38 eb 29 48 8b 43 18 48 89 df 48 8b 53 10 48 89 42 08 48
Jan 3 00:20:01 Tower kernel: RSP: 0018:ffffc9000e6e7ce8 EFLAGS: 00010246
Jan 3 00:20:01 Tower kernel: RAX: ffff0081049f1068 RBX: ffff8881e4f9a3c0 RCX: ffff8881e4f9a3e0
Jan 3 00:20:01 Tower kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
Jan 3 00:20:01 Tower kernel: RBP: ffff8881049f1068 R08: 000000000000000c R09: ffff88822e6d1800
Jan 3 00:20:01 Tower kernel: R10: 0000000000000001 R11: ffff88822e6d1808 R12: ffff88821f5ea990
Jan 3 00:20:01 Tower kernel: R13: ffff8881049f1068 R14: ffff88821f5ea9c8 R15: dead000000000100
Jan 3 00:20:01 Tower kernel: FS: 0000000000000000(0000) GS:ffff88856c080000(0000) knlGS:0000000000000000
Jan 3 00:20:01 Tower kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 3 00:20:01 Tower kernel: CR2: 000014dc800f31e0 CR3: 0000000195136000 CR4: 0000000000350ee0
Jan 3 00:20:01 Tower kernel: note: startCustom.php[14117] exited with preempt_count 1
Jan 3 00:20:01 Tower kernel: Fixing recursive fault but reboot is needed!

In the end it could also be the power supply - that's the third culprit I'd look at ...
  10. If I go back across the posts here - shfs is likely only the layer it ends up crashing in. See this report here: https://github.com/libfuse/libfuse/issues/589#issuecomment-857484504 libfuse has basically been in life-support mode since 2021. This could likely hit all *nix & BSD systems using a FUSE filesystem, depending on the edge case they trigger. Interestingly, mergerfs is mentioned in the same issue: https://github.com/trapexit/mergerfs/issues/965 I think I encountered something similar on Debian 9 with OMV, SnapRAID + MergerFS myself, which was much more serious in regard to file losses (which is why I moved to Unraid). So be careful if you try to use MergerFS/SnapRAID (similar issues). There are so many layers that can possibly cause these issues - it could very well also be caused by shfs itself, since open-source development of it was discontinued some 20 years ago (heh). The good thing with Unraid is that you can easily downgrade - and stay on a version that stays stable for you. You don't have to upgrade. Things like this will happen with all distros. Just trying to apply some common sense here - I very much respect your decision / move too, and I also see the need for Unraid evolving more towards ZOL and possible replacement layers ... I personally went the Synology -> Kubernetes + distributed FS route before going back to OMV and in the end Unraid ... Enterprise solutions are so full of issues flagged as stable that I consider this all a breeze in comparison (yeah, I know how that sounds - might very well be a user issue too 😅)
  11. Unbalanced is something I wouldn't recommend honestly if you value your files (sorry, just being honest). If you need a multi-threaded move that is much faster than rsync, use rclone copy ... you can install rclone via the Nerd Tools plugin. Afterwards do an rsync pass for the proper permissions and be done with it ... terminal only though, nothing point & click. In addition to that: Seafile is something you shouldn't use on spinning rust - same for anything similar, unless you know what you're doing. You will need to control the block size/pack size and adjust it accordingly, because any HDD will struggle with that amount of files. (I have a lot more files by the way ... and never had that issue on either of my two Unraid devices so far.) You shouldn't move files to /mnt/user/[shares] anyway unless you know what you're doing - it's much smarter to target the individual disks yourself; that way you're also avoiding the overhead. So /mnt/disk[n]/sharename ... If you're using ZFS you might want to make sure the pool is created on the individual disk before doing so, and that it's not just a plain folder ... if there are still troubles, well, then this might be a bug. --------- On-topic: Besides this, it all seems like symptoms - not the root cause. Trying to fix an error you have no root cause for is like searching for a needle in a haystack (even more so if you don't have access to the hardware). So if moving to Ubuntu helped you - so be it. I personally (almost) lost a shit ton of stuff from two workstations with enterprise hardware running Ubuntu LTS in the last two years due to messed-up updates - good luck with that. But then again those were powered by quite a bit more modern hardware ... so not comparable. And in the last 4 years I also had several issues with Ubuntu servers and their auto-updates on legacy hardware ... If I'd recommend anything, it would be something like openSUSE Tumbleweed - or a more "trustable" stateless distro.
Good luck with your venture, and don't let the negative emotions eat you away.
  12. You'll have to turn off turbo too (as mentioned in the powertop readme), and probably also limit the PL1/PL2 power limits - keeping the PL1 time window within 32 seconds. The exact values you'll have to find yourself (I think mine are set to 5000 and 8000). If it doesn't reach C8, then it might be related to your NVMe drive ... have you tried removing that yet?
  13. Those consumption levels seem quite fine - 20W for 4x HDD and 2x SSD is quite good. I reach 15.8W with a lot of tuning and C8 on an N5105 board with the same amount of drives. I'd say the difference is that you use 3x RAM sticks, and maybe C6 vs C8 - and obviously the drives themselves. If you remove everything but one RAM stick, you might see lower consumption - have you also enabled GEAR2 for the memory? That could give you a little bit more power savings too ...
  14. @scy01d @Marshsr Here you go ... no guide or disclaimer for now on how to reach C8 (I've started from scratch 6 times now and always reach it again). I did add a quick readme.txt to the root directory of the USB stick. You should unpack this to a clean, FAT32-formatted USB stick and simply reboot - it should flash everything ... Be sure to READ THE README first! Edit: ffs, tons of misspellings in the readme ... not going to fix that Edit #2: If you're scared of the contents and/or have your own EFI boot files ready, all you need is basically the /EFI/R208_Mod.bin file jit-010101_20231122_bkhd_nas_5105_bios_mod.zip Edit #3: Here's my current build and consumption with this:
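The per-drive ZFS snapshot workflow mentioned in post 6 can be sketched roughly like this. The pool/dataset names (`disk1`, `disk1/media`) are hypothetical examples, not from the post - on Unraid the array disks get their own single-drive pools, but verify your actual pool names with `zpool list` first:

```shell
# Take a manual snapshot of one share's dataset on an individual-drive pool
# ("disk1/media" is an example name - check "zfs list" for your real datasets)
zfs snapshot disk1/media@manual-$(date +%Y%m%d)

# List snapshots to confirm it exists and see space usage
zfs list -t snapshot -r disk1

# Roll back to a snapshot if something went wrong (destructive - discards
# everything written after the snapshot, so double-check before running):
# zfs rollback disk1/media@manual-20240101
```

This is exactly the upside of individual ZFS drives in the array the post describes: snapshots per disk, while the parity drive still covers whole-disk failure.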
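For the USB enclosure trouble described in post 8, a quick way to see which kernel driver the enclosure landed on (UAS is the path that tends to misbehave with cheap bridge chips) - a sketch using standard Linux tools, nothing Unraid-specific:

```shell
# Show the USB topology with the driver bound to each device:
# enclosures show up as "Driver=uas" or "Driver=usb-storage"
lsusb -t

# Check the kernel log for UAS resets / errors on the bridge
dmesg | grep -i -E 'uas|usb-storage'

# As a workaround, UAS can be disabled per device via a quirk.
# The vendor:product ID below is a placeholder - take the real one from lsusb.
# echo 'options usb-storage quirks=1234:5678:u' > /etc/modprobe.d/disable-uas.conf
```

Falling back to usb-storage trades speed for stability; as the post says, it still won't make a badly designed bridge chip reliable enough for an array.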
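The rclone-then-rsync move from post 11 would look roughly like this - paths and the transfer count are examples; as the post says, target the individual disks (`/mnt/disk[n]/sharename`), not `/mnt/user`:

```shell
# Multi-threaded copy between disks - 8 parallel transfers (example value)
rclone copy /mnt/disk1/sharename /mnt/disk2/sharename --transfers=8 --progress

# Second pass with rsync: rclone doesn't carry Unix permissions/ownership,
# so let rsync fix up metadata (-a = archive, -H = hardlinks)
rsync -aHv /mnt/disk1/sharename/ /mnt/disk2/sharename/

# Only after verifying the copy, remove the originals:
# rm -r /mnt/disk1/sharename
```

The two-pass split is the design point here: rclone for raw parallel throughput, rsync for correct metadata.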