ixit

Members · Posts: 10

  1. Necro necro! Bump! I'd like multipath as well.
  2. Thanks for the clarification on shares vs. disks usage. It helped validate that this was the tool I wanted to use to try this, rather than an external one which would have inherent limitations. I thought I didn't need to do this and was just checking: my process is very clean, so I shouldn't have needed to. Despite that, I still found a fair number of dupes. This was a good reminder of the old accountant's saying: 99% correct is 1% wrong. It doesn't mean your process doesn't work; it's a reminder of why there are checks and balances. I definitely think it's worth running occasionally, probably first on shares and then on disks. Observations: this takes a few gigs of memory with larger data sets just against filenames, and can probably consume even more with more complicated data sets or filters. It takes some time to process, so it's probably best to fire it off (before you run out of space) and return later. Delete/dedupe with care... Good luck. Edit: Most of these were simple name-collision dupes (e.g. "Book 1") 😂🤦‍♂️ but enough were real to be worth the effort. (A rough sketch of separating name collisions from real duplicates is at the end of this post list.)
  3. Intel Arc (and Arc Pro!) support.
     Better collective scrub scheduling: sequential runs, or groups (a sketch of a simple sequential scrub is at the end of this post list). While I would like better collective intelligence when executing the scrub (i.e. pause while there's I/O contention for parity), apparently scrub pause doesn't exist at the filesystem level. Oh wells!
     Better formal advice and tools on how to recover space when you've balanced and "lost" it 😅
  4. I was up for nearly a year back around COVID, while we were in that long time gap between versions, and then for a while I didn't upgrade. I just didn't have a reason to restart, so I kept stalling the upgrade. At that point in time, I was using ESXi for my craziness, so that VM (or server, occasionally) would get the restart instead.
  5. 128GB DDR4 Multi ECC. Will be upgrading to 128GB DDR5 or DDR5 ECC as soon as someone makes a right-angle AMD board that isn't crap and supports bifurcation for all the breakouts I want. I run 1-3 VMs and would only use 64 gigs, but I like using the overhead as cache for transcode or other experimental nonsense without wear and tear on SSDs. Additionally, I keep my appdata on a separate Optane AIC cache drive which does not transit files, so I can beat up my dockers as much as I want. THAAAAAANK you so much for allowing multiple uses for cache drives.
  6. When I faced this issue during a migration from Windows Server to BinHex, I eventually identified that somehow my blobs DB had become corrupted - but only the copy that had been moved over to Unraid. My original copy was OK. I copied the unzipped blobs DB back over and overwrote it, did permissions tweaks, and was right as rain. TLDR: Check the integrity of your blobs DB as well. You may need to run this PRAGMA integrity_check manually, outside of the script (a minimal sketch is at the end of this post list).
  7. I see this setting now on Unraid 6.12.3. It seems to me that it would be extremely helpful to implement compression per user share, which aligns more with how humans sort content, and then apply it per path as the user shares are laid out on the underlying disks. Is this on the radar? While one could do this per disk and shuffle the media accordingly, that adds implementation complexity for the end user - such that adding an extra 500GB drive might be the solution one chooses, rather than setting compression on an existing 8TB drive by enabling compression on a share holding highly compressible books and documents. As such, approaching the feature with this user-share-centric thought may also reduce barriers to adoption. For example:
     Disks 1-6 (each, individually, would show):
       FS: BTRFS. Compression is not set per disk. The compression option is currently set to: "Compression options available to User Shares." To set compression per disk, change your selection under the "Compression options available to..." setting under Settings --> Disk Settings.
     User share ebooks: compression=yes
     User share familydocs: compression=yes
     User share comics: compression=no
     User share HEVC: compression=no
     If possible, it may be wise to allow setting compression exclusively at the user-share level or at the disk level, where setting one disables the other, to avoid contested settings. IMHO setting it at the share level adds complexity, but I would hope it reduces unnecessary compute. I'm very new to the concept - just trying to save space on a lot of compressible data, with a fair amount of data I can't compress in the wings - and I know, for me at least, it would seem to help. (A rough sketch of applying a per-share compression policy to the underlying disk paths is at the end of this post list.)
  8. This seems uncommon and old, so I thought I'd add that I've just run into this error on 6.12.1 and am attempting to resolve it with step-by-step escalations so I can try to pin down what's going on. I am using a SwissBit 8GB PSLC industrial high-endurance drive (SFU3008GC2AE1TO-I-GE-1AP-STD). After a reboot (nothing forced), I'm now bootlooping - but pressing Enter is required each time or it stays stuck. I will pursue the plugin route. I will update this post, as well as later crosspost to the 6.12.1 thread if relevant.
     I was at: device descriptor read/64, error -110
     I resolved with: deleting the Nerd Tools plugin.
     FINAL EDIT: Deleted the Nerd Tools plugin and directory. Booted fine.
  9. I spent a few hours, on-again, troubleshooting connectivity issues on my Unraid server. Observed the following:
     eth0 - 10gig direct link
     eth1 - 10gig direct link
     eth2 & 3 - bonded 1gig - intended default route to the internet
     Can connect by IP on the direct Ethernet link across a static IP interface with a route defined. Can connect by IP on DHCP on this general subnet (after I fixed the route from the prior setup, which had locked me out). Can connect by IP on a static IP on this general subnet. Hostname: did not test.
     Field observations: Unraid plugins failed connectivity; Fix Common Problems could not download definition data.
     Root cause: eth0 had the DNS applicable to eth0 set, but it should have been set for the bond. Default routes were set correctly, but traffic did not flow because of this misleading DNS entry - I missed that the bond did not have its own DNS. (A quick check for this "IP works but names don't resolve" symptom is sketched at the end of this post list.)
     Requested fix: disambiguate DNS in the UX if it applies collectively; but ideally provide DNS lookup across interfaces, such that my internal-only network can look up hostnames and I can also look up google.com. I may be misinterpreting what goes where by like... a long shot. Bottom line: I'd like more clarity on how DNS is handled in the UX. I think there are also some eth0 preference issues at play here, due to my ill-advised use of eth0 and eth1 as direct connections rather than the primary connections.
  10. I experienced a similar issue on my Windows box and diagnosed it as CHIA-3417. https://github.com/Chia-Network/chia-blockchain/issues/3417 I followed all of the various approaches mentioned in that issue, including the detailed one at the end utilizing taskkill and TCPView, but in the end I was only able to get things working again by completely nuking all Chia files (including both /username/.chia/ and /username/appdata/local/chia-blockchain/) except for my config.yaml, then reinstalling (roughly the sequence sketched below). I would be extremely curious if we can find evidence suggesting that this is the same case on the docker, but I can't check that, as my full node is on Windows.
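
For the dedupe notes in post 2: a minimal sketch of separating simple name-collision dupes from real duplicates, assuming shares live under /mnt/user and that "real" means identical SHA-256 content (both are assumptions, not settings from the post). Like the tool described above, it holds every filename in memory, so expect a few gigs of RAM on large data sets.

```python
#!/usr/bin/env python3
"""Sketch: group files by name, then hash only the colliding names to see
which collisions are real duplicates. Paths are placeholder assumptions."""
import hashlib
import os
from collections import defaultdict

ROOT = "/mnt/user"  # hypothetical share root; point at one share to save RAM

def sha256(path, bufsize=1 << 20):
    """Stream the file so large media doesn't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

# Pass 1: collect every path, keyed by bare filename (this is the RAM-heavy part).
by_name = defaultdict(list)
for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        by_name[name].append(os.path.join(dirpath, name))

# Pass 2: only hash names that collide; identical hashes are real dupes.
for name, paths in by_name.items():
    if len(paths) < 2:
        continue
    by_hash = defaultdict(list)
    for p in paths:
        by_hash[sha256(p)].append(p)
    for digest, same in by_hash.items():
        if len(same) > 1:
            print(f'Real duplicate "{name}" ({digest[:12]}...):')
            for p in same:
                print("  " + p)
```

Name collisions that hash differently (the "Book 1" cases) simply don't get printed; actually deleting anything is left to a human.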
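
For the scrub-scheduling wish in post 3: a sketch of the "sequential" idea, assuming btrfs array disks mounted at /mnt/disk1 through /mnt/disk6 (mount points and count are assumptions). It runs btrfs scrub in the foreground on one disk at a time so only one device is under scrub load at any moment.

```python
#!/usr/bin/env python3
"""Sketch: scrub btrfs array disks one at a time instead of all at once.
Mount points below are assumptions, not taken from the post."""
import subprocess

DISKS = [f"/mnt/disk{i}" for i in range(1, 7)]  # hypothetical btrfs disks

for mount in DISKS:
    print(f"Scrubbing {mount} ...")
    # -B keeps 'btrfs scrub start' in the foreground, so the next disk
    # doesn't start until this one finishes.
    result = subprocess.run(["btrfs", "scrub", "start", "-B", mount])
    if result.returncode != 0:
        print(f"Scrub failed or reported errors on {mount}; stopping here.")
        break
```

Grouping would just mean running a few such sequences concurrently; pausing on parity I/O contention would still need something the filesystem doesn't expose, as noted in the post.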
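
For the blobs DB note in post 6: a minimal sketch of running the integrity check by hand, assuming the blobs DB is an ordinary SQLite file (the path below is a placeholder; adjust it to wherever your appdata actually keeps it).

```python
#!/usr/bin/env python3
"""Sketch: manual PRAGMA integrity_check on a possibly corrupted SQLite DB.
The path is a placeholder assumption."""
import sqlite3

DB_PATH = "/mnt/user/appdata/some-app/blobs.db"  # hypothetical location

conn = sqlite3.connect(DB_PATH)
rows = conn.execute("PRAGMA integrity_check;").fetchall()
conn.close()

if rows == [("ok",)]:
    print("Integrity check passed.")
else:
    print("Integrity problems reported:")
    for (message,) in rows:
        print("  " + message)
```

If it reports problems, the fix in the post was restoring a known-good copy of the DB over the corrupted one and redoing permissions.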
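
For the compression request in post 7: a sketch of how a per-user-share policy could be projected onto the underlying disk paths, assuming btrfs array disks at /mnt/disk1..6 and the shares from the example above (all assumptions, not Unraid's actual implementation). It uses btrfs property set, which only affects files written after the property is applied.

```python
#!/usr/bin/env python3
"""Sketch: apply a per-share compression policy to each share's directory on
every array disk. Share names, disk mounts, and zstd are assumptions."""
import os
import subprocess

# Hypothetical policy mirroring the example in post 7 (None = leave unset).
SHARE_COMPRESSION = {
    "ebooks": "zstd",
    "familydocs": "zstd",
    "comics": None,
    "HEVC": None,
}
DISKS = [f"/mnt/disk{i}" for i in range(1, 7)]  # hypothetical btrfs disks

for disk in DISKS:
    for share, algo in SHARE_COMPRESSION.items():
        if algo is None:
            continue  # leave this share uncompressed
        path = os.path.join(disk, share)
        if not os.path.isdir(path):
            continue  # this share has no directory on this disk
        # New files created under this directory inherit the property.
        subprocess.run(
            ["btrfs", "property", "set", path, "compression", algo],
            check=True,
        )
        print(f"{path}: compression={algo}")
```

Existing files stay as they are until rewritten; this is purely an illustration of the "share-centric, applied per path" idea from the post.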
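
For the DNS confusion in post 9: a small check for the symptom described (IP connectivity fine, name resolution broken because DNS was attached to eth0 instead of the bond). The hostname and test IP below are placeholders.

```python
#!/usr/bin/env python3
"""Sketch: confirm whether the problem is routing or DNS. If the raw IP is
reachable but the name doesn't resolve, DNS config is the likely culprit."""
import socket

TEST_NAME = "google.com"   # placeholder external name
TEST_IP = "8.8.8.8"        # placeholder reachable external IP

def can_resolve(name):
    try:
        socket.getaddrinfo(name, 443)
        return True
    except socket.gaierror:
        return False

def can_reach(ip, port=53, timeout=3):
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

print("Configured nameservers (/etc/resolv.conf):")
with open("/etc/resolv.conf") as f:
    for line in f:
        if line.startswith("nameserver"):
            print("  " + line.strip())

print(f"Resolve {TEST_NAME}: {can_resolve(TEST_NAME)}")
print(f"Reach {TEST_IP}:     {can_reach(TEST_IP)}")
```

A reachable IP plus failed resolution points at the nameserver entries (here, the ones that should have been set on the bond) rather than at the default route.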
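
For the Chia reset in post 10: a sketch of the "nuke everything except config.yaml, then reinstall" sequence on Windows. The exact paths (especially where config.yaml sits inside .chia) vary by install, so treat these as assumptions and double-check before deleting anything.

```python
#!/usr/bin/env python3
"""Sketch: back up config.yaml, remove the Chia state and install directories,
then reinstall Chia and restore the config by hand. Paths are assumptions
based on the Windows layout mentioned in the post."""
import shutil
from pathlib import Path

home = Path.home()
chia_dir = home / ".chia"                                     # node state
install_dir = home / "AppData" / "Local" / "chia-blockchain"  # install files
config = chia_dir / "mainnet" / "config" / "config.yaml"      # typical spot

# 1. Copy config.yaml somewhere outside the directories about to be removed.
backup = home / "config.yaml.bak"
if config.exists():
    shutil.copy2(config, backup)
    print(f"Backed up {config} -> {backup}")

# 2. Remove the Chia data and install directories.
for path in (chia_dir, install_dir):
    if path.exists():
        shutil.rmtree(path)
        print(f"Removed {path}")

# 3. Reinstall Chia, then copy config.yaml.bak back into place manually.
print("Reinstall Chia, then restore the backed-up config.yaml.")
```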