NAS-newbie

  1. I idle at 63W (at 7% CPU load with a T-series i9), but then I run two enterprise SSDs for cache (U.2 NVMe), 3x PCIe sticks (idle at this point) and 4x 20TB WD enterprise HDDs (spun down right now). No GPU, as I rely on the one in the CPU for my Plex needs.
  2. Yes, I also run with two 32GB modules at 4800 (as do several people who have reported this in the thread), but I would like to know if anybody has managed to run with 4x32GB and, in that case, with what modules and at what speed...
  3. Still no experiences of running this board with the full 128GB of RAM? Curious which ECC sticks work for this and what speed was possible?
  4. I am using a Core i9-13900T - probably total overkill unless you run a lot of containers, have many users etc., but I will also use it as a build server and for some other things, and since the build had already become quite expensive I wanted some future margin on capacity rather than saving a few bucks on the CPU. It runs cool and does not consume a lot of power, so it has worked out nicely. If you really intend to push the CPU at times and want to avoid thermal throttling, go for a good cooler (when I pushed it really hard it went up to 200W+ according to the display on the UPS)...
  5. Sorry, I missed that "root cause", and yes, it was indeed the problem - I must say it is really unexpected and odd that this makes a difference - I would even go so far as to say the other (non-working) order is more intuitive than the one that works... To avoid more users hitting this problem I would really try fixing this, and please also change the default - at least for a cache pool, and probably in general on UnRAID, an alternative with redundancy is by far the most likely to be desired (and the one with the least risk of catastrophe if selected by mistake). An odd thing I noticed is that even with my RAID0 cache pool, the shares placed on it (cache only) were shown as "green" (which I interpreted as "protected"), even though they were in reality highly vulnerable to a single disk failure. Because of this I did not notice the problem until I realized that the size of the cache pool was too large to be a mirror...
  6. Mine reads "1 group of two devices", and this is OK, right? I can mention that about a week or so ago I did manage to create a ZFS cache pool (I tried several things before it worked, like first formatting the disks for ZFS before adding them, leaving them empty, etc.), so I am not sure how I finally succeeded, but then I used "default settings" and was burned by the same thing as the user in the referenced thread, i.e. I got RAID0 (not a good default), and that was the reason I had to recreate the pool (after all the usual steps to empty it etc.). But now it does not work again, and with these correct settings I can't figure out how to make it work...
  7. I have looked at SpaceInvaderOne's tutorial and believe I followed it quite closely - I did the following steps: create a new cache pool (name cache, ZFS, mirror, two devices) and assign two empty (no filesystem) disks. When I start the array the disks are, as expected, shown as unmountable. I specify that I want to format unmountable array disks, but this does NOT result in them getting partitioned/formatted; instead it finishes more or less immediately with no error message and the disks are still "unmountable". I can add that I previously had a BTRFS cache pool working perfectly on the same hardware, and that I have a non-cache ZFS pool that was created without any issues. The hardware has worked perfectly up to now, so I think the risk of hardware problems is very low (I recently did an extensive RAM test when the motherboard was new, the server has ECC memory, etc.). What am I missing?
  8. I need to copy files from an unassigned HDD (it was previously part of an array, but I have done a new configuration with larger disks, so I use it as a "backup") to my UnRAID array, but for a lot of directories and files I get the rsync error message `failed to stat "/mnt/disk1/:somefilename": Invalid or incomplete multibyte or wide character (84)`, and when it is a directory that is the problem, also `*** Skipping any contents from this failed directory ***`. I assume these faulty files and directories (there are many hundreds of them) were created over SMB from data on a Windows machine (I live in Sweden, where we have åäöÅÄÖ, and I can see from the failing names that it is the ones including these international characters). From a few examples in the error messages it SEEMS that Ä in directory names shows up as \#303\#204 and ä as \#303\#244, while in ordinary file names Ä is \#216 and ä is \#204. This seems strange to me - i.e. do the files use more than one encoding?! As there are so many affected files and directories spread out over the whole disk, it is not really feasible to rename them one by one, and I instead need to find a way to tackle the encoding issue with a script and some utility that works on character encodings. I have googled these types of problems, and it seems one can tell rsync to convert file names between encodings, but then I need to know for sure what encodings I have and want, and sadly I don't and have no idea how to find out 😞 There is also a Linux utility, "convmv", but once again I need to somehow figure out what encodings to convert from and to, and this utility is not available in UnRAID (and I can't find it in NerdTools either)... Sadly I know next to nothing about character encodings, in general as well as in Linux/UnRAID in particular, so I am not sure what I need to do to fix this?
Anybody with expertise on character encodings who can give me some tips on how to figure out what my current and desired encodings are and how to best solve the problem? I have not changed anything related to character encoding in UnRAID, neither in SMB, when mounting disks, nor in general, so everything is "default". I run UnRAID 6.12.2.
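For what it's worth, the escapes rsync prints are octal byte values, and they can be decoded: \#303\#204 is the two-byte UTF-8 sequence for Ä (0xC3 0x84), which is actually fine, while the single bytes \#216 (0x8E) and \#204 (0x84) match Ä and ä in the old DOS code page CP437 - so the disk likely does contain names in two different encodings, and only the single-byte ones need fixing. A minimal sketch (the CP437 assumption and the dry-run approach are mine, not confirmed) that finds names which are not valid UTF-8 and shows what a CP437 reinterpretation would look like:

```python
import os

SRC_ENCODING = "cp437"  # assumption: legacy DOS code page; try "cp850" or "latin1" too

def report_bad_names(root):
    # Walk bottom-up so directories could be renamed after their contents.
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        for name in dirnames + filenames:
            raw = os.fsencode(name)      # the raw on-disk bytes of the name
            try:
                raw.decode("utf-8")      # already valid UTF-8 -> leave alone
                continue
            except UnicodeDecodeError:
                pass
            fixed = raw.decode(SRC_ENCODING)  # reinterpret the bytes
            print(f"{os.path.join(dirpath, name)!r} -> {fixed!r}")
            # os.rename(os.path.join(dirpath, name),
            #           os.path.join(dirpath, fixed))  # uncomment after checking
```

If the proposed names look right in the dry run, the `os.rename` call can be uncommented; alternatively, once the source encoding is known, rsync's `--iconv` option can convert the names during the copy instead.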
  9. Yes, it could be the SATA cable I used... just like you say, I rarely keep track of which cables are new or old; I just have a box with them and other disk accessories... The slim SAS cable definitely looked more premium and, more importantly, was for sure new, not bent and folded many times as old cables may have been over the years...
  10. I would try ASUS support for the password problem.
  11. Happy to hear you got it working - I actually did some performance measurements and compared the 4 fixed SATA ports with the four over the slim SAS connector, and as expected there was no difference in performance level (due to the same protocol and nominal speed), but for some reason the variation over time in both bandwidth and latency was a little lower (i.e. less jitter) - perhaps there is another controller in the chipset or some other difference between them?! Anyhow, I am using these slightly "better" slim SAS/SATA ports for my parity and largest/most-used drives - this is partly a hobby project for me, so making every little improvement I can think of is part of the fun 🙂
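In case anyone wants to reproduce that comparison: the "jitter" can be put into a single number as the coefficient of variation (standard deviation divided by mean) of repeated latency samples from a benchmark such as fio. A small sketch with made-up numbers - the samples below are hypothetical, only the calculation is real:

```python
import statistics

def jitter_stats(latencies_ms):
    """Return (mean latency, coefficient of variation) for a list of samples."""
    mean = statistics.mean(latencies_ms)
    cv = statistics.stdev(latencies_ms) / mean  # lower CV = less jitter
    return mean, cv

# Hypothetical samples: same average latency, different spread.
fixed_sata = [4.0, 4.1, 3.9, 4.2, 3.8]
slim_sas   = [4.0, 4.05, 3.95, 4.05, 3.95]

for label, samples in [("fixed SATA", fixed_sata), ("slim SAS", slim_sas)]:
    mean, cv = jitter_stats(samples)
    print(f"{label}: mean={mean:.2f} ms, CV={cv:.3f}")
```

With equal means, the port with the lower CV is the one showing less jitter, which is why parity and the busiest drives would go there.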
  12. Sadly I am not the type to document stuff like this, but I have some vague recollection that there was one more thing, in addition to the setting you mention, that I had to change to make it work... sorry I can't be more precise 😥
  13. Just out of curiosity, what is different about the format performed as part of the array build procedure compared to formatting a disk with a supported file system under Linux on another machine, or say in UnRAID as an unassigned disk? Is there some special metadata written, some special requirement on formatting parameters, or...? What I was hoping to achieve was to fill the disk before it was added to the array (I assume writes are slower in the array, since one also needs to maintain parity, but perhaps there is no significant difference?)...
  14. I am removing my old small drives and also switching the remaining drives to ZFS, and to do this I am creating a new configuration. I also have one new drive that I would like to use in the new array; it is already formatted with ZFS and contains data I would like to keep (and have included in the initial parity calculation). I tried to create a new configuration consisting of what should become the new parity drive and the already formatted ZFS drive, but when I start the array and begin the parity calculation, the new drive is for some reason marked as "unmountable", even though it is perfectly mountable as an "unassigned disk"...?! Is this because I have never done a "preclear" of this drive, or what may be the reason? What is my best way forward in this situation?
  15. I can also say that so far I really like the build quality, the features of the motherboard BIOS, etc. It is not an inexpensive board, but in this case it seems to deliver what one pays for. And as you say, at least in Sweden where I live, it seems to be the only motherboard you can ACTUALLY BUY if you want a "Raptor Lake" CPU with ECC memory, so there is not much of a choice.