b0m541

Everything posted by b0m541

  1. See the following article: Solution proposal: one could increase the NFS options text field size limit appropriately, or (preferred) allow several lines per export to remove the limit entirely, i.e. allow adding and editing several lines.
  2. I am trying to export a given NFS share to a number of networks, basically 2 groups of networks, where each group has its own options. Here are the problems I ran into:

     (A) The text field used by the UI limits the number of characters it can take. If I have to enumerate the options for each network, the whole thing does not fit into the text field. I see two approaches, but they do not work:

     (1) Group the networks so that the options only need to be defined once per network group. This seems to be supported only for NIS netgroups; in the NFS exports man page I did not find a way to somehow group networks and then define options per group. Example: network1,network2(options1) network3,network4(options2) network5(options3). Note that grouping by comma-separated networks is not supported by NFS.

     (2) Separate the options into (a) global options that shall be applied to all networks in the list and (b) per-network options that deviate from the global options. The problem here is that the unraid GUI already defines global options, which you can see in /etc/exports: -async,no_subtree_check,fsid=123. Example: -globaloptions network1(options1) network2(options2) network3(options3). In /etc/exports it will then write: "sharename" -async,no_subtree_check,fsid=100 -globaloptions network1(options1) network2(options2) network3(options3). It seems that NFS then ignores "-globaloptions", as there are already global options followed by a space character, so you cannot append your own global options to them.

     Do you have some ideas how to overcome this limitation without bypassing the GUI and doing some "hacks"? I am looking for a way to express different options for a number of networks in a more compact way that still fits into the line limit of the GUI text field. (Both layouts are sketched below.)
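     For reference, a minimal /etc/exports sketch of what the two attempts above look like on disk; the networks and option names are placeholders taken from the examples in this post, not real values:

         # approach (1): comma-grouped networks - this is the syntax that NFS does not accept
         "sharename" network1,network2(options1) network3,network4(options2) network5(options3)

         # approach (2): the GUI's own defaults block with a user-supplied "-globaloptions"
         # block appended after it - the second block appears to be ignored
         "sharename" -async,no_subtree_check,fsid=100 -globaloptions network1(options1) network2(options2) network3(options3)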
  3. Dashboard: 213 GB of 500 GB used (42.5%)
     Krusader summing up content of /mnt/cache: 167.4 GB

     btrfs-usage.txt:

         Pool: cache
         Overall:
             Device size:         931.52GiB
             Device allocated:    402.06GiB
             Device unallocated:  529.46GiB
             Device missing:          0.00B
             Used:                394.72GiB
             Free (estimated):    267.72GiB  (min: 267.72GiB)
             Free (statfs, df):   267.72GiB
             Data ratio:               2.00
             Metadata ratio:           2.00
             Global reserve:      512.00MiB  (used: 0.00B)
             Multiple profiles:          no

                                   Data       Metadata  System
         Id  Path                  RAID1      RAID1     RAID1     Unallocated
         --  ----------------      ---------  --------  --------  -----------
          1  /dev/mapper/sdk1      199.00GiB   2.00GiB  32.00MiB    264.73GiB
          2  /dev/mapper/sdj1      199.00GiB   2.00GiB  32.00MiB    264.73GiB
         --  ----------------      ---------  --------  --------  -----------
             Total                 199.00GiB   2.00GiB  32.00MiB    529.46GiB
             Used                  196.00GiB   1.35GiB  48.00KiB

     Having read this information it doesn't get any clearer to me...
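     One way to read these numbers, assuming the dashboard reports raw usage while btrfs RAID1 stores two copies of everything (Data ratio 2.00), is the following back-of-the-envelope arithmetic; the interpretation of Krusader's figure is an assumption, not something taken from the output above:

         Used (raw, both copies):             394.72 GiB
         / Data ratio:                          2.00
         = logical data stored on the pool:  ~197.4 GiB  ~= 212 GB, close to the 213 GB the dashboard shows

         Krusader's 167.4 GB only sums apparent file sizes under /mnt/cache, so metadata,
         the global reserve and space still referenced by old copy-on-write extents are
         possible contributors to the remaining gap.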
  4. The unRAID dashboard reports that 239 GB of my cache are used, while summing up /mnt/cache in Krusader gives 165 GB. How come? (Using a btrfs RAID1 on 2 SSDs as cache.)
  5. OK, having watched the video and read the Cache issues section in the forum FAQ on unRAID V6 (https://forums.unraid.net/topic/46802-faq-for-unraid-v6/), I realize that my understanding of the RAID1 cache was limited/wrong. What I actually wanted was XFS on top of 2 RAID1 SSDs, but it now seems to me that this is not possible using the UI. Is it correct that the RAID1 setup of the cache is a feature of BTRFS and cannot be done (at UI level) with XFS?
  6. Thanks! (I never did use auto start) Will come back if something seems fishy.
  7. Thanks! The old / new setups are as follows: old: bare metal, IT-mode controller directly connected to SATA drives; new: bare metal, Supermicro A2SDi-8C-HLN4F onboard controllers connected to a hotswap backplane, to which the drives are connected. Will the change from the IT-mode controller to the onboard controllers of the new board create extra work, and if so, what needs to be done?
  8. I am planning to replace my unRAID hardware, except for the drives (incl. the USB boot key); everything else will be replaced. I assume such a stunt has already been done by many people before. Is there a community tutorial or Limetech documentation guideline on how to execute such a system migration? I am thankful for any pointers in that direction.
  9. A long time ago I thought BTRFS was a good idea and set up all new hardware drives using BTRFS. Later I changed my mind and wanted to convert everything to XFS. Over time I converted all data drives back to XFS. However, the cache is still using BTRFS and I would like to change that.

     Situation:
     - The cache contains several shares that are held on the cache ("Use cache: Only") purely for performance reasons.
     - The cache is implemented as a pool of 2 SSDs.

     Questions:
     - To convert to XFS, do I first need to empty the cache pool and then reformat it as XFS?
     - How do I empty the cache of share data? There are different kinds of shares:
       - "Yes" shares: is it right to just run the mover once to move them to the data drives?
       - "Only" shares: is it right to first change these shares from "Only" to "Yes" and then run the mover once?
     - Is there anything special to consider when reformatting SSD pools?
     - After reformatting to XFS, set the shares that were formerly "Only" to "Prefer" (from "Yes"), run the mover once to move their content back to the cache, then set these shares back to "Only" (from "Prefer")? Is this procedure correct? (A rough sketch of the sequence follows below.)

     Is there somewhere in the official or the community documentation a procedure described for this adventure?
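     A minimal sketch of the sequence described in the questions above (not a confirmation that it is the right procedure); the GUI steps are comments, only the verification command is actual shell, and the path is an example:

         du -sh /mnt/cache/*      # see what currently lives on the pool
         # 1. change each "Only" share to "Yes" (leave the "Yes" shares as they are), then run the Mover
         du -sh /mnt/cache/*      # should now show (nearly) nothing left on the pool
         # 2. stop the array, set the pool's filesystem to XFS, start the array and format the pool
         # 3. set the formerly "Only" shares to "Prefer", run the Mover to pull their data back,
         #    then switch those shares back to "Only"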
  10. If you do not see what I am talking about, we just have different risk models at hand. That can happen. Good for you.
  11. Yes, I could and I have, above and other times before, but it's of no use as it contradicts the paradigm under which the diags are collected. Concrete examples: user names, group names, drive IDs, ... If you give me diags and a number of physical machines, I will always be able to identify the machine to which the diags belong. As long as we cannot avoid this, some people will not be willing to post diags in public.
  12. Yes, I have. They contain a lot of non-anonymous data, e.g. account names, and any combination of that many technical parameters would identify each system individually. It is a catch-22 if you make full diags your primary data source instead of guiding the user with educated questions.
  13. Fair enough and both understandable. I am still not comfy with posting diags in public and never will be.
  14. Ultimately it is the user's data; maybe the user should decide. Or: offer to look at the diags that the user puts somewhere, using the URL they provide. There is no technical need to publish the diags publicly, basically forever. What about that?
  15. I am sure there are always good intentions behind that, otherwise someone would not take the time to respond. Assuming that each user knows as little as possible, and therefore asking for as much data as possible up front, is a way to reduce back and forth; I get that. Just another example: have you ever called a technical hotline and been treated like you know as little as possible ("have you turned on the device?", "is the power plug in?")? How does that feel? (I am not saying this is what happened here, but it highlights the underlying assumption that someone doesn't know anything about the problem domain.) I guess it is impossible to know what someone knows when they ask a question, but the person simply does not know the answer and may know a shitload more than you assume. I personally prefer to get targeted questions so that I can provide exactly the information that is needed. It also ensures that I learn how to solve the problem on my own the next time. Just my 2 cents.
  16. Something I did not know before asking (which is why I asked) is that the parity will be rebuilt after successfully running "New Config". Next time I will know. To be honest, I do not see what a look at the diags would change about that: the question was how a parity rebuild can be triggered manually, and that can be answered with "run New Config" no matter what the diags say.

      I perceive a tendency of some people to primarily ask for diags, even if the answer to a question does not depend on the data in the diags, rather than answer a question that could be answered easily and correctly without looking at them. People from different places have different notions of privacy, and for some - including me - having to publicly post diags to get an answer to a question that can be answered without them does not feel right. To me it seems disproportionate, similar to asking a physician which side effects a certain medication can have and the physician not answering that question before you "please get undressed except for the underpants". Of course the example is exaggerated, but here everything is public and archived for a long time, so providing information requires more consideration.

      Coming back to the concrete case: providing a screenshot is less of a privacy problem and I would have done that, had I not by then found a solution on my own.
  17. Fair enough, but to also be fair and clear: I asked how the parity rebuild can be triggered manually. That question could have been answered without access to the diags, and that would have been greatly appreciated. Just my feedback.
  18. Due to a lack of options I repeated this procedure: https://wiki.unraid.net/index.php/UnRAID_6_2/Storage_Management#Reset_the_array_configuration This time the result was as described in the Wiki: the content of the parity drives was invalidated and a parity rebuild started when I started the array. I do not understand why that did not happen the first time, but now it works.
  19. As far as I can see I can only start a Read Check, which is peculiar. The Wiki says: "A Read Check is also the type of check started if you have disabled drives present and the number of disabled drives is larger than the number of parity drives." (https://wiki.unraid.net/Manual/Storage_Management#Read_check) Since I have 2 parity drives and only unassigned 2 drives, I should be able to run a parity check or a parity rebuild? This is what my original question was about: may I unassign 2 drives at the same time and then run a parity rebuild once? The answer was positive. Obviously it does not work like that.