NLS

Everything posted by NLS

  1. OK, clear. I can still only hope #1 gets "properly" implemented for disaster situations some time. In my case, I don't want to destroy my parity yet, as it is still undetermined whether the disk will recover, be cloned, or what.
  2. Experts in the room: is it possible to achieve #1 partially by going to maintenance mode and then using Unassigned Devices to mount the cache (and not mount the other disks at all)? Would I then be able to start some containers and VMs? I also think UD allows mounting as read-only. Can I use that to access my remaining data without affecting parity?
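
     Roughly the kind of thing I mean - just a sketch, and the device names and mount points below are examples only, not my real ones:

        # array data disk mounted strictly read-only (for XFS, norecovery also skips log replay)
        mkdir -p /mnt/disks/disk1_ro
        mount -o ro,norecovery /dev/sdb1 /mnt/disks/disk1_ro

        # cache mounted normally (read/write) so appdata/domains/system keep working
        mkdir -p /mnt/disks/cache_rw
        mount /dev/nvme0n1p1 /mnt/disks/cache_rw
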
  3. I know - I even have a couple of test VMs on the array on purpose. This won't "hurt" though; only those will fail if started. It is a very useful function to have for the people who need it. The ability to keep working in a production environment, even partially, is very important.
  4. Maybe you guys are right. That said, the first idea is super helpful and I really hope it gets added (and it is not destructive - even if the array is fully normal, the only difference the user would notice is that the shares are read-only). It is actually an in-between of normal operation and maintenance mode. Maintenance mode doesn't mount any disk or the cache. This one would mount every disk available in the array (and start the array even with disks and data missing), but read-only (so that parity is not killed), so that people can access their important (remaining) data - AND mount the cache (and normally domains, system and appdata, if they still reside on the cache) read/write, so that VMs and containers can start.
  5. You confused me again. To test whether disk9 is emulated properly, I need a disk assigned as disk9? Can't I leave it missing just for the test (since I really will be missing it until I get an empty new one)? Ah! I understand. Since I am making a new config, I cannot make a new config with a "missing" disk9 unless I assign something to position 9. It is just for the first start, to consider disk9 initially part of the array (without looking at its contents) and then make it "missing" to see if it is emulated properly. I get it now. My only issue with this is: if I get a new empty disk9, add it to the new array just in maintenance mode (just so I can make it "missing"), take it offline, check whether it is emulated (with the old disk9 contents)... and then bring it back into the array... how will the array know to rebuild the disk9 contents onto it (instead of just rebuilding parity from the empty data of the new disk9)?
  6. I have made this post: ...which leads me to think of two things currently missing from UNRAID that would be very nice to have implemented (and usable directly from the GUI).
     1) Actually implement a "read-only, bring online as is" mode. If disks (beyond what parity can recover) are missing, their data will be missing (as they cannot be emulated), but parity is not touched, pending the disks actually coming back later. This feature would be great for temporary use, when actually bringing even a limited server online is critical. The cache disk can remain read/write (so containers and VMs will normally work - if the needed images are on the cache, as is the default), but no mover will run, as the actual array remains read-only.
     2) Make it possible through the GUI to swap a missing disk with another one, trusting that it has the same content (bit for bit) as the missing one - for cases of replaced controllers, which normally change the disk identification. It should be an "expert" option, with all the "are you sure you know what you are doing" prompts in place. In the thread above it seems there is a workaround for this already?
  7. So, to bring this to my case: I have disk 5 and disk 9 broken. I replace the disk 5 controller, so although it is the old disk 5, it shows up as a new disk unknown to the array. I make a new array config, assign the new disk 5 as disk 5 again, check "parity is already valid" and "maintenance mode", and see if disk 9 is emulated OK. If it is, it means my new disk 5 is "OK" for UNRAID to use normally, and it can be used to actually rebuild disk 9 (when I replace disk 9 with a fresh one). Right?
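
     For the actual "is disk 9 emulated OK" check from the console, with the array started in maintenance mode, I gather it is something like this (the /dev/md9 name is my assumption of the usual naming for array slot 9 - newer releases apparently use /dev/md9p1):

        # read-only ("no modify") filesystem check of the emulated disk9
        xfs_repair -n /dev/md9
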
  8. 1) I understand. So this involves ignoring the disks for now (so go with solution #3), and if I manage to "fix" the disks and their contents, just see them with UD and move the data manually back to the array, right? (And then possibly add the disk back into the array if it is fixed.) Did I understand correctly?
     1b) I used to know this, but don't remember... if I manage to fix the disk later, but my array has already been re-synced without the missing data in the meantime, can I add the "new" disk with its existing data? (i.e. not format it when I extend the array, and instead "add" its contents and re-sync parity again)
     2) More details about the manual way I can make UNRAID treat a disk with a different ID as a previous-ID disk of the array? (if I know the platters' contents are exactly as they were)
     3) Thanks.
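
     For the "move the data manually back" part, I picture something along these lines, with the recovered disk mounted read-only through UD (all the paths here are examples only):

        rsync -avh --progress /mnt/disks/recovered_disk9/ /mnt/user/restored_from_disk9/
        # verify the copy, then the recovered disk can be wiped and re-added to the array
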
  9. So I have an array with 11 disks + parity + cache. I managed to kill two disks (I'll spare you the details, but it was my fault). The problem is 99.99% the controller of the two disks (probably the power supply input), and it happened before the disks had even fully powered on (heads probably parked) - so the data is 99.999% intact. First of all I am looking into repairing the actual controllers (at least one, because then parity will help me with the rest), but I am also looking at the worst-case scenario (as on one of the disks I do have data I cannot find again), so I would like to know these:
     1) Is it possible to somehow bring the server online in some "maintenance read-only" mode with the two disks missing, and access my remaining data (and possibly bring up containers and other services that don't rely on the array)? And not mess up the parity (this is why I say read-only), in case I do bring the missing disks online later.
     2) If the controller is replaced on one of the disks, I guess the disk will have a different ID, yet it is potentially still the same disk and parity should stay VALID (to recover the other disk). Is it possible to tell UNRAID to treat that disk (with the different ID) as the same disk that is missing? I am guessing there is no such mode of operation, but I would very much love it if I could do that.
     3) If I don't recover my disks and data and get two new disks, the proper way to bring the array online with the remaining data is in the FAQ, I guess?
     Thank you.
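
     On question 2, my understanding is that the array slots are matched against the identification strings (model + serial) visible below, not against /dev/sdX, so whether the disk counts as "different" depends on what serial the replacement controller board ends up reporting - but this is just my understanding:

        ls -l /dev/disk/by-id/ | grep -v part
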
  10. let me see... ...yes this works
  11. I might need help. Can I PM you later?
  12. Actually I still don't have this. USB Manager says 2022.05.20, but it also doesn't find an update for itself. (Plugin updates work fine on my server - I just did a couple 5 minutes ago.) Maybe some version red flag? I am on 6.10.3.
  13. I guess not released yet. "Any minute now"?
  14. (Quoting "USB button?") Erm... I should have looked harder.
  15. Is it possible to hide the "USB" menu item, which sits very intrusively between MAIN and SHARES, and put it in Settings or Tools instead? (Maybe make this an optional setting?)
  16. OK, this may sound weird, but I got a "USB" button on the menu bar, between Main and Shares. I don't remember if I did something to get it (or if it is part of some plugin)... Can someone help me?
  17. Any idea why extracting (at least 7z) is SUPER slow with this docker (at least in my setup)? And by slow I mean slow even for UNRAID. It is way faster to open the 7z (which resides on the array) over the network on my Windows PC and extract it to the USB disk I want, than to open the same 7z in Krusader "locally" and extract it to the same USB disk mounted directly (with the Unassigned Devices plugin) on my server. Any ideas?
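
     A quick way to narrow it down, I suppose, is timing the exact same extraction from a shell inside the Krusader container and then again from the UNRAID console (paths are examples; the binary may be 7z, 7za or 7zz depending on the image):

        time 7z x /mnt/user/archives/test.7z -o/mnt/disks/usbdisk/out/
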
  18. Is anybody using Geyser with Paper on Crafty 4?
  19. Well, it all depends. What made me move to a folder structure?
     1) Ease of accessing the contents.
     No worries about a size I have to manage manually, which works two ways:
     2) I don't have to worry that the image is too small for the wealth of my containers and their "residue".
     3) I don't feel I have limited my cache's ability to expand as much as possible, or left dead space inside the docker image.
     4) It could be (debatably) faster to access, as there is one less filesystem "level" and one less level of fragmentation, overhead, etc.
     Is it something magic that everybody should do? No.
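
     If anyone wants to compare the two approaches on their own system, something like this shows what is actually being consumed (the folder path is an example - use whatever your docker data directory is set to):

        docker system df                      # what docker reports as used space
        du -sh /mnt/cache/system/docker/      # actual on-disk size of the docker folder
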
  20. I have an issue with this. While half of the issue is probably a crafty-controller issue (the GUI just dies), the other half is possibly an issue with this container. If I try to restart the container, it throws a bunch of attempts to chown some files and then fails to start the server, because it believes there is an instance already running (the one whose GUI died, so it couldn't shut down gracefully). So I have to try to find any processes related to it and kill them manually. Totally sucks. EDIT: Actually the problem is the session.lock file remaining. It needs a manual delete. Probably the container stop (or restart) process should look for this file and delete it after some timeout?
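
     Something like this is what the manual cleanup boils down to for me (the container name and appdata path are examples - adjust to your own setup):

        docker stop crafty-4
        # remove the stale lock the dead GUI left behind
        find /mnt/user/appdata/crafty-4 -name session.lock -delete
        docker start crafty-4
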
  21. (Quoting "What WAS my bottleneck?") You are right, I check parity once per month, after all.
  22. (Quoting "What WAS my bottleneck?") But I don't need support. Are you sure you read what I wrote?
  23. So my server was roughly this:
      - i7 4771
      - 24GB DDR3 RAM
      - 2x SAS 2008 controllers holding 12 SATA disks (11 data + parity)
      - SSD cache on on-board SATA
      And I changed it to:
      - R5 5600G
      - 32GB DDR4 RAM
      - the same 2x SAS 2008 controllers (now holding only the 11 data disks)
      - parity on on-board SATA (I don't know why it occurred to me to do it like that... thinking maybe the SAS controllers would be in READ mode and the onboard in WRITE? That used to be an issue back in the IDE days)
      - cache on M.2 NVMe
      Thing is, with the old system parity was verified (every 15 days or so) at less than 60MB/s! (Which I distinctly remember being 2 or 3 times that back when I had far fewer disks.) The new system seems to check parity at roughly DOUBLE the speed of the old one (>100MB/s) - although I will know exactly when it finishes. ...So what was the main bottleneck of the old system? The CPU? The change of bus for the parity disk? Faster RAM? Possibly a slightly faster/newer PCIe generation for the SAS 2008 cards? I know the easy answer is "all of these", but really, from someone who knows the inner mechanics of the system... what affected it more?
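
     If someone wants to chase it properly, I suppose the first step is measuring each disk on its own while the array is idle (rough sketch - /dev/sd? will also catch the flash and any SATA SSDs, so ignore those lines):

        for d in /dev/sd?; do
            echo "== $d"
            hdparm -t "$d"
        done
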
  24. The question is: is the system made to look for a file named "keyfile" in /root? EDIT: Yes. Thanks.
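
     For reference, one way people seem to put the keyfile in place is from the go file (/boot/config/go), along these lines (the source location on the flash is just an example):

        cp /boot/config/custom/keyfile /root/keyfile
        chmod 600 /root/keyfile
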