fluisterben

Members
  • Content Count: 95
  • Joined
  • Last visited

Community Reputation: 5 Neutral

About fluisterben
  • Rank: Advanced Member
  • Gender: Male

  1. How does one 're-enable the disk' other than what the TS wrote? I also followed the guide to re-enable the disk: stopping the array, removing the disk from the array, starting the array, stopping the array again, re-assigning the disk, and letting it do a parity-sync rebuild. You seem to know of some other, miracle kind of 're-enable' option that isn't documented anywhere... For most people that will be too late anyway, since they've already rebuilt.
  2. They are just a 'Share', in my case /mnt/user/nxt, which does not show any settings for permissions relating to access from a VM. It shows Export set to Yes and 'Secure' for SMB, but this isn't SMB, so: should I change the permissions from the shell then (see the sketch after this list)? I really have no idea about the inner workings of that passthrough share in unRAID; it isn't documented anywhere.
  3. OK, pulling this thread back up, because I have issues again. I run a Debian VM with (among others) nextcloud and nginx on it. It has this part in its xml:
     <filesystem type='mount' accessmode='passthrough'>
       <source dir='/mnt/user/nxt'/>
       <target dir='nxt'/>
       <alias name='fs0'/>
       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
     </filesystem>
     Now, /nxt is working from within the VM, but the permissions under it seem to be problematic. Even though I can set them just fine from within the VM under /nxt, nextcloud still complains about "Home storage for user x not being writable" and has issues with sharing inside nextcloud. I've tried to find out why, but the logs are unclear. Any ideas? (A mount/permissions sketch follows after this list.)
  4. There are no names to resolve when you proxy to a docker container from nginx running outside of it. Not much of what I read here makes any sense. The name resolving is done outside the container here, for nginx, with dyndns: nginx listens on that name, serves its vhost, and then proxies to/from the docker container on a specified port, which isn't 80 or 443 because those are already in use on the network (see the nginx sketch after this list). Port numbers are not 'resolved' by DNS.
  5. You wrote: "You should use 80 instead of the port you mapped to the container as it uses dockers internal network to resolve names." which is just incorrect. First, docker does not by definition 'use an internal network'; you configure it to do so. Second, names are resolved using DNS, which can point anywhere, regardless of where in the network you are.
  6. Name resolution has nothing to do with either the network or the ports of a docker container.
  7. You need to create a docker network like this: docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=br0 mymcvlannet and then something like this in the run line: docker run --net mymcvlannet --ip 192.168.1.111 That way your container serves its ports directly on that .111 LAN IP, so you don't have to run strange proxy setups (see the sketch after this list). The only problem is, I have no idea where to put this in the unRAID GUI.
  8. That will not work, because the docker container is still part of unRAID's 0.0.0.0 network. There's no new IP for that docker instance. I'd prefer it if it were that way, but none of the docker containers for unRAID do this.
  9. So basically you're saying: remove the drive, put a replacement in, let it do a parity rebuild. Done. If that is the procedure, why isn't unRAID just telling me that while it happens? The way things are portrayed, I'm not sure whether the data in the array is intact or complete when I just kick that drive out. Here's my advice to the unRAID devs: I get warnings that a drive is going bad, with more failures, more SMART errors, slowly deteriorating, and I want to replace it. The first thing a user wants is to have unRAID READ from that dying disk whatever is still intact (and readable), move it off, and then discard the blocks. Or at the very least be genuinely assured that intact copies of whatever may be bitrotting on that drive exist somewhere outside of it, so the user isn't losing data.
  10. OK, SSDs added to the cache pool, and I ran:
      ~# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache -v
      Dumping filters: flags 0x7, state 0x0, force is off
      DATA (flags 0x100): converting, target=64, soft is off
      METADATA (flags 0x100): converting, target=64, soft is off
      SYSTEM (flags 0x100): converting, target=64, soft is off
      I'll have to wait and see if it works, but it looks good thus far.
      ~# btrfs fi show
      Label: none  uuid: f18f37c9-5244-4567-b88f-0bdcaa32e693
        Total devices 7 FS bytes used 937.73GiB
        devid 2 size 894.25GiB used 893.54GiB path /dev/nvme0n1p1
        devid 3 size 894.25GiB used 894.25GiB path /dev/sdp1
        devid 4 size 894.25GiB used 894.25GiB path /dev/sdn1
        devid 6 size 953.87GiB used 781.50MiB path /dev/sdj1
        devid 7 size 953.87GiB used 781.50MiB path /dev/sdl1
        *** Some devices missing
      Label: none  uuid: dfa50f2a-9787-4d7a-88a5-7760f6b2e8a6
        Total devices 1 FS bytes used 1.62GiB
        devid 1 size 20.00GiB used 5.02GiB path /dev/loop2
      Label: none  uuid: df5fea13-a625-4b37-b7c2-7fcc3328bc65
        Total devices 1 FS bytes used 604.00KiB
        devid 1 size 1.00GiB used 398.38MiB path /dev/loop3
      I still need to do a new config to deal with ghost devices 1 and 5, I guess, but there's no hurry for that, is there?
  11. Why is it bad to wipe the new devices? There's nothing on them. So, here's what I've done so far:
      - I (successfully) converted the 5-SSD btrfs cache from raid10 to raid6.
      - Took out 2 of the 5 SSDs and connected the 2 new SSDs.
      - Fired up unRAID again.
      The array started, but I can't do anything regarding disks, because it says "Disabled -- BTRFS operation is running", so I cannot stop the array and/or format the new SSDs. Under the Cache it says "Cache not installed" and then shows the Cache2, Cache3 and Cache4 SSDs as normal (because they *are* installed). Is there a way to see the BTRFS operation's status (see the sketch after this list)? It shouldn't take too long since they're fast SSDs, so they should be able to rebuild their raid6 with the 2 SSDs missing, no?
  12. Yes, I did try the Unbalance plugin, but it keeps telling me about permissions and errors which simply aren't valid (I've checked thoroughly), and then it doesn't let me 'unbalance' a drive out, so to speak. Still, having the array rewrite every sector of a new drive seems horribly overkill. I prefer the way StableBit DrivePool does it, where it basically lets you say which dirs need to have which number of copies in the pool, and each drive's content stays accessible separately. In fact, when I first started with unRAID I thought it was more similar to CoveCube's DrivePool; it turns out it isn't, it's just another RAID array, and frankly even the name 'unRAID' isn't really appropriate. All it is, is a GUI for 2 raid arrays (the cache and the parity-controlled array). Here's what I think is really missing in unRAID: I get notices that a drive has a growing number of bad sectors and errors, and then there's nothing that tells me how to save the files that are not corrupted yet on that drive; it just leaves me with "bad drive, bad drive, red alert!". Honestly, that's just not the way to go. There should be a button to safely decommission the drive and safeguard its content. Instead, suddenly the GUI isn't friendly anymore, and we need to go to a shell and dd or ddrescue and such. It's such a Linux disease: pretending to offer a GUI and user-friendly everything, and when push comes to shove we all need to be sysadmins and go shell-scripting again. Don't get me wrong, I like being on a shell with ssh, but that's not unRAID's intended use case.
  13. This really is a missing feature! When the array has more than enough free space to entirely decommission a drive, it would be best to be able to move that drive's data off before taking the bad drive out. This would also greatly speed up restoration when a new drive is put in, since there's less R/W to be done.
  14. OK, so I can rely on unRAID knowing which copies of the files on the array are intact? Some will presumably be corrupt, because this disk5 is quickly dying on me. I will take it out assuming the parity knows.
  15. In CoveCube's StableBit DrivePool I can then decide to remove a drive, with these options (see attached). Is there something similarly easy in unRAID to kick a bad disk out? I tried using the Unbalance plugin for that, but it gives me so many errors I can't even resolve (they're probably not even correct, since they're not possible) that I'm not sure it's the right tool for this. Sure, I can copy or move data from /mnt/disk5 to /mnt/cache or something on the command line (see the sketch after this list), but that doesn't seem like the way to go either, since I'm not sure how unRAID would then know what happened...
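
Regarding post 2: unRAID's user shares are normally owned by nobody:users, so a VM writing with a different uid/gid can end up unable to write into the passthrough mount. A minimal sketch, assuming the share really is /mnt/user/nxt and that nobody:users ownership with group-writable modes is acceptable for everything below it:

      # inspect current ownership and modes on the share (from the unRAID shell)
      ls -ldn /mnt/user/nxt
      ls -ln /mnt/user/nxt | head
      # hypothetical reset to the usual share ownership and permissive modes;
      # adjust user/group to whatever uid/gid the services in the VM actually run as
      chown -R nobody:users /mnt/user/nxt
      chmod -R u=rwX,g=rwX,o=rX /mnt/user/nxt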
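
Regarding post 3: the <filesystem> passthrough in that XML is exposed to the guest as a 9p export with the tag 'nxt'. A sketch of mounting it inside the Debian VM and checking that the web-server user can write there; the msize value and the www-data user are assumptions (nextcloud behind nginx/php-fpm on Debian usually runs as www-data):

      # one-off mount of the 9p export inside the guest; 'nxt' is the target tag from the XML
      mount -t 9p -o trans=virtio,version=9p2000.L,msize=262144 nxt /nxt
      # or persistently via /etc/fstab
      echo 'nxt /nxt 9p trans=virtio,version=9p2000.L,msize=262144,_netdev 0 0' >> /etc/fstab
      # verify that the php/nginx user can actually write where nextcloud expects to
      sudo -u www-data touch /nxt/write-test && echo writable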
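
Regarding post 4: a minimal sketch of the setup described there, with nginx outside docker answering on the dyndns name and proxying to a container whose web port was published on a non-standard host port. The name cloud.example.org and port 8080 are placeholders, not values from the thread:

      server {
          listen 80;
          server_name cloud.example.org;   # the dyndns name nginx listens on
          location / {
              # the container's port 80 was published to host port 8080,
              # e.g. with: docker run -p 8080:80 ...
              proxy_pass http://127.0.0.1:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
          }
      }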
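
Regarding post 7: written out in full, the two commands would look roughly like this; the subnet, the parent interface br0, the network name mymcvlannet and the nginx image are only illustrative:

      # create a macvlan network bridged onto br0, matching the LAN addressing
      docker network create -d macvlan \
        --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
        -o parent=br0 mymcvlannet
      # run a container with its own LAN IP on that network; no port mapping needed
      docker run -d --name web --net mymcvlannet --ip 192.168.1.111 nginx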
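
Regarding posts 10 and 11: the question about seeing the BTRFS operation's status can be answered from the shell with standard btrfs-progs commands; nothing here is unRAID-specific:

      # progress of a running balance/conversion on the cache pool
      btrfs balance status -v /mnt/cache
      # allocation per profile and per device, plus per-device error counters
      btrfs filesystem usage /mnt/cache
      btrfs device stats /mnt/cache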
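
Regarding posts 12 and 15: when falling back to the shell to get still-readable files off a failing array disk, one possible sketch is an rsync from that disk's individual mount to another data disk's mount, which keeps the same share-relative paths so the files stay visible under /mnt/user; disk5 and disk3 are just example source and destination here, not a recommendation for any particular setup:

      # dry run first: see what would be copied off the failing disk
      rsync -avn --progress /mnt/disk5/ /mnt/disk3/
      # then for real; rsync reports any files it could not read
      rsync -av --progress /mnt/disk5/ /mnt/disk3/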