fluisterben

Members
  • Content Count: 97
  • Joined
  • Last visited

Community Reputation: 5 Neutral

About fluisterben

  • Rank: Newbie

Converted

  • Gender: Male


  1. The winbindd error seems to be caused by Samba/smb, since I have at most 4 machines connecting using winbind, not over 200. https://bugzilla.samba.org/show_bug.cgi?id=3204
  2. Apr 2 13:19:03 unraid9 atd[6027]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6055]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6053]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6057]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6056]: Userid 0 not found - aborting job 3 (a00003019b50c7)
     Apr 2 13:19:03 unraid9 atd[6059]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[60 …
  3. How does one 're-enable the disk' other than what the thread starter wrote? I also followed the guide to re-enable the disk: stop the array, remove the disk from the array, start the array, stop it again, re-assign the disk and let it do a parity-sync rebuild. You seem to know of some other miraculous 're-enable' option that isn't documented anywhere. For most people that will be too late, since they will already have rebuilt.
  4. They are just a 'Share', so in my case /mnt/user/nxt, which does not show any settings for permissions relating to access from a VM. It shows Export set to Yes and 'Secure' for SMB, but this passthrough isn't SMB, so should I change the permissions from the shell then? (A permissions sketch follows after this list.) I really have no idea about the inner workings of that passthrough share in unRAID; it's not documented anywhere.
  5. OK, pulling this thread back up, because I have issues again. I run a Debian VM with (among others) Nextcloud and nginx on it. It has this part in its XML:

       <filesystem type='mount' accessmode='passthrough'>
         <source dir='/mnt/user/nxt'/>
         <target dir='nxt'/>
         <alias name='fs0'/>
         <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
       </filesystem>

     Now, the /nxt mount is working from within the VM (a mount sketch follows after this list), but the permissions under it seem to be problematic. Even though I can set them just fine from wit …
  6. There are no names to resolve when you proxy from outside of a docker container using nginx. Not much of what I read here makes any sense; the name resolving is done outside of the container here, for nginx, with DynDNS. nginx listens on that name and serves its vhost, then proxies to/from the docker container on a specified port, which isn't 80 or 443, because those are already in use on the network. Port numbers are not 'resolved' by DNS. (A minimal proxy sketch follows after this list.)
  7. You wrote: "You should use 80 instead of the port you mapped to the container as it uses dockers internal network to resolve names." which is just incorrect. First, Docker does not by definition use an internal network; you configure it to do so. Second, names are resolved using DNS, which can point anywhere, regardless of where in a network you are.
  8. Name resolution has nothing to do with either the network or the ports of a docker container.
  9. You need to create a docker network like this: docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=br0 mymcvlannet and then something like this in the run line: docker run --net mymcvlannet --ip 192.168.1.111 ... That way your container serves its ports directly on that .111 LAN IP, so you don't have to run strange proxy setups (a full sketch follows after this list). Only problem is, I have no idea where to put this in the unRAID GUI.
  10. That will not work, because the docker container is still part of unRAID's 0.0.0.0 network; there's no separate IP for that docker instance. I'd prefer it if it were that way, but none of the dockers for unRAID do this.
  11. So basically you're saying: remove the drive, put a replacement in, let it do a parity rebuild, done. If that is the procedure, why isn't unRAID just telling me so while it happens? The way things are portrayed, I'm not sure whether the data in the array is intact or complete when I just kick that drive out. Here's my advice to the unRAID devs: I get warnings that a drive is going bad, more failures, more SMART errors, slowly deteriorating, and I want to replace it. The first thing a user wants is to have unRAID READ from that dying disk what is still intact (and rea …
  12. OK, SSDs added to the cache pool and ran:

      ~# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache -v
      Dumping filters: flags 0x7, state 0x0, force is off
      DATA (flags 0x100): converting, target=64, soft is off
      METADATA (flags 0x100): converting, target=64, soft is off
      SYSTEM (flags 0x100): converting, target=64, soft is off

      which I'll have to wait and see if it works, but it looks good thus far (a monitoring sketch follows after this list).

      ~# btrfs fi show
      Label: none  uuid: f18f37c9-5244-4567-b88f-0bdcaa32e693
        Total devices 7  FS bytes used 937.73GiB
        devid 2  size 894.25GiB  used 893.54GiB  pat …
  13. Why is it bad to wipe the new devices? There's nothing on them. So, here's what I've done so far:
      - I've (successfully) converted the 5-SSD btrfs cache from raid10 to raid6.
      - Took out 2 of the 5 SSDs and connected the 2 new SSDs.
      - Fired up unRAID again.
      The array started, but I can't do anything regarding disks, because it says "Disabled -- BTRFS operation is running", so I cannot stop the array and/or format the new SSDs (a device-add sketch follows after this list). Under Cache it says "Cache not installed" and then shows the Cache2/Cache3/Cache4 SSDs as normal (because they *ar …
  14. Yes, I did try the unBALANCE plugin, but it keeps telling me about permissions and errors, which simply aren't valid (I've thoroughly checked), and then it doesn't allow me to 'unbalance' a drive out, so to say. Still, having the array rewrite every sector of a new drive seems horribly overkill. I'm more for the way StableBit DrivePool does it, where it basically allows you to say which dirs need to have which number of copies in the pool, and each drive's content is accessible separately. In fact, when I first started with unRAID, I thought it was more similar to CoveCube's DrivePool, tur …
  15. This is really a missing feature! Knowing that the array has more than enough free storage space to entirely decommission a drive, it would be best to be able to move its data off before taking the bad drive out (a manual rsync sketch follows after this list). That would also greatly speed up data restoration when a new drive is put in, since there's less reading and writing to be done.
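
For item 4 above, a minimal shell sketch of fixing the share's permissions from the command line. Only /mnt/user/nxt comes from the post; the nobody:users ownership and the 0777/0666 directory/file modes are an assumption about what the rest of the array shares use.

# Assumption: nobody:users with world-readable/writable modes matches the
# other shares on this box; adjust to taste.
chown -R nobody:users /mnt/user/nxt
find /mnt/user/nxt -type d -exec chmod 0777 {} +   # directories
find /mnt/user/nxt -type f -exec chmod 0666 {} +   # files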
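
For the <filesystem> passthrough in item 5, a sketch of how such a share is typically mounted inside the guest over 9p/virtio. The mount tag nxt and the mount point /nxt come from the post's <target dir='nxt'/>; everything else assumes a stock Debian kernel with the 9p modules available.

# Inside the Debian VM: the 9p mount tag equals the <target dir> value ('nxt')
mkdir -p /nxt
mount -t 9p -o trans=virtio,version=9p2000.L nxt /nxt

# Or persistently via /etc/fstab:
# nxt  /nxt  9p  trans=virtio,version=9p2000.L,rw  0  0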
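
For the proxy setup discussed in items 6 and 7, a sketch of the kind of nginx vhost meant there, written as a shell heredoc. The hostname cloud.example-dyndns.net, the published port 8081 and the config path are placeholders, not values from the post.

# Placeholder hostname, port and path; nginx resolves nothing here, it just
# listens on the DynDNS name and forwards to the container's published port.
cat > /etc/nginx/conf.d/nextcloud-proxy.conf <<'EOF'
server {
    listen 80;
    server_name cloud.example-dyndns.net;

    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
nginx -t && nginx -s reload   # check the config, then reload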
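
For item 9, the macvlan sequence as a runnable sketch. The subnet, gateway, parent interface br0 and the .111 address come from the post; the 'nginx' image and the container name 'webtest' are placeholders.

# Create the macvlan network once (note the -d macvlan driver flag)
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=br0 mymcvlannet

# Run a container with its own LAN IP; no -p port mappings are needed,
# it answers directly on 192.168.1.111
docker run -d --name webtest --net mymcvlannet --ip 192.168.1.111 nginx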
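
For the raid10 conversion in item 12, a couple of standard btrfs-progs commands to watch the balance from the shell; only /mnt/cache is taken from the post.

btrfs balance status /mnt/cache      # progress of the running conversion
btrfs filesystem usage /mnt/cache    # allocation per profile once it finishes
btrfs filesystem show /mnt/cache     # per-device usage, as shown in the post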
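
For item 13, a sketch of the generic, non-GUI btrfs way to wipe and add the two new SSDs once the running operation has finished; on unRAID the GUI normally manages the pool, and /dev/sdX and /dev/sdY are hypothetical device names.

# Only one balance/convert can run at a time, hence the
# "BTRFS operation is running" message; wait for it to finish first.
btrfs balance status /mnt/cache

# Hypothetical devices for the two new SSDs
wipefs -a /dev/sdX /dev/sdY                  # clear any old signatures
btrfs device add /dev/sdX /dev/sdY /mnt/cache
btrfs balance start -dconvert=raid6 -mconvert=raid6 /mnt/cache   # spread across all members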
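
For item 15 (and the scenario in item 11), the manual way to empty a failing data disk today is usually a disk-to-disk copy before pulling it. disk3 and disk4 are hypothetical slot numbers; the paths are unRAID's per-disk mounts.

# Hypothetical: disk3 is the failing drive, disk4 has enough free space.
# Copying between /mnt/diskN mounts keeps files inside the same user shares,
# since shares are just top-level folders on each data disk.
rsync -avX /mnt/disk3/ /mnt/disk4/
# Verify the copy, then the emptied disk can be removed and parity rebuilt.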