fluisterben

Members
  • Posts: 102
  • Joined
  • Last visited

Converted

  • Gender
    Male


fluisterben's Achievements

Apprentice (3/14)

Reputation: 5

  1. This is still an issue. My disks spin back up almost immediately after being spun down, and barely any of them even spin down when asked in the first place. This used not to be the case, with the exact same hardware and software config. Something is waking them up, and it costs us a ridiculous amount of power. In fact, the reason I noticed this issue is that we wanted to know what had changed in our power consumption. It turned out to be the Unraid server.
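A quick way to narrow down which drive is being woken and when: log each drive's power state periodically and correlate the timestamps with syslog. This is a sketch, not an Unraid feature; it assumes `hdparm` is installed and that the `/dev/sd[a-o]` names match your array.

```shell
# One snapshot of each drive's power state; run from cron every few minutes
# and correlate the timestamps with spin-ups in syslog.
# /dev/sd[a-o] is an example range; adjust it to your drives.
for dev in /dev/sd[a-o]; do
    # hdparm -C issues CHECK POWER MODE, which does not wake a sleeping drive
    state=$(hdparm -C "$dev" | awk '/drive state is/ {print $NF}')
    printf '%s %s: %s\n' "$(date '+%F %T')" "$dev" "$state"
done
```

A drive that flips from `standby` back to `active/idle` between two snapshots was woken in that window, which gives you a time range to search for in the syslog.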
  2. I agree, it's unbelievably complex. This procedure should at the very least be scripted/automatable, I think. It is in most other NAS systems, like Drobo, Synology, OMV, etc.
  3. ~# powertop --auto-tune
     modprobe cpufreq_stats failed
     Loaded 0 prior measurements
     RAPL device for cpu 0
     RAPL device for cpu 0
     Devfreq not enabled
     glob returned GLOB_ABORTED
     the port is sda
     the port is sdb
     the port is sdc
     the port is sdd
     the port is sde
     the port is sdf
     the port is sdg
     the port is sdh
     the port is sdi
     the port is sdj
     the port is sdk
     the port is sdl
     the port is sdm
     the port is sdn
     the port is sdo
     Leaving PowerTOP
     OK, I'm new to this. What do those "failed" and "aborted" messages mean? I suppose I need to take my old monitor down to the crawlspace and change the BIOS settings on this machine, no?
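The `modprobe cpufreq_stats failed` line usually just means that optional module isn't built into the kernel; on its own it doesn't prove a BIOS problem. A sketch for checking whether CPU frequency scaling is active at all, using the standard sysfs paths (which may simply be absent if speed-stepping is disabled in the BIOS):

```shell
# Report the active cpufreq driver, governor, and available governors.
# If none of the files exist, no scaling driver is loaded at all.
for f in scaling_driver scaling_governor scaling_available_governors; do
    p=/sys/devices/system/cpu/cpu0/cpufreq/$f
    if [ -r "$p" ]; then
        printf '%s: %s\n' "$f" "$(cat "$p")"
    else
        printf '%s: not available (driver not loaded, or disabled in BIOS)\n' "$f"
    fi
done
```

If all three report "not available", a trip to the BIOS is probably warranted; if a driver and governor show up, frequency scaling is already working and the modprobe message can be ignored.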
  4. What exactly do you mean by this? You have posted 5 command lines, which is rather confusing. The sdparm output does not state anything about drives being spun down or not, or at least I don't see it.
  5. The winbindd error seems to be caused by Samba/smb, since I have at most 4 machines connecting using winbind, not over 200. https://bugzilla.samba.org/show_bug.cgi?id=3204
  6. Apr 2 13:19:03 unraid9 atd[6027]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6055]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6053]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6057]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6056]: Userid 0 not found - aborting job 3 (a00003019b50c7)
     Apr 2 13:19:03 unraid9 atd[6059]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6061]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6060]: Userid 0 not found - aborting job 3 (a00003019b50c7)
     Apr 2 13:19:03 unraid9 atd[6065]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 winbindd[9470]: [2021/04/02 13:19:03.085234, 0] ../../source3/winbindd/winbindd.c:1255(winbindd_listen_fde_handler)
     Apr 2 13:19:03 unraid9 winbindd[9470]: winbindd: Exceeding 200 client connections, no idle connection found
     Apr 2 13:19:03 unraid9 atd[6066]: Userid 0 not found - aborting job 3 (a00003019b50c7)
     Apr 2 13:19:03 unraid9 atd[6282]: Userid 0 not found - aborting job 3 (a00003019b50c7)
     Apr 2 13:19:03 unraid9 atd[6285]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 winbindd[9470]: [2021/04/02 13:19:03.085493, 0] ../../source3/winbindd/winbindd.c:1255(winbindd_listen_fde_handler)
     Apr 2 13:19:03 unraid9 winbindd[9470]: winbindd: Exceeding 200 client connections, no idle connection found
     Apr 2 13:19:03 unraid9 atd[6063]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6067]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6070]: Userid 0 not found - aborting job 3 (a00003019b50c7)
     Apr 2 13:19:03 unraid9 atd[6280]: Userid 0 not found - aborting job 3 (a00003019b50c7)
     Apr 2 13:19:03 unraid9 atd[6058]: Userid 0 not found - aborting job 3 (a00003019b50c7)
     Apr 2 13:19:03 unraid9 atd[6289]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6072]: Userid 0 not found - aborting job 3 (a00003019b50c7)
     Apr 2 13:19:03 unraid9 atd[6071]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6074]: Userid 0 not found - aborting job 3 (a00003019b50c7)
     Apr 2 13:19:03 unraid9 atd[6073]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6075]: Userid 0 not found - aborting job 4 (a00004019b50bd)
     Apr 2 13:19:03 unraid9 atd[6076]: Userid 0 not found - aborting job 3 (a00003019b50c7)
     OK, my /var/log/syslog fills up with these errors within minutes. I have no idea where they come from. I've already looked into at as the source, and apparently there are many job files spooled from it, but the winbindd error is a mystery to me and seems directly related. Anyone seen this before? It started after updating Unraid to the latest stable version.
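If stale at jobs are what keeps firing, you can list and clear them from the shell. A sketch; the spool path varies by distribution (the path below is a common Slackware-style location, so verify it on your box), and the job number is just the one from the log above:

```shell
# List pending at jobs (job number, date, queue, user)
atq
# Inspect the raw spool directory in case jobs keep firing that atq
# doesn't show; the path is distro-dependent
ls -l /var/spool/atjobs/
# Remove a stuck job by the number atq printed (4 is the example
# number from the log above)
atrm 4
```

Clearing the orphaned jobs should at least stop the "Userid 0 not found" flood, which may in turn quiet the winbindd connection spam if the two are related.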
  7. How does one 're-enable the disk' other than what TS wrote? I also followed the guide to re-enable the disk: stopping the array, removing the disk from the array, starting the array, stopping the array, adding the disk back to the array, and letting it do a parity-sync rebuild. You seem to know of some miraculous other kind of 're-enable' option that is not documented anywhere. For most people that will be too late by then, since they have already rebuilt it.
  8. They are just a 'Share', so in my case /mnt/user/nxt, which does not show any settings for permissions relating to access from a VM. It shows Export set to Yes, and 'Secure' for SMB, but this isn't SMB. So should I change permissions from the shell then? I really have no idea about the inner workings of that passthrough share in Unraid; it's not documented anywhere.
  9. OK, pulling this thread back up, because I have issues again. I run a Debian VM with (among others) nextcloud and nginx on it. It has this part in its xml:
     <filesystem type='mount' accessmode='passthrough'>
       <source dir='/mnt/user/nxt'/>
       <target dir='nxt'/>
       <alias name='fs0'/>
       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
     </filesystem>
     Now, /nxt is working from within the VM, but the permissions under it seem to be problematic. Even though I can set them just fine from within the VM, under /nxt, nextcloud still complains about "Home storage for user x not being writable" and has issues sharing stuff within nextcloud. I've tried to find out why, but the logs are unclear. Any ideas?
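For reference, a stanza like the one above is typically a 9p/virtio filesystem passthrough, and with accessmode='passthrough' the guest sees the host's numeric UIDs on the files, so they must line up with the guest's users. A sketch of checking this inside the Debian guest; the mount tag `nxt` comes from the target dir in the XML, and `www-data` is an assumption about which user nextcloud runs as:

```shell
# Inside the guest: mount the 9p export (add to /etc/fstab to persist)
mount -t 9p -o trans=virtio,version=9p2000.L nxt /nxt
# With accessmode='passthrough' the guest sees the host's numeric UIDs,
# so check whether the web-server user can actually write there:
sudo -u www-data test -w /nxt && echo writable || echo "not writable"
```

If that reports "not writable", chowning the files on the host to the UID that matches the guest's web-server user (or switching the stanza to accessmode='mapped') would be the usual fixes to try.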
  10. There are no names to resolve when you proxy from outside a docker container using nginx. Not much of what I read here makes any sense. The name resolving is done outside of the container here, for nginx, with dyndns. nginx listens on that name and serves its vhost, then proxies to/from the docker container on a specified port, which isn't 80 or 443, because those are already used in the network. Port numbers are not 'resolved' by DNS.
  11. You wrote: "You should use 80 instead of the port you mapped to the container as it uses dockers internal network to resolve names.", which is just incorrect. First, docker does not by definition 'use an internal network'; you set it up to do so. Second, names are resolved using DNS, which can point anywhere, regardless of where in a network you are.
  12. Name resolution has nothing to do with either the network or the ports of a docker container.
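To make the point above concrete: a DNS lookup returns addresses only, and the port is supplied separately by whatever opens the connection. A minimal sketch, using `localhost` simply as a name that is guaranteed to resolve:

```shell
# Resolve a name: the answer is an address, nothing about ports
addr=$(getent hosts localhost | awk '{print $1; exit}')
echo "$addr"
# The port is chosen by the client: e.g. curl "http://$addr:8080/"
# targets port 8080 regardless of what DNS returned.
```

The same applies to a container name on a docker network: Docker's embedded DNS maps the name to the container's IP, and the client still has to pick the port itself.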
  13. You need to create a docker network like this:
      docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=br0 mymcvlannet
      and then something like this in the run line:
      docker run --net mymcvlannet --ip 192.168.1.111
      That way your container serves its ports directly on that .111 LAN IP, so you don't have to run strange proxy setups. Only problem is, I have no idea where to put this in the unRAID GUI.
  14. That will not work, because the docker container is still part of the 0.0.0.0 network of Unraid; there's no new IP for that docker instance. I'd prefer it if it were that way, but none of the dockers for unRAID do this.
  15. So, basically you're saying: remove the drive, put a replacement in, let it do a parity rebuild, done. If that is the procedure, why isn't Unraid just telling me so while it happens? The way things are portrayed, I'm not sure whether the data in the array is intact or complete when I just kick that drive out. Here's my advice to the unRAID devs: I get warnings that a drive is going bad, more failures, more SMART errors, slowly deteriorating, and I want to replace it. The first thing a user wants to do is have Unraid READ from that dying disk whatever is still intact (and readable), move it out, and then discard the blocks. Or at the very least be really assured that intact copies of whatever may be rotting away on that drive exist somewhere outside of it, so the user is not losing data.
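What's described above, reading everything still intact off the dying disk before touching it, is essentially what GNU ddrescue does. It isn't built into Unraid's GUI, but a sketch from the shell; the device name and target paths are examples, and the target filesystem must have room for a full-size image of the disk:

```shell
# First pass: copy everything readable quickly, skipping bad areas
# (-n skips the slow scraping phase; the map file records progress)
ddrescue -n /dev/sdX /mnt/disks/rescue/failing.img /mnt/disks/rescue/failing.map
# Second pass: go back and retry only the bad areas, up to 3 times each
ddrescue -r3 /dev/sdX /mnt/disks/rescue/failing.img /mnt/disks/rescue/failing.map
```

Because the map file persists, the second run resumes where the first left off instead of re-reading the whole disk, which matters a lot on a drive that may not survive many full passes.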