Everything posted by ken-ji
-
I'd like to be able to help (M-ITX user), but as I've seen from other people's comments, some of my components are rather pricey. I went this route:
- cheap Mini-ITX board with dual NIC (Broadcom, so stable enough)
- cheap PSU (was only to power one "green" drive (cache), the drive controller, and fans, nothing else), though I replaced this with a leftover Silverstone 450W Bronze PSU
- regular RAM: 16GB DDR
- small case: Cooler Master Elite 110
- expensive external controller: LSI 9206-16e (future proofing: a single PCIe x8 card for 16 external SAS devices)
- expensive external drive case: Areca 3036 (8-bay 6Gbps SAS/SATA + expander to allow a number of enclosures daisy-chained later on)
Drives are a mix of Seagate 8TB Archives, WD 4TB Reds, and a WD 4TB Green for cache.
-
Creating SSH User and restrict to a single user share
ken-ji replied to mcleanap's topic in Plugin System
unRAID users are actually service users, not real users, by default. Try this: add this line to /boot/config/ssh/sshd_config

AuthorizedKeysFile /etc/ssh/%u.pubkeys

then place the public key inside /boot/config/ssh/root.pubkeys. Easiest to just restart unRAID after this. Then you should be able to ssh in as root (without a password, using the matching private key). However, this means all files will be created as root:root unless your backup mechanism allows you to specify the owner of the uploaded files.
-
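A sketch of those steps in one place, assuming unRAID copies /boot/config/ssh into /etc/ssh at boot (the public key path is illustrative):

```shell
# Tell sshd to look for per-user key files (persisted on the flash drive)
echo 'AuthorizedKeysFile /etc/ssh/%u.pubkeys' >> /boot/config/ssh/sshd_config
# Install the public key for root
cp /path/to/id_rsa.pub /boot/config/ssh/root.pubkeys
# Restart unRAID (or sshd) so the config is copied into /etc/ssh
```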
How to set 802.3ad 'xmit_hash_policy' (Transmit Hash Policy)?
ken-ji replied to NKnusperer's topic in General Support
You'll need to add it to the /boot/config/go file:

# enable layer2+3 in bonding 802.3ad
echo layer2+3 > /sys/class/net/bond0/bonding/xmit_hash_policy
-
Major Intel Security Flaw Is More Serious Than First Thought
ken-ji replied to Frank1940's topic in General Support
Technically, you can via microcode updates, which we aren't doing in unRAID, BTW.
-
How to you add massive amounts of USB drives to the Array?
ken-ji replied to miogpsrocks's topic in General Support
Just wondering: is it still a shared bus if you have multiple USB controllers, say, one per port group? I realize there is a CPU bottleneck with USB storage...
-
Does anyone use SFF-8087 for external too?
ken-ji replied to miogpsrocks's topic in Storage Devices and Controllers
Well, I'm using SFF-8644 to SFF-8088 cables (HBA -> Enclosure) AFAIK. Regarding the OP, it's just a matter of how much you value the data travelling down the wire. SFF-8087 is unshielded and meant for internal use, where case RF noise is minimized by the fact that the parallel cables tend to be short runs. SFF-8088 is shielded and meant for external runs, where shielding takes care of RF noise. Additionally, SFF-8087 doesn't exactly lock in place and could get disconnected with a slight tug, while non-trivial force is necessary to unplug SFF-8088 (and SFF-8644). If your two cases are side by side, I would use SFF-8087 cables only if they are short enough (I believe 1m max for SATA devices on those cables) and the cases are impossible to move apart, i.e. rack mounted. Otherwise, SFF-8088 is best.
-
That card needs an x8 PCIe slot. Your motherboard has only one that will work: the x16 slot (usually used for graphics cards). It's a relatively old controller (circa 2007), and is more likely to be well supported by the Linux kernel, probably under the mptsas driver. Yes, most (if not all) of LSI's HBAs are supported under Linux.
-
AFAIK, standard bonding (LACP) only helps if you have lots of clients contending to access the server, since the bonding does some hashing with the client MAC address to pick which link to use. So in this case a single client will max out its transfer at a single link's speed, about 125MB/s on gigabit (including protocol overhead). LACP is mainly for high availability and scalability with many clients, not high throughput for single/few clients. Cisco and probably the big names have a proprietary bonding called ethernet trunking (?) which, from the HW point of view (of the switch), aggregates the interfaces together and treats them as a single link. AFAIK, this only works between same-vendor switches.
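A toy illustration of the hashing idea (not the kernel's exact implementation; the layer2 policy XORs source and destination MAC addresses and takes the result modulo the number of links, so one client/server pair always lands on the same link). The MAC bytes here are made up:

```shell
# Last byte of two hypothetical MAC addresses, and a 2-link bond
src=0x1a; dst=0x2b
links=2
# Same src/dst pair always hashes to the same link index
echo "link index: $(( (src ^ dst) % links ))"
```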
-
Well, if there were a vulnerability in NGINX, it could be a point of attack. That said, you only expose the stuff that must be external facing, and use VPNs to access anything else. My config, by the way, is internal facing; a separate config (with SSL) and separate DNS names is used to "isolate" public-facing services.
-
Drop the listen lines, and just use server_name ones:

server {
    listen 80 default;
    root /var/www/default;
}

server {
    listen 80;
    server_name mediastore;
    server_name 192.168.2.5;

    location / {
        proxy_pass http://192.168.2.5:8080;
    }

    location /transmission {
        satisfy any;
        allow 192.168.2.0/24;
        auth_basic "Transmission Remote Web Client";
        auth_basic_user_file /config/transmission.passwd;
        proxy_pass http://192.168.2.5:9091;
    }

    location /kibana {
        proxy_pass http://192.168.2.5:9000;
        rewrite ^/kibana$ /kibana/ permanent;
        rewrite ^/kibana/(.*) /$1 break;
        access_log off;
    }
}

Here's mine, where I proxy the unRAID WebUI (on 8080), Transmission on 9091, and Kibana on 9000. My unRAID is on 192.168.2.5, and note that I don't listen on a specific IP; I just set up valid server_names to use and an empty default (kind of a jail).
-
If 192.168.1.82 is the IP of your unRAID server, you are doing it wrong. A docker container gets its own IP in a totally different subnet, and it's dynamic depending on what order the containers get started up. You should make the nginx configuration be the default_server (i.e. any interface IP); that way it won't care about this detail.
-
[6.3.0+] How to setup Dockers without sharing unRAID IP address
ken-ji replied to ken-ji's topic in General Support
Oops, just noticed the wrong capitalization in the post. Corrected.
-
Just to be clear as I seem to have been misunderstood - you normally don't place a single VLAN as both tagged and untagged at the same time on an interface. That's just weird...
-
We are using Slackware as a base for a lives-in-RAM OS. We can patch just about anything except emhttpd and the kernel. All @limetech has to do is push out a package, e.g. samba-5.0.0-x86_64-6.3.3_limetech.txz, and have it installed in /boot/extra (we'll need web UI support for this). You could turn the array off, install the package, then start the array. BAM! Fully patched and the vulnerability fixed, while limetech continues getting the patch rolled into the next release. Since it was installed in /boot/extra, the patch takes over every restart. Then limetech could insert /boot/extra cleanup code in the next release, so once the new version started it would nuke or disable the old patches. I don't see why running on a ramdisk precludes patching; when plugins can do it, the core system should be able to.
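A minimal sketch of that flow, using Slackware's stock installpkg and the hypothetical package name from the post:

```shell
# Stop the array first (via the web UI), then:
cp samba-5.0.0-x86_64-6.3.3_limetech.txz /boot/extra/   # persists on the flash drive
installpkg /boot/extra/samba-5.0.0-x86_64-6.3.3_limetech.txz
# Start the array again; packages in /boot/extra are re-installed on every boot
```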
-
I think your problem is somewhat due to the fact that the native (default) VLAN is also present as a subinterface (br0.10) for VLAN 10. This is a config that never made sense to me, and the Cisco network engineers I've spoken with say the native VLAN of a link should never be part of the tagged VLANs on that link (unless you are preparing to change the native VLAN between switches with minimal disruption). Also, your CentOS VM can do tagged packets, but it should be on br0, not br0.10. The subinterfaces .X assume that anything going through them from the host (unRAID) will be tagged with VLAN id X. So if you turn on tagging and are on br0.10, your packets get double tagged, and it's anybody's guess where the packet goes from there.
-
I'll take a stab at this. Please elaborate: you have these VLANs in use according to your unRAID config: untagged VLAN (unRAID uses this too), tagged VLAN 10, tagged VLAN 20, tagged VLAN 30, tagged VLAN 100. Which ones are used by your switch? Which bridge (br0, br0.5, br0.10, br0.20, br0.30, br0.100, virbr0) is being used by the CentOS VM? What else is tied to the bridges? (# brctl show) Posting the diagnostics should help a lot too.
-
This has nothing to do with unRAID itself. Check the docker support thread (if any). Otherwise, a generic answer is to mount something, say /mnt/user/appdata/docker/ssh, into the container as the .ssh directory of the user that will be doing the ssh.
-
I hope you meant Windows PC 2 10GbE - 192.168.3.2, because I don't see how it would work unless you have a router on the 10Gbps network. And even then you won't be able to do 10Gbps on both Windows machines at the same time, as they would bottleneck on the single 10Gbps link on the unRAID server.
-
@Squid I'll add that as part two.
-
Why can't I delete a file (without permissions from root/nobody/Unix user/999/etc)? My VM/Docker created some files but I can't access them from Windows?

First, a primer: Unix filesystem permissions/ACLs (access control lists) in a nutshell.

There are always 3 permission groups (owner, group, other):
owner - if you own the file, these permissions apply
group - if you are a member of the group, these permissions apply
other - if you are not the owner or a member of the group, these permissions apply
Permissions are cumulative; there is no "deny" permission, so if one group grants permission, permission is granted.

You can easily check the permissions of a file from the shell:

root@Tower:~# ls -l /mnt/user0/slackware/
total 92
-rwxr-xr-x 1 nobody users   410 Aug 10  2016 getall.sh*
-rw-r--r-- 1 nobody users  5336 Oct 29 15:20 mirror-slackware-current.conf
-rwxr-xr-x 1 nobody users 39870 Nov 30  2013 mirror-slackware-current.sh*
-rw-r--r-- 1 nobody users  5397 Oct 29 15:20 mirror-slackware.conf
lrwxrwxrwx 1 root   root     27 Jan 28  2016 mirror-slackware.sh -> mirror-slackware-current.sh*
drwxrws--- 1 root   root     56 Jan 16  2014 multilib/
-rwxr-xr-x 1 nobody users  7165 May 20  2010 rsync_slackware_patches.sh*
drwxrws--- 1 root   root   4096 Jun 11  2015 sbopkgs/
lrwxrwxrwx 1 root   root     16 Jan 28  2016 slackware64 -> slackware64-14.1/
drwxrws--- 1 nobody users  4096 May 28  2016 slackware64-14.1/
drwxr-xr-x 1 root   root   4096 Dec  5 02:00 slackware64-14.2/
drwxrws--- 1 root   root   4096 Aug 11  2016 slackware64-14.2-iso/
drwxr-xr-x 1 nobody users  4096 Dec  5 02:01 slackware64-current/
drwxrws--- 1 nobody users  4096 May  1  2015 slackwarearm-14.1/

The permissions are displayed with the 10-character string at the start of the line: [l][rwx][rwx][rwx]
The first character just tells us the type of the file/directory/link we are working with.
The first triad are the owner permissions: these apply to the owner of the file/directory/etc.
The 2nd triad are the group permissions: these apply to members of the group of the file/directory/etc.
The last triad are the other/else permissions: these apply to users who are neither the owner nor members of the group of the file/directory/etc.

For files:
To read a file: read permission is needed. r--
To write a file: write permission is needed. -w-
To execute a file (as a script, or binary): execute is needed. --x

For directories:
To list the contents of a directory: read and execute are needed. r-x (weird things happen otherwise)
To create/delete files in a directory: write (plus execute) is needed on the directory. -w- Note that deleting is controlled by the directory's permissions, not the file's.

Example: for a file /mnt/user/share/a/b

drwxrwxr-x 1 nobody users 2 Mar 15 11:57 a/
-rw-rw-rw- 1 nobody users 2 Mar 15 11:57 a/b

Other than root, nobody, or members of users, the file b would be impossible to delete, since the write permission on the directory is missing. The file, however, can be overwritten by anybody.

Now, Windows access to the files is over SMB; samba is the app providing the access. SMB has two modes of access to the files:

Public/Guest access - (unRAID default) in this mode, all access is allowed. There are no passwords needed. Files and directories are created as the nobody user. Permissions are typically set to rwxrwxrwx, which grants anybody read and write access.
Private/Secure access - in this mode, users need to be defined and passwords assigned. Files and directories are owned by the user who created them. But when a share is created, unRAID assigns it to nobody with full read, write, execute for all (owner, group, and others), i.e. drwxrwxrwx.

The problem begins when there is a VM or docker creating files. Let's say the VM is using the user backup, and user alice is trying to delete the old backups from her Windows PC. Even if the shares are public, she would hit the error about requiring permissions from backup to delete the files. Why? Because samba will be using the user nobody to delete the files made by the backup user, and typically the file permissions won't allow it. If the shares are private/secure, it can still fail because the alice user is not the same as the backup user, and thus the permission problem exists again. (There are cases where this is not true, but that's a bit outside the scope of the FAQ.)

How do we correct the issue?
The easiest way is to run Tools|New Permissions, which paves over all of the shares and disks so files have rwxrwxrwx permissions and are owned by nobody. But now we don't want that, since our VMs and dockers are, in effect, separate OSes with their own users, which may or may not coincide with the new attributes. So we log in to the terminal (over SSH or console) and run:

root@Tower:~# chmod 777 -Rv /mnt/user/<share1> /mnt/user/<share2> ...

This will cause all the permissions of the affected shares to be set to rwxrwxrwx, which should normally fix the issues. In case you have more complex settings or requirements, feel free to discuss them in the forums, as this requires case-to-case settings that might be applicable to your specific scenario.

Initial stuff, will expand as needed.
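The directory-write rule can be reproduced anywhere; a minimal sketch using /tmp (the paths are illustrative, not unRAID paths):

```shell
# Recreate the FAQ's example layout under /tmp
mkdir -p /tmp/share/a
chmod 775 /tmp/share/a        # drwxrwxr-x: "other" users lack write on the dir
touch /tmp/share/a/b
chmod 666 /tmp/share/a/b      # -rw-rw-rw-: anybody may overwrite the file
stat -c '%A %n' /tmp/share/a /tmp/share/a/b
# A user outside the owner and group can rewrite b but cannot delete it,
# because deletion is governed by the directory's write bit, not the file's.
```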
-
Well, you can't use the 0th IP, and using two IPs on the same subnet under unRAID can cause weird things... like packets going in and out of the wrong ports. So ideally, your unRAID 10GbE ports are bonded (probably LACP or balance-alb) and assigned a single IP. The Windows clients then use this single IP, and it should be able to utilize the 20Gbps serving the two clients... If bonding doesn't quite work, you can always fall back to a different subnet per IP, kind of like:
unRAID port 1: 192.168.0.1 / 255.255.255.128
unRAID port 2: 192.168.0.129 / 255.255.255.128
win 1: 192.168.0.2 / 255.255.255.128
win 2: 192.168.0.130 / 255.255.255.128
(255.255.255.128 is the /25 mask; 192.168.0.128 itself is the network address of the upper half, so hosts there start at .129.)
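A quick sanity check of the split (a sketch using the example's addresses): the /25 mask's last octet is 128, so ANDing it with a host's last octet tells you which half the host lands in.

```shell
# Which half of 192.168.0.x does each host fall in under a /25 mask?
for last_octet in 1 2 129 130; do
  echo "192.168.0.$last_octet -> subnet 192.168.0.$(( last_octet & 128 ))/25"
done
# .1 and .2 land in 192.168.0.0/25; .129 and .130 land in 192.168.0.128/25
```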
-
How Do I Pass Through A USB Device To A Docker Container?
ken-ji replied to Living Legend's topic in Docker Engine
New udev rules can be loaded at runtime with udevadm trigger, so fixing your device to a specific device name could work.
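For example, a sketch of such a rule (the vendor/product IDs, the rule filename, and the zwave symlink name are hypothetical; substitute the values lsusb reports for your device):

```shell
# Pin a USB serial device to a stable /dev name, then reload rules without rebooting
cat > /etc/udev/rules.d/99-usb-name.rules <<'EOF'
SUBSYSTEM=="tty", ATTRS{idVendor}=="0658", ATTRS{idProduct}=="0200", SYMLINK+="zwave"
EOF
udevadm control --reload-rules
udevadm trigger
# The device now also appears as /dev/zwave, which can be mapped into the container
```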