
Kamikazejs

Members
  • Posts: 30
  • Joined
  • Last visited


Kamikazejs's Achievements

Noob (1/14)

Reputation: 3

  1. I know what you're trying to do, but I don't know if you *CAN* do it without manually editing smb-shares.conf. One workaround would be to change the user permissions on the parent directory so that user A has access but user B doesn't, then give both users access to the /homemovies subdirectory. I usually do this by adding a unix group to /etc/group, making certain users members of different groups, and managing ownership and access using the good old User/Group/Other permissions. Unraid doesn't preserve groups, which is a bummer, but I use the CA User Scripts plugin (from Community Applications) and have it run a custom script that I create (the button at the bottom there explains this really well). *THIS ONLY NEEDS TO BE RUN ONCE at Array Startup* (a minimal sketch of such a script is after this list). Perhaps Unraid isn't "built" for this, but it's an incredibly basic function/feature of any Samba server, one which I've used since 1996. And now I feel old, thank you. =-D BTW, the official response will probably be to just split that out into two different shares with different user access. That's easy to do, but it doesn't work for most of what I want (I already have too many shares).
  2. Respectfully, this has been my BIGGEST annoyance with Unraid for the last 4 years I've used it, along with proper NFS export support to control permissions by hostname. My workaround back then was a dumb script ("samurai" below is the hostname; illustrative examples of the append files it uses are after this list):

     root@samurai:/mnt/user/scripts# cat fixunraid.sh
     showmount -e
     echo "Fixing hosts, groups, and NFS exports"
     cat hosts.append >> /etc/hosts
     cat group.append >> /etc/group
     cp /etc/exports /etc/exports.BAK
     cp exports.samurai /etc/exports
     sleep 10
     exportfs -a
     sleep 5
     showmount -e

     Once this runs and any negative caches on clients time out, all of the security works great: users in group A can access folders owned by that group, and users in group B can access folders owned by another group. (I previously also appended passwd entries, but sometime in the past few years Unraid started remembering changes to /etc/passwd, such as the primary GID.) Perhaps this sounds "too complicated" to manage automatically, but this is the simplest, most BASIC UNIX-level security, and it has been implemented through simple GUIs (that work across local, NFS, and Samba access) hundreds of times. Just for HOME use, this lets me keep my kids from accessing or deleting files they shouldn't. Implementing groups is essential for any NAS, and I'm shocked at some of the suggestions in this thread to create tons of extra shares or duplicate files.
  3. I have an uptime of over 7 days now on 6.6.2, which is the longest uptime I've had on Unraid in months. This is with NFS in use. Interestingly, I see a lot of rpcbind messages spamming my syslog with NFS lock requests; I'm assuming this is part of the enhanced information that was enabled. I'm a fan of this software and of Linux in general, but I feel like these problems could have been avoided if the new kernel had been tested fully for at least a few weeks, with multiple usage scenarios (including NFS load testing), on the test branch before it was moved over to the stable branch. Stable shouldn't be beta-testing brand-new kernel versions with unknown bugs. I would, and have, criticized Red Hat, SUSE, and Microsoft for doing the same at times.
  4. Just crashed again on me with the same results. I'm updating to 6.6.2 and will upload diags when it happens again. FYI, what seems like a surefire way of killing the NFS server, and then shfs (making everything remount on /mnt/user0), is mounting one of the NFS exports as /home for a Linux VM, especially with lots of little NFS I/O calls and NFS locks from Firefox, etc. (an example of that kind of /home mount is after this list). It gradually slows down more and more, until finally nfsd on Unraid just croaks.
  5. I'm having the same issues. I'm exporting most of my shares via CIFS and NFS. I have an Opensuse 42.3 VM mounting most of the exports, including for /home. (Lots of the following in syslog)

     Oct 10 20:53:23 samurai rpcbind[3666]: connect from 192.168.1.19 to getport/addr(nlockmgr)
     Oct 10 20:54:24 samurai rpcbind[4265]: connect from 192.168.1.19 to getport/addr(nlockmgr)
     Oct 10 20:55:29 samurai rpcbind[4894]: connect from 192.168.1.19 to getport/addr(nlockmgr)
     Oct 10 20:56:30 samurai rpcbind[5473]: connect from 192.168.1.19 to getport/addr(nlockmgr)
     Oct 10 20:57:32 samurai rpcbind[6093]: connect from 192.168.1.19 to getport/addr(nlockmgr)
     Oct 10 20:58:35 samurai rpcbind[6816]: connect from 192.168.1.19 to getport/addr(nlockmgr)
     Oct 10 20:59:37 samurai rpcbind[7511]: connect from 192.168.1.19 to getport/addr(nlockmgr)
     Oct 10 21:00:49 samurai rpcbind[8309]: connect from 192.168.1.19 to getport/addr(nlockmgr)
     Oct 10 21:01:52 samurai rpcbind[8975]: connect from 192.168.1.19 to getport/addr(nlockmgr)

     Followed by:

     Oct 10 21:01:54 samurai kernel: ------------[ cut here ]------------
     Oct 10 21:01:54 samurai kernel: nfsd: non-standard errno: -103
     Oct 10 21:01:54 samurai kernel: WARNING: CPU: 4 PID: 4716 at fs/nfsd/nfsproc.c:817 nfserrno+0x44/0x4a [nfsd]
     Oct 10 21:01:54 samurai kernel: Modules linked in: xt_nat veth xt_CHECKSUM iptable_mangle ipt_REJECT ebtable_filter ebtables ip6table_filter ip6_tables vhost_net tun vhost tap ipt_MASQUERADE iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat nfsd lockd grace sunrpc md_mod bonding sr_mod cdrom x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel pcbc aesni_intel mpt3sas aes_x86_64 crypto_simd cryptd glue_helper raid_class scsi_transport_sas intel_cstate intel_uncore ahci libahci i2c_i801 intel_rapl_perf i2c_core e1000e video mxm_wmi wmi backlight pcc_cpufreq thermal acpi_pad button fan
     Oct 10 21:01:54 samurai kernel: CPU: 4 PID: 4716 Comm: nfsd Not tainted 4.18.10-unRAID #2
     Oct 10 21:01:54 samurai kernel: Hardware name: Gigabyte Technology Co., Ltd. Z97X-UD3H/Z97X-UD3H-CF, BIOS F10b 03/03/2016
     Oct 10 21:01:54 samurai kernel: RIP: 0010:nfserrno+0x44/0x4a [nfsd]
     Oct 10 21:01:54 samurai kernel: Code: c0 48 83 f8 22 75 e2 80 3d b3 06 01 00 00 bb 00 00 00 05 75 17 89 fe 48 c7 c7 3b 7a 25 a0 c6 05 9c 06 01 00 01 e8 3b 0d e0 e0 <0f> 0b 89 d8 5b c3 48 83 ec 18 31 c9 ba ff 07 00 00 65 48 8b 04 25
     Oct 10 21:01:54 samurai kernel: RSP: 0018:ffffc90001d43dc0 EFLAGS: 00010282
     Oct 10 21:01:54 samurai kernel: RAX: 0000000000000000 RBX: 0000000005000000 RCX: 0000000000000007
     Oct 10 21:01:54 samurai kernel: RDX: 0000000000000000 RSI: ffff88042fb16470 RDI: ffff88042fb16470
     Oct 10 21:01:54 samurai kernel: RBP: ffffc90001d43e10 R08: 0000000000000003 R09: ffffffff8220db00
     Oct 10 21:01:54 samurai kernel: R10: 0000000000000458 R11: 00000000000160c8 R12: ffff88041a9b2408
     Oct 10 21:01:54 samurai kernel: R13: 0000000000008000 R14: ffff88041a9b2568 R15: 0000000000000008
     Oct 10 21:01:54 samurai kernel: FS: 0000000000000000(0000) GS:ffff88042fb00000(0000) knlGS:0000000000000000
     Oct 10 21:01:54 samurai kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Oct 10 21:01:54 samurai kernel: CR2: 000014bf59f56020 CR3: 0000000001e0a005 CR4: 00000000001626e0
     Oct 10 21:01:54 samurai kernel: Call Trace:
     Oct 10 21:01:54 samurai kernel: nfsd_open+0x15e/0x17c [nfsd]
     Oct 10 21:01:54 samurai kernel: nfsd_read+0x45/0xec [nfsd]
     Oct 10 21:01:54 samurai kernel: nfsd3_proc_read+0x95/0xda [nfsd]
     Oct 10 21:01:54 samurai kernel: nfsd_dispatch+0xb4/0x169 [nfsd]
     Oct 10 21:01:54 samurai kernel: svc_process+0x4b5/0x666 [sunrpc]
     Oct 10 21:01:54 samurai kernel: ? nfsd_destroy+0x48/0x48 [nfsd]
     Oct 10 21:01:54 samurai kernel: nfsd+0xeb/0x142 [nfsd]
     Oct 10 21:01:54 samurai kernel: kthread+0x10b/0x113
     Oct 10 21:01:54 samurai kernel: ? kthread_flush_work_fn+0x9/0x9
     Oct 10 21:01:54 samurai kernel: ret_from_fork+0x35/0x40
     Oct 10 21:01:54 samurai kernel: ---[ end trace 82cc1d618070c378 ]---
     Oct 10 21:02:06 samurai rpc.mountd[4719]: Cannot export /mnt/user/Projects, possibly unsupported filesystem or fsid= required

     I've attached diags. Everything is remounted on /mnt/user0, but of course my NFS exports and CIFS shares aren't working. I'm about to reboot the server. What's the consensus solution? Downgrade to 6.5.3 or what? I upgraded to 6.6.1 because I was getting whole-server lockups & spontaneous reboots every couple of days all of a sudden, whereas 6.6.1 *seemed* more stable.

     unraid.diags.Oct11_2018.7z
     samurai-diagnostics-20181011-1547.zip
  6. I guess I've read in too many places about potential data loss with onboard SATA controllers, and I've had issues myself on other hardware where onboard SATA ports wreaked havoc and caused massive data corruption (which, thankfully, btrfs in raid1 was able to correct), so at this point I'd just rather trust a more "enterprise"-grade device. The SAS breakout cables are really nice too... that sure cleans things up in that box. On a side note, I'm a big fan of btrfs in raid1. I would really, really like to see btrfs raid1 for metadata & data made available in Unraid for volumes with critical data in the future, with the parity as an additional layer of protection on top of that (a quick example of what I mean on plain Linux is after this list). That might sound paranoid to some, but I'm a data storage guy for a living.. =-)
  7. Yeah, you're probably right that there's a little more speed to be gained but this is Good Enough for now until I upgrade that server at the end of the year. The drives are still plugged into the H310 HBA. I don't trust the onboard SATA controllers for drives this large and new.
  8. Unfortunately the IOMMU groupings on this motherboard are really weird. I literally tried every combination and setting (spending an entire Saturday), and this bottom slot was the only place that I could keep it separate from an Nvidia GPU in one of the other PCIE slots and be able to virtualize that GPU to a VM for my wife to use. So right now I have that GPU in the top x16 slot and the H310 in the bottom PCIEX4 slot, which seems to work. PCIE ACS override did NOT work - it would split out the IOMMU groups more logically but that Nvidia GPU was never recognized for passthrough.
  9. ....AAAAAND that was in fact the problem. I dug around and found a BIOS option on my Gigabyte Z97X-UD3H mobo to change the PCIe bandwidth of the bottom slot, where the SAS HBA is plugged in, from "Auto" to "x4", and I'm now seeing 725MBps reads off my data drives and 145MBps writes onto my parity drive. It was useful to write that out, though; I was pretty bummed about the overall disk performance until now. It turns out that a single PCIe 2.0 lane doesn't support 6 drives very well! One PCIe 2.0 lane tops out around 500MBps, and splitting that across six drives leaves each one far below what it can stream.
  10. Hello, I'm having a similar issue. Parity rebuild always runs at about 40-41MB/s. The parity drive is a brand new WD Red 8TB; the other drives are 8TB WD whites and a pair of Seagate 4TBs, all plugged into a Dell Perc H310 SAS HBA reflashed with LSI 9211-8i firmware, using SAS-to-SATA breakout cables. I'm wondering if the issue is that my SAS HBA is only initializing with one PCIe lane instead of 4? The total bandwidth I see on it (~205MB/s) might seem to indicate that. I have no Docker apps or VMs running right now, and the only plugins are Community Applications, Nerd Tools, Unassigned Devices, and User Scripts. Not currently transferring any files, either. CPUs are basically doing nothing. I see these same speeds consistently for days at a time during the parity rebuild. I would expect the parity rebuild to operate north of 100MB/s on this hardware.

      05:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
        Subsystem: Dell 6Gbps SAS HBA Adapter
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 16
        Region 0: I/O ports at d000
        Region 1: Memory at f6940000 (64-bit, non-prefetchable)
        Region 3: Memory at f6900000 (64-bit, non-prefetchable)
        Expansion ROM at f6800000 [disabled]
        Capabilities: [50] Power Management version 3
          Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
          Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [68] Express (v2) Endpoint, MSI 00
          DevCap: MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
          DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
            RlxdOrd- ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
            MaxPayload 128 bytes, MaxReadReq 512 bytes
          DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-
          LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s <64ns
            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
          LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
            ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
          LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
          DevCap2: Completion Timeout: Range BC, TimeoutDis+, LTR-, OBFF Not Supported
            AtomicOpsCap: 32bit- 64bit- 128bitCAS-
          DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
            AtomicOpsCtl: ReqEn-
          LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis- Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
            Compliance De-emphasis: -6dB
          LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
            EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
        Capabilities: [d0] Vital Product Data
          pcilib: sysfs_read_vpd: read failed: Input/output error
          Not readable
        Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
          Address: 0000000000000000  Data: 0000
        Capabilities: [c0] MSI-X: Enable+ Count=15 Masked-
          Vector table: BAR=1 offset=0000e000
          PBA: BAR=1 offset=0000f800
        Capabilities: [100 v1] Advanced Error Reporting
          UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
          UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
          UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
          CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
          CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
          AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
            MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
          HeaderLog: 00000000 00000000 00000000 00000000
        Capabilities: [138 v1] Power Budgeting <?>
        Kernel driver in use: mpt3sas
        Kernel modules: mpt3sas

      samurai-diagnostics-20180303-1225.zip
  11. I would try other USB ports on the motherboard and other USB drives. I eventually had good luck with this one: https://www.amazon.com/Samsung-Flash-Drive-MUF-16BA-AM/dp/B013CCPFAW I also used this cable: https://www.amazon.com/gp/product/B000IV6S9S/ref=oh_aui_search_detailpage?ie=UTF8&psc=1 Also see limetech's suggestions earlier in this thread about trying different USB boot modes (Auto, USB-Floppy, USB-HDD, UEFI or legacy, etc.).
  12. Hmmm, sorry, I may have been thinking of my switch. I'm using a cheap 16-port Zyxel managed gigabit switch that I can set up LAGs, LACP, VLANs, etc. on. Although you can force the mode to gigabit in Linux (an ethtool example is after this list), I don't see an option for it in the Unraid GUI at the moment. Sorry, I would have responded sooner, but my Perc H310 HBA suddenly went belly-up and I spent a panicked hour tracking that problem down. I ended up having to move it to a different slot AND pull out my old ASMedia PCI (not PCIe) 4-port SATA card. I thought for sure that my crossflashing it to the LSI SAS2008 firmware had broken it.
  13. I'd try forcing it to 1000Mbit and see if it works. Same here! Interesting. My old motherboard doesn't have any USB3 ports on it. Interesting that we were both having problems on brand new flash drives though - what were you using?
  14. Indeed, this was the case. Using the Samsung USB drive I linked to, I'm able to see /dev/sda1 mounted as /boot and everything comes up fine now.
  15. Send me an address and I'll have one of those SanDisk Cruzer Fits that I linked to shipped to you: kamikazejs AT gmail dot com. I'd much rather use that for a server boot drive.
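
A minimal sketch of the group-based workaround from post 1, written as a User Scripts script set to run at array start. This is not the exact script from that post; group names, GIDs, usernames, and paths are placeholders, so substitute your own.

     #!/bin/bash
     # Sketch only: recreate groups and permissions at every array start,
     # because Unraid does not persist /etc/group across reboots.
     groupadd -g 1101 movies     2>/dev/null || true   # group for user A only
     groupadd -g 1102 homemovies 2>/dev/null || true   # group for users A and B
     usermod -a -G movies,homemovies alice
     usermod -a -G homemovies bob

     # Parent share folder: group "movies" can read and traverse it; everyone else
     # gets execute-only (no read), so bob can pass through to the subfolder below
     # but cannot list or open anything else (keep files inside group-readable only).
     chgrp movies /mnt/user/Movies
     chmod 0751   /mnt/user/Movies

     # Subfolder both users share; the setgid bit keeps new files in the homemovies group.
     chgrp -R homemovies /mnt/user/Movies/homemovies
     chmod 2770          /mnt/user/Movies/homemovies
     chmod -R g+rw       /mnt/user/Movies/homemovies

Note that the Unraid share's own SMB security settings still apply on top of these Unix permissions, so both users also need to be allowed on the share in the GUI.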
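A rough illustration of what the three append files referenced by fixunraid.sh in post 2 could contain. These are not the actual files; the hostname, group, and export below are made-up examples using the standard /etc/hosts, /etc/group, and /etc/exports formats.

     # hosts.append - one "IP hostname" pair per line, appended to /etc/hosts
     192.168.1.19   workstation1

     # group.append - standard /etc/group lines: name:x:GID:member,member
     kids:x:1200:child1,child2

     # exports.samurai - standard /etc/exports lines, restricting access by hostname;
     # FUSE-backed /mnt/user paths generally need an explicit fsid= to be exportable
     /mnt/user/Projects   workstation1(rw,sync,no_subtree_check,fsid=101)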
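The /home-over-NFS setup described in post 4, seen from the client VM's side, is just an ordinary fstab entry. The server name and export path here are examples, not taken from the attached diagnostics.

     # /etc/fstab on the Linux VM - mount an Unraid export as /home (NFSv3, hard mount)
     samurai:/mnt/user/home   /home   nfs   rw,hard,vers=3   0 0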
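For what post 6 is asking for, outside of Unraid this is simply btrfs's raid1 profile for both data and metadata. Device names and the mount point are placeholders.

     # create a two-device pool with mirrored data and metadata
     mkfs.btrfs -d raid1 -m raid1 /dev/sdX /dev/sdY
     mount /dev/sdX /mnt/pool
     # a periodic scrub verifies checksums and repairs bad blocks from the good copy
     btrfs scrub start /mnt/pool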
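Forcing a NIC to gigabit from the shell, as mentioned in posts 12 and 13. The interface name "eth0" is an assumption; check your actual interface first.

     ethtool eth0                                          # show current speed, duplex, and autoneg state
     ethtool -s eth0 speed 1000 duplex full autoneg off    # hard-force 1000/full
     # or keep autonegotiation but advertise only gigabit full duplex:
     ethtool -s eth0 advertise 0x020 autoneg on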