cyruspy

Members
  • Posts: 92

Everything posted by cyruspy

  1. Hello! I need to add a VLAN to my NAS and handle all the networking on that VLAN. The issue is that I cannot disable addressing on the bridge (the only available options are "Static" or "Automatic"; there is no "None" as there is on the VLAN). If I leave the bridge on Automatic, I get a link-local address with a default route that overrides what comes in by DHCP on the VLAN, so I cannot route outside the LAN. I set it to static via the CLI, which fixes routing, but the DNS client configuration is not applied (the VLAN has DHCP configuration enabled); I'm working around that part as sketched below. Any hints?
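     As a stopgap for the DNS part, this is roughly what I run after setting the bridge static (the nameserver address is a placeholder; use whatever the VLAN's DHCP server would normally hand out):

         # Point the resolver at the VLAN's DNS server by hand, since the
         # DHCP-provided DNS settings aren't landing in /etc/resolv.conf.
         # 10.2.0.1 is a made-up address; substitute your real DNS server.
         echo "nameserver 10.2.0.1" >> /etc/resolv.conf

     This doesn't survive a reboot, which is why I'd still prefer a proper "None" option on the bridge.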
  2. Hello, I tried that; the Start button turned grey when I unassigned the unused (missing?) disk. Then I destroyed everything and started from scratch, and now it seems to be working properly. Thanks!
  3. Hi! I just built a new unRAID machine with v6.8.3, and after adding two cache devices I don't see a BTRFS mirror. Is there something I'm missing? (I've been inspecting the pool as shown below.) nas01-diagnostics-20201201-2051.zip
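     For reference, this is what I'm using to inspect the pool, plus what I understand would convert it to RAID1 if it isn't already (the balance command is my assumption of the fix, not something I've confirmed for unRAID cache pools; /mnt/cache is where my pool is mounted):

         # Show the devices in the pool and how data/metadata are allocated
         btrfs filesystem show /mnt/cache
         btrfs filesystem df /mnt/cache

         # Convert data and metadata profiles to RAID1 across both devices
         btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache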
  4. Well, I found an old backup of the initial setup. I used the initial config with fewer disks to figure out which drives are for parity, added the 4 disks that were included afterwards, and marked parity as OK. The filesystem mounted, and I'm currently cleaning out encrypted files.
  5. Well, I got owned (somehow; I still don't know how). As I neglected to disable the default pendrive share, all the files related to the unRAID installation are encrypted. Is there any known procedure to figure out the disk roles in the disk set (volume configuration) in order to recover? I recall that autodiscovery by reading from the disks was not a feature.
  6. Hello! Does anybody have experience with the P222 in JBOD mode? I've read some references to that being supported in newer firmware versions. I need to deploy a remote MicroServer Gen8 and it's the only supported add-on card (using other HBAs forces the fans to full speed).
     Option 1 (if JBOD is possible):
     - plug the disk cage into the P222 (might enable hotplug?)
     - mod the MicroServer to connect 2x SSD disks to the onboard ports with SAS-to-SATA breakout cables
     Option 2 (if JBOD is not supported):
     - keep the cage connected to the motherboard
     - mod the MicroServer to connect 2x SSD disks to the P222 and set up a RAID1 volume
     In either case, I would use the resulting SSD setup for cache. The second option would be the uglier one, since unRAID would think the pool is unprotected, but that looks cosmetic. If hotplug is possible with the P222, that would be preferred, though. (A sketch of how I'd check for JBOD/HBA support follows.)
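     In case it helps anyone checking the same thing: I understand HPE's Smart Storage Administrator CLI can report and toggle HBA (pass-through) mode on controllers whose firmware supports it; whether the P222 actually does is exactly what I'm unsure about, and the slot number here is made up:

         # Check whether the controller exposes an HBA mode setting
         ssacli controller slot=1 show detail

         # If the firmware supports it, switch the controller to HBA mode
         ssacli controller slot=1 modify hbamode=on forced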
  7. Has anybody seen high memory usage? I just had to kill the container because it was consuming 28 GB of memory. 4 cameras, copy only (no encoding). For now I've capped the container as shown below.
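     As a stopgap I'm limiting the container's memory so a leak can't take down the host (the 4g figure is arbitrary; on unRAID this would go in the template's Extra Parameters, shown here as a plain docker run with a placeholder image):

         # Cap the container at 4 GB; Docker kills it instead of starving the host
         docker run -d --memory=4g --memory-swap=4g <image>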
  8. user@nas03:/mnt/user/downloads/done$ ls -l /mnt/cache/appdata/emby/
     total 0
     drwxr-xr-x 1 user users  566 Aug 27 02:00 cache/
     drwxr-xr-x 1 user users  130 May 25 15:37 config/
     drwxr-xr-x 1 user users  670 Aug 30 03:17 data/
     drwxr-xr-x 1 user users 2716 Aug 30 03:17 logs/
     drwxr-xr-x 1 user users   24 May 17 12:13 metadata/
     drwxr-xr-x 1 user users  676 May 25 15:39 plugins/
     drwxr-xr-x 1 user users   54 Jun 16 00:00 plugins\Statistics/
     drwxr-xr-x 1 user users   14 May 17 00:38 root/
     drwxr-xr-x 1 user users    0 Aug 30 03:17 transcoding-temp/
  9. It takes around 25 seconds from application launch to the login stage. The data is on the SSD cache, and any subsequent attempts take the same time. Once logged in, it's quite snappy.
  10. I've moved from an RPM install on openSUSE to this container on unRAID. Everything works fine from a Shield TV client once it loads, but the initial loading of the server connection is quite slow (tens of seconds). I don't recall having this initial delay before. Does it sound familiar? Is there anything I can tweak to fix it?
  11. Installing perl allowed me to install the plugin. May I suggest noting that it's a requirement in the main information section of the "App"? (I found a reference in the changelog, but to be honest, I didn't read that far before attempting installation.) Thanks!
  12. It's not part of the logs; it only appears on the installation screen:
     plugin: installing: https://raw.githubusercontent.com/kubedzero/unraid-snmp/master/snmp.plg
     plugin: downloading https://raw.githubusercontent.com/kubedzero/unraid-snmp/master/snmp.plg
     plugin: downloading: https://raw.githubusercontent.com/kubedzero/unraid-snmp/master/snmp.plg ... done
     plugin: run failed: /bin/bash retval: 1
     Updating Support Links
     Finished Installing. If the DONE button did not appear, then you will need to click the red X in the top right corner
  13. Well, after the "exit 1" message, the plugin is not listed in the "Installed Plugins" section.
  14. Hello, what would be the cleanest way to handle bridging? I would like to use two physical network cards: one 10GbE card as an uplink to a switch that will carry several VLANs, and a 6-port gigabit card to connect other machines, effectively acting as a switch. The use case: the unRAID box is in a different room than the physical switch, and in that same "remote" room I have other machines that need to reach the uplink switch through the NAS. Some of those gigabit ports will be in access mode; others will be trunks carrying several VLANs too. (A sketch of what I have in mind follows.)
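     To make the question concrete, this is roughly the untagged part done by hand with iproute2 (eth0/eth1/eth2 are placeholder names for the 10GbE uplink and two of the gigabit ports; the access/trunk VLAN split would additionally need bridge VLAN filtering):

         # Bridge the 10GbE uplink together with the gigabit ports so the
         # NAS switches traffic between them in software
         ip link add name br0 type bridge
         ip link set eth0 master br0   # 10GbE uplink toward the switch
         ip link set eth1 master br0   # gigabit port, access mode
         ip link set eth2 master br0   # gigabit port, access mode
         ip link set br0 up

         # Needed for per-port access/trunk VLAN handling on the bridge
         ip link set br0 type bridge vlan_filtering 1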
  15. Is this line expected?: Jun 6 16:53:49 nas03 root: plugin: skipping: /usr/local/emhttp/plugins/snmp/README.md already exists
  16. Well, I can confirm the UID keeps going back to the default one; I have to fix it from time to time... I just changed it in /boot/config/passwd, hope that fixes it.
  17. Installed a couple of other plugins without issues. From the logs:
     Jun 6 16:53:10 nas03 rpcbind[17554]: connect from 10.2.0.223 to getport/addr(mountd)
     Jun 6 16:53:15 nas03 emhttpd: cmd: /usr/local/emhttp/plugins/community.applications/scripts/pluginInstall.sh install https://raw.githubusercontent.com/kubedzero/unraid-snmp/master/snmp.plg
     Jun 6 16:53:16 nas03 root: plugin: creating: /usr/local/emhttp/plugins/snmp/README.md - from INLINE content
     Jun 6 16:53:16 nas03 root: plugin: running: anonymous
     Jun 6 16:53:20 nas03 rpcbind[17623]: connect from 10.2.0.223 to getport/addr(mountd)
     Jun 6 16:53:30 nas03 rpcbind[17705]: connect from 10.2.0.223 to getport/addr(mountd)
     Jun 6 16:53:40 nas03 rpcbind[17788]: connect from 10.2.0.223 to getport/addr(mountd)
     Jun 6 16:53:49 nas03 emhttpd: cmd: /usr/local/emhttp/plugins/community.applications/scripts/pluginInstall.sh install https://raw.githubusercontent.com/kubedzero/unraid-snmp/master/snmp.plg
     Jun 6 16:53:49 nas03 root: plugin: skipping: /usr/local/emhttp/plugins/snmp/README.md already exists
     Jun 6 16:53:49 nas03 root: plugin: running: anonymous
     Jun 6 16:53:51 nas03 rpcbind[17886]: connect from 10.2.0.223 to getport/addr(mountd)
     Jun 6 16:54:00 nas03 rpcbind[17924]: connect from 10.2.0.223 to getport/addr(mountd)
     Jun 6 16:54:10 nas03 rpcbind[18007]: connect from 10.2.0.223 to getport/addr(mountd)
     Jun 6 16:54:20 nas03 rpcbind[18043]: connect from 10.2.0.223 to getport/addr(mountd)
     Jun 6 16:54:30 nas03 rpcbind[18080]: connect from 10.2.0.223 to getport/addr(mountd)
  18. Thanks. I'll look around. Starting with "linuxserver" for the first 3...
  19. NFS is my main access protocol. I restarted the VM and it's working again. A remote Nextcloud server was accessing a share when it crashed. I've disabled the cache for that share, at least for the moment (maybe a mover thing?).
  20. So... I should look for the containers with the most downloads and the fewest bug reports?
  21. Hello! Is there any caveat in changing the UID of a user (editing /etc/passwd + chown of its files)? I need to make it match a UID on an NFS client. Can that break something at the web administration layer, or violate any convention (UID must be higher than ###)? Concretely, I'd do something like the sketch below. Regards.
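     For the record, this is what I have in mind (the UIDs and username are made-up examples; per my other post, on unRAID the change apparently also has to land in /boot/config/passwd to survive a reboot):

         # Change the user's UID from 1000 to 1001 to match the NFS client
         usermod -u 1001 john

         # Re-own anything still owned by the old numeric UID
         find /mnt/user -xdev -uid 1000 -exec chown -h john {} +

         # Persist the change across reboots (unRAID rebuilds /etc/passwd on boot)
         cp /etc/passwd /boot/config/passwd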
  22. Hello! I couldn't find a better section to post this question. I'm moving my home NAS from a DIY Linux setup to unRAID, and I'm doing the mental exercise of shutting down VMs and moving to unRAID-managed containers. For Emby there seems to be an official container, but for Nextcloud there are at least two, and Home Assistant has several around. What are the criteria for choosing one over another for the same application?
  23. Hello, I'm moving data from an old Linux NAS to unRAID 6.8.3, and on the third day I got this error:
     [Sat May 16 19:48:37 2020] shfs[12108]: segfault at 10 ip 000014bb1a525624 sp 000014badb5bac50 error 4 in libfuse3.so.3.9.0[14bb1a521000+18000]
     [Sat May 16 19:48:37 2020] Code: 7d 68 4c 89 ff e8 ec c7 ff ff 8b 85 00 01 00 00 85 c0 0f 85 4e 01 00 00 4c 89 ee 48 89 ef e8 83 d7 ff ff 4c 89 ff 48 8b 40 20 <4c> 8b 68 10 e8 43 c1 ff ff 45 31 c0 48 8d 4c 24 18 31 d2 4c 89 ee
     [Sat May 16 19:48:37 2020] ------------[ cut here ]------------
     [Sat May 16 19:48:37 2020] nfsd: non-standard errno: -103
     [Sat May 16 19:48:37 2020] WARNING: CPU: 7 PID: 29651 at fs/nfsd/nfsproc.c:820 nfserrno+0x47/0x4f [nfsd]
     [Sat May 16 19:48:37 2020] Modules linked in: nfsd lockd grace sunrpc xt_CHECKSUM ipt_REJECT ip6table_mangle ip6table_nat nf_nat_ipv6 iptable_mangle ip6table_filter ip6_tables vhost_net tun vhost tap ipt_MASQUERADE iptable_filter iptable_nat nf_nat_ipv4 nf_nat ip_tables xfs md_mod bonding pcc_cpufreq virtio_net net_failover i2c_i801 i2c_core mpt3sas ahci failover libahci intel_agp raid_class intel_gtt scsi_transport_sas virtio_scsi virtio_console agpgart button [last unloaded: md_mod]
     [Sat May 16 19:48:37 2020] CPU: 7 PID: 29651 Comm: nfsd Not tainted 4.19.107-Unraid #1
     [Sat May 16 19:48:37 2020] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
     [Sat May 16 19:48:37 2020] RIP: 0010:nfserrno+0x47/0x4f [nfsd]
     [Sat May 16 19:48:37 2020] Code: ff c0 48 83 f8 22 75 e1 80 3d 9a 06 01 00 00 41 bc 00 00 00 05 75 15 48 c7 c7 e3 c9 25 a0 c6 05 84 06 01 00 01 e8 eb f1 df e0 <0f> 0b 44 89 e0 41 5c c3 48 83 ec 18 31 c9 ba ff 07 00 00 65 48 8b
     [Sat May 16 19:48:37 2020] RSP: 0018:ffffc90000c73d20 EFLAGS: 00010282
     [Sat May 16 19:48:37 2020] RAX: 0000000000000000 RBX: ffff888176594c08 RCX: 0000000000000007
     [Sat May 16 19:48:37 2020] RDX: 00000000000004bd RSI: 0000000000000002 RDI: ffff88817bbd64f0
     [Sat May 16 19:48:37 2020] RBP: ffffc90000c73de0 R08: 0000000000000003 R09: 0000000000016700
     [Sat May 16 19:48:37 2020] R10: 0000000000000000 R11: 0000000000000040 R12: 0000000005000000
     [Sat May 16 19:48:37 2020] R13: 0000000000000011 R14: ffff888176594c08 R15: ffff88800b1a8600
     [Sat May 16 19:48:37 2020] FS: 0000000000000000(0000) GS:ffff88817bbc0000(0000) knlGS:0000000000000000
     [Sat May 16 19:48:37 2020] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     [Sat May 16 19:48:37 2020] CR2: 0000000000000010 CR3: 0000000176728000 CR4: 00000000000006e0
     [Sat May 16 19:48:37 2020] Call Trace:
     [Sat May 16 19:48:37 2020] fill_pre_wcc+0x6b/0x155 [nfsd]
     [Sat May 16 19:48:37 2020] nfsd_create+0xd8/0x182 [nfsd]
     [Sat May 16 19:48:37 2020] nfsd3_proc_mkdir+0x9b/0xdf [nfsd]
     [Sat May 16 19:48:37 2020] nfsd_dispatch+0xb2/0x163 [nfsd]
     [Sat May 16 19:48:37 2020] svc_process+0x4fd/0x6b7 [sunrpc]
     [Sat May 16 19:48:37 2020] nfsd+0xea/0x141 [nfsd]
     [Sat May 16 19:48:37 2020] ? nfsd_destroy+0x48/0x48 [nfsd]
     [Sat May 16 19:48:37 2020] kthread+0x10c/0x114
     [Sat May 16 19:48:37 2020] ? kthread_park+0x89/0x89
     [Sat May 16 19:48:37 2020] ret_from_fork+0x35/0x40
     [Sat May 16 19:48:37 2020] ---[ end trace 84f742d85a5c98c6 ]---
     Clients cannot access the FS anymore. I can probably stop/start the disks, but I'm concerned there's something fishy hidden; any hints? The disks don't report any errors at this point in time (no apparent physical failure): nas03-diagnostics-20200516-2013.zip
  24. I'm having issues installing the plugin on 6.8.3; all I get is:
     plugin: run failed: /bin/bash retval: 1