Frank76

Members
  • Posts: 9
  • Joined
  • Last visited

  1. It was working before I updated all my dockers in preparation for upgrading to 6.7; now I can't get Rocket.Chat working again. My MongoDB seems to be running just fine, but Rocket.Chat gives an error on startup saying MongoError: not master and slaveOk=false. I have found some documentation on how to allow reading from the slave, but it is per connection, and Rocket.Chat creates its own connections, so I can't figure out how to set that option for it to use. I'm getting very close to just installing it on my CentOS VM and being done with it.
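     For reference, the kind of per-connection workaround I found only applies to an interactive mongo shell session, something like the sketch below, which is why it doesn't help Rocket.Chat (it opens its own connections). Host and port here are just the defaults:

         # connect to the MongoDB container (default host/port assumed)
         mongo --host 127.0.0.1 --port 27017
         # inside the shell: allow reads from a non-primary member, for this connection only
         rs.slaveOk()
         db.getCollectionNames()   # reads succeed again, but only in this shell session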
  2. Sure... I see the problem. The echo line from the link I sent is supposed to be run from the command line, and it ends up in the config file like this:

         replication:
           replSetName: "rs01"

     Here's my full mongod.conf:

         # mongod.conf

         # for documentation of all options, see:
         #   http://docs.mongodb.org/manual/reference/configuration-options/

         # Where and how to store data.
         storage:
           dbPath: /data/db
           journal:
             enabled: true
         #  engine:
         #  mmapv1:
         #  wiredTiger:

         # where to write logging data.
         systemLog:
           destination: file
           logAppend: true
           path: /var/log/mongodb/mongod.log

         # network interfaces
         net:
           port: 27017
           bindIp: 127.0.0.1

         # how the process runs
         processManagement:
           timeZoneInfo: /usr/share/zoneinfo

         #security:
         #operationProfiling:
         #replication:
         #sharding:

         ## Enterprise-Only Options:
         #auditLog:
         #snmp:

         replication:
           replSetName: "rs01"
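     To confirm mongod actually picked up this file after restarting the container, a quick check from the mongo shell (just a sketch; the exact error wording differs by version):

         # check replication status after restarting mongod with the new config
         mongo --eval 'rs.status()'
         # before rs.initiate() is run this complains that the set is not yet initialized;
         # after initiating, it should list the rs01 member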
  3. I was able to get it working by doing the following (a rough command-line sketch of the same steps follows this list):
     • Click on the MongoDB docker icon and select Console.
     • cp /etc/mongod.conf* /data/db/ (I renamed it before copying; I think it was called mongod.conf-sample), then exit the MongoDB console.
     • Go to /mnt/user/appdata/mongodb and edit the sample conf, changing dbPath: to /data/db and making the modifications to the conf file noted in https://rocket.chat/docs/installation/manual-installation/mongo-replicas/
     • Rename the file to mongod.conf.
     • Modify the MongoDB docker application: select Advanced View, add "-f /data/db/mongod.conf" to the Post Arguments, and restart the MongoDB docker.
     • Go back into the MongoDB console, run mongo, and run rs.initiate()
     • Modify the Rocket.Chat docker container, adding the variable MONGO_OPLOG_URL: mongodb://<SERVER>:27017/local?replSet=rs01
     • Restart Rocket.Chat and it should be working fine.
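     Roughly what those steps look like from the command line; the paths are from my setup, the sample file name is from memory, and <SERVER> is whatever hostname or IP your MongoDB container answers on:

         # from the MongoDB container console: copy the sample config into the data volume
         # (the sample's exact name may differ; check with: ls /etc/mongod.conf*)
         cp /etc/mongod.conf-sample /data/db/mongod.conf

         # on the Unraid host the same file shows up under /mnt/user/appdata/mongodb;
         # edit it there: set storage.dbPath to /data/db and add
         #   replication:
         #     replSetName: "rs01"

         # add "-f /data/db/mongod.conf" to the container's Post Arguments, restart it,
         # then initiate the replica set from the MongoDB console:
         mongo --eval 'rs.initiate()'

         # finally, point Rocket.Chat's oplog tailing at the replica set via the container variable
         #   MONGO_OPLOG_URL=mongodb://<SERVER>:27017/local?replSet=rs01
         # and restart the rocket.chat container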
  4. I haven't had an issue since updating 2 days 6 hours ago. Looks like the issue has been resolved. Thank you very much for the quick resolution!
  5. Thanks for the link. I thought I was on to something, because once I disabled the block-level backups it looked like it was working. It actually finished a few jobs, but sadly it just died on me again after less than 24 hours. I will downgrade as soon as I can, but I'm not really clear on the procedure: do I extract the zip file and overwrite what is in the /boot/previous directory, then go into the webUI, Tools, Update OS and select the previous version? Thanks!
  6. I just did another test, and it took a whopping 17 minutes before the storage locked up again. I'm using CloudBerry from a VM, with the data source being an NFS-mounted Unraid share and the destination an automounted sshfs filesystem going to a disk at a friend's house. It had been working well for a few months until recently. I'm also wondering if CloudBerry's new block-level backup feature could be compounding the issue. I'll disable block-level backup and see if I can get a successful backup.
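     For context, the two mounts on the backup VM look roughly like this; the hostnames and paths here are placeholders, not my real ones:

         # Unraid share mounted over NFS as the backup source
         mount -t nfs tower:/mnt/user/backups /mnt/source
         # offsite disk at a friend's house mounted over sshfs as the destination
         sshfs friend@example.com:/mnt/offsite-disk /mnt/destination -o reconnect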
  7. Last night I disabled Direct IO and rebooted, and woke up this morning to the same issue. It seems like my backups are too IO-intensive for NFS to handle. I'm getting very close to downgrading, but I have lost the ability to go back to 6.5.x (at least easily). In the meantime I will disable my backups; hopefully that will prevent my server from crashing again.
  8. Sadly it has happened again since updating to 6.6.1.
  9. I came here because I've been having problems getting my backup to complete all day. This is the first time I've had to reboot my server aside from upgrades (and the extended power outage last weekend). I have set fuse_remember to 0, and I will report back if I'm actually able to complete my backup. Here's the trace from my last crash:

     [ 2714.619041] ------------[ cut here ]------------
     [ 2714.619043] nfsd: non-standard errno: -103
     [ 2714.619077] WARNING: CPU: 1 PID: 10676 at fs/nfsd/nfsproc.c:817 nfserrno+0x44/0x4a [nfsd]
     [ 2714.619078] Modules linked in: xt_nat xt_CHECKSUM iptable_mangle ipt_REJECT ebtable_filter ebtables ip6table_filter ip6_tables vhost_net tun vhost tap veth ipt_MASQUERADE iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat xfs nfsd lockd grace sunrpc md_mod i915 i2c_algo_bit iosf_mbi drm_kms_helper drm intel_gtt agpgart syscopyarea sysfillrect sysimgblt fb_sys_fops it87 hwmon_vid bonding e1000e r8169 mii x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel pcbc aesni_intel aes_x86_64 crypto_simd cryptd ahci libahci glue_helper intel_cstate intel_uncore i2c_i801 video intel_rapl_perf i2c_core backlight thermal button acpi_pad fan pcc_cpufreq [last unloaded: e1000e]
     [ 2714.619113] CPU: 1 PID: 10676 Comm: nfsd Not tainted 4.18.8-unRAID #1
     [ 2714.619114] Hardware name: Gigabyte Technology Co., Ltd. Z97-HD3P/Z97-HD3P, BIOS F2 09/17/2014
     [ 2714.619118] RIP: 0010:nfserrno+0x44/0x4a [nfsd]
     [ 2714.619118] Code: c0 48 83 f8 22 75 e2 80 3d b3 06 01 00 00 bb 00 00 00 05 75 17 89 fe 48 c7 c7 3b 6a 42 a0 c6 05 9c 06 01 00 01 e8 8a 1c c3 e0 <0f> 0b 89 d8 5b c3 48 83 ec 18 31 c9 ba ff 07 00 00 65 48 8b 04 25
     [ 2714.619140] RSP: 0018:ffffc90001d53dc0 EFLAGS: 00010282
     [ 2714.619142] RAX: 0000000000000000 RBX: 0000000005000000 RCX: 0000000000000007
     [ 2714.619143] RDX: 0000000000000000 RSI: ffff88041fa56470 RDI: ffff88041fa56470
     [ 2714.619144] RBP: ffffc90001d53e10 R08: 0000000000000003 R09: ffffffff8220a800
     [ 2714.619144] R10: 00000000000003d4 R11: 0000000000012ddc R12: ffff88040908a008
     [ 2714.619145] R13: 000000008de30000 R14: ffff88040908a168 R15: 0000000000000002
     [ 2714.619146] FS:  0000000000000000(0000) GS:ffff88041fa40000(0000) knlGS:0000000000000000
     [ 2714.619147] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     [ 2714.619148] CR2: 000000000094e690 CR3: 0000000001e0a006 CR4: 00000000001626e0
     [ 2714.619148] Call Trace:
     [ 2714.619153]  nfsd_open+0x15e/0x17c [nfsd]
     [ 2714.619157]  nfsd_read+0x45/0xec [nfsd]
     [ 2714.619161]  nfsd3_proc_read+0x95/0xda [nfsd]
     [ 2714.619164]  nfsd_dispatch+0xb4/0x169 [nfsd]
     [ 2714.619170]  svc_process+0x4b5/0x666 [sunrpc]
     [ 2714.619173]  ? nfsd_destroy+0x48/0x48 [nfsd]
     [ 2714.619175]  nfsd+0xeb/0x142 [nfsd]
     [ 2714.619179]  kthread+0x10b/0x113
     [ 2714.619181]  ? kthread_flush_work_fn+0x9/0x9
     [ 2714.619183]  ret_from_fork+0x35/0x40
     [ 2714.619185] ---[ end trace 94c2c1298e7ff70a ]---

     Update: Nope, setting fuse_remember to 0 didn't help at all.
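     A possible way to double-check whether the fuse_remember change actually took effect (this assumes the NFS tunable ends up as a remember= mount option on the user-share FUSE mount, shfs, which I haven't confirmed):

         # after applying the setting and restarting the array, look at the shfs command line
         ps ax | grep [s]hfs
         # and check whether something like "-o ...,remember=0" appears in its options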