Everything posted by unevent

  1. Can you explain further what you want to accomplish?
  2. Will this be posted somewhere so I can fast-forward/skip through it? I watched the first 30 minutes before having to leave and didn't really get anything out of it.
  3. Is that per year? I was not aware Zoho did email aliases; they only list domain aliases in their pricing. Seems odd; since they charge per user, I assumed they would want to limit alias capability. Happy with mxroute so far; I got on a lifetime deal for $99, unlimited domains and email, I assume for as long as they are in business or don't change the plan. They seem to be small and can be a bit aggressive in locking down abusers, and their website wording reflects that. Other than that, I have a couple of domains there with about ten email accounts so far. Domain DNS configuration would have been a pain, as they don't really offer much in the way of instructions, but after setting up with Zoho on a trial basis I got it all figured out using Zoho's instructions for DKIM, SPF, etc. Edit: Without the lifetime promo, their lowest tier is $40/year for 10GB of storage with unlimited domains and unlimited email accounts.
  4. To follow up on my email hosting search: I have signed on with mxroute for my email needs. They offer unlimited domains and unlimited email users with their plans. Pretty good experience so far; I have two domains set up with email. Thanks again to those who replied.
  5. Just an FYI for those running BOINC with Snort/Suricata: apparently a couple of days ago a new rule was published that was blocking COVID-19 work unit transfers with BOINC, "ET INFO Suspicious GET Request with Possible COVID-19 URI M1". I hadn't had any work on two machines in two days and the transfer queue was in continuous retry. It seems there is some malware going around, which is why that rule was created and propagated. In this particular case BOINC itself was the trigger, so I have suppressed the rule (rough suppression sketch at the end of this list) and am now back to crunching. If you are out of work units and the transfer queue is full, it's something to look into, depending on what monitoring you have on your network.
  6. Thanks all. G Suite is a bit much for home use ($6/user/month if I read it right). Going with Zoho for the short term, even though it is more limited than what I had with godaddy hosting. The free version has 5 users, but only one gets POP/IMAP access; the others are webmail only. The upgrade to give them all POP/IMAP access is $1/user/month paid yearly, so $60/year for 5 users, which is ridiculous. Will look into a VPS and see about rolling my own, if that's possible.
  7. For Zoho, are you using the free or the paid plan? If free, do you have POP/IMAP access or just webmail?
  8. The place I am with now is getting expensive ($11/month) just to host a domain for email. Looking for suggestions for hosts that let me bring my own domain and can host email with multiple non-business users/addresses. Thanks
  9. The drive issue is not related to the 6.8.2 upgrade, as I did an xfs check on all drives in maintenance mode before bringing the array up after the upgrade. Got a read error on disk 2 on array start, so I rebooted. The drive came back up and looks normal, but of course it is now disabled. No SMART errors, so it was just something the repair did (but there's no lost+found on that disk either). The only option I have in 6.8.2 to bring the array online is a 'read check', which is a non-correcting parity operation. Question is, why does it force this as the only option instead of allowing a rebuild of the disabled drive? This seems new; I don't recall seeing this behavior on 6.7.2 and older. I need to rebuild the disk, not read the emulated disk contents, or am I missing something? Thanks
  10. Thanks for the reply. Nothing I do makes an SSH connection at 2am, and nothing I run keeps a sustained SSH connection open that could be disconnected. I don't even use SSH to unraid; I use telnet from inside the local network. Will disable it.
  11. Any idea what this might be? I was not up at 2am using SSH, and no ports are open to the unraid GUI. Except for these two entries the logs are clean going back to the 15th. It doesn't look like a hack, since there are only two attempts of whatever this is in almost nine days of uptime. Is this an internal SSH thing, or did something external trigger these log entries? Searching for "kex_exchange..." didn't net much. Thanks
      Jan 23 02:00:31 Tower sshd[13206]: error: kex_exchange_identification: Connection closed by remote host
      Jan 23 02:00:32 Tower sshd[13281]: error: kex_exchange_identification: Connection closed by remote host
  12. Not sure if this has been reported yet. When formatting ext4, mke2fs says 64-bit filesystem support is not enabled (manual reformat sketch at the end of this list):
      Jan 13 20:32:06 Tower unassigned.devices: Format disk '/dev/nvme0n1' with 'ext4' filesystem result: mke2fs 1.45.0 (6-Mar-2019) 64-bit filesystem support is not enabled. The larger fields afforded by this feature enable full-strength checksumming. Pass -O 64bit to rectify. Discarding device blocks: done
  13. lol... yes, and two sentences later I said I started the array normally.
  14. I have the logs. I just had no need to post the drive contents to the world. Lost 1.25TB on this drive. Perhaps a feature request: halt parity updates when xfs reports its first error, so instead of parity protecting the corruption it keeps the parity intact as much as possible. Seven-plus years with unRaid and this is my first case of corruption, with xfs, and with data loss. No power loss, no unclean shutdowns.
  15. Thanks for the help. I had started the array in maintenance mode and it still showed unmountable. I remember stories in the past about starting the array with a troubled disk and losing data. Started the array normally and am now sorting through lost+found to see what I need to replace. Thanks again.
  16. Ran xfs_repair without options and it completed on disk2. Will look in lost+found later today to see what was trashed. What would be the recommended path to get back online at this point? Never had corruption before, and I've been using unRaid since the 4.7 days. Never even had a parity sync issue.
  17. Looking for advice on the cause and on how to run xfs_repair without causing further damage. The oldest diagnostics syslog had a bunch of errors, which look to have been caused after the file system was taken offline following an xfs error. Powered down and checked the SATA cables, brought it back up, and disk 2 was unmountable; I've left it as-is pending advice, to minimize loss. Edit: Ran xfs_repair -n on disk2 (command sketch at the end of this list) and it is a mess (1,900+ lines). The last parity sync was 14 days ago and was clean. Format and rebuild? Replace disk2 and rebuild?
  18. http://www.frozencpu.com/ Look under Connectors - 4 Pin - Molex Never bought from them and not affiliated, but looks like they have what you are looking for.
  19. Is 'Direct Unpack' checked under switches?
  20. Got your message, will look at the other thread.
  21. If you think the changes you made to the sab switches caused it then undo them.
  22. Never really a bad idea to go for more CPU, so that is up to you. My point was that it won't fix the immediate problem you were seeing when running the mover. Not sure how many streams you can hardware-transcode at a time, so more CPU for Plex is not a bad idea either. Plex says it takes roughly 2000 PassMark (CPU) to software-transcode one 1080p @ 10Mbit stream.
  23. The mover is very aggressive and really should be run overnight on a schedule (Settings > Scheduler). The CPU going to 100% is not necessarily an issue; if it causes problems with something else, you deal with that by isolating or taming the load. I've run much more on much less processor, so it really is just a matter of tuning your loads. The diagnostics (top.txt) show you were running the mover, with top reporting 26.6% IO wait and 63.8% CPU idle at a system load of 9.01, 9.33, 7.11. That is not CPU bound, it is IO bound, and more CPU won't fix that particular issue. Linux reports load differently than Windows: it counts tasks waiting on IO in the load calculation (quick way to check at the end of this list).
  24. I have /mnt/cache/.apps/calibre_library/ and /mnt/cache/.appdata/nginx/ as the only maps.
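
Re the rule block in post 5: a minimal suppression sketch, assuming a Suricata install that reads a threshold.config file. The file path, the placeholder SID, and the 192.168.1.50 BOINC host address are all assumptions; take the real SID from your alert log or the ET ruleset and adjust the path to your install.

    # threshold.config: silence this one rule for the BOINC host only,
    # rather than disabling it network-wide
    suppress gen_id 1, sig_id <SID-of-the-ET-COVID-19-rule>, track by_src, ip 192.168.1.50

Firewall packages that bundle Snort/Suricata generally expose the same thing as a per-interface suppress list in their GUI.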
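
Re the mke2fs message in post 12: the warning just means the 64bit feature flag wasn't in the feature list passed to mke2fs. If you wanted to redo the format by hand with the flag enabled, a sketch (this wipes the device; /dev/nvme0n1 is the device from that log, and whether you should target the whole disk or a partition node depends on how the plugin laid it out):

    # destroys all data on the target: format ext4 with the 64bit feature (enables full-strength checksumming)
    mkfs.ext4 -O 64bit /dev/nvme0n1
    # verify the feature list afterwards
    tune2fs -l /dev/nvme0n1 | grep -i features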
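
Re posts 16/17: the xfs_repair commands referenced there, assuming the usual Unraid approach of starting the array in maintenance mode and running the repair against the md device so parity is updated along with the fix. /dev/md2 corresponds to disk2 here; adjust for your disk number.

    # array started in maintenance mode
    xfs_repair -n /dev/md2    # dry run: report problems, change nothing
    xfs_repair /dev/md2       # actual repair; orphaned files land in lost+found on that disk
    # last resort only, if it refuses to run because of a dirty log
    # (zeroing the log can drop the most recent metadata changes):
    # xfs_repair -L /dev/md2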
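
Re post 23: a quick way to see the CPU-bound vs IO-bound split on any Linux box (plain shell, nothing Unraid-specific; the exact top header layout varies by version):

    # 1/5/15-minute load averages: on Linux these count tasks stuck in uninterruptible IO wait,
    # not just runnable ones, which is why load can be high while the CPU sits mostly idle
    cat /proc/loadavg
    # summary lines only: a high %wa (IO wait) alongside a high %id (idle) means IO-bound, not CPU-bound
    top -bn1 | head -n 5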