aberg83

Members
  • Posts: 141
  • Joined
  • Last visited
  • Gender: Undisclosed


aberg83's Achievements

Apprentice (3/14)

Reputation: 1

  1. Every few hours, I see the following messages in my syslog:

         Nov 24 09:36:59 ur1 rsync[6282]: connect from 192.168.2.1 (192.168.2.1)
         Nov 24 09:36:59 ur1 vsftpd[6281]: connect from 192.168.2.1 (192.168.2.1)
         Nov 24 09:36:59 ur1 rsyncd[6282]: forward name lookup for DreamMachine.localdomain failed: Name or service not known
         Nov 24 09:36:59 ur1 rsyncd[6282]: connect from UNKNOWN (192.168.2.1)
         Nov 24 09:37:10 ur1 smbd[6284]: [2021/11/24 09:37:10.442874, 0] ../../source3/smbd/process.c:341(read_packet_remainder)
         Nov 24 09:37:10 ur1 smbd[6284]: read_fd_with_timeout failed for client 192.168.2.1 read error = NT_STATUS_END_OF_FILE.
         Nov 24 09:39:22 ur1 vsftpd[7804]: connect from 192.168.6.1 (192.168.6.1)
         Nov 24 09:39:22 ur1 rsync[7805]: connect from 192.168.6.1 (192.168.6.1)
         Nov 24 09:39:23 ur1 rsyncd[7805]: forward name lookup for DreamMachine.localdomain failed: Name or service not known
         Nov 24 09:39:23 ur1 rsyncd[7805]: connect from UNKNOWN (192.168.6.1)
         Nov 24 09:39:33 ur1 smbd[7807]: [2021/11/24 09:39:33.981382, 0] ../../source3/smbd/process.c:341(read_packet_remainder)
         Nov 24 09:39:33 ur1 smbd[7807]: read_fd_with_timeout failed for client 192.168.6.1 read error = NT_STATUS_END_OF_FILE.

     192.168.2.1 is my LAN gateway IP. 192.168.6.1 is the gateway IP for the VLAN on my UniFi network that all my Docker containers are isolated on. I have firewall rules that prevent communication from the Docker VLAN to my LAN. I have WireGuard running on Unraid, and I set up a static route as well as allowed host communication with Docker containers on custom networks, as recommended in the setup instructions. Any ideas what is causing these constant connection attempts? (A capture sketch for narrowing this down follows the list.)
  2. Thanks. That's a good script to have running. Any other recommended scripts to schedule when using BTRFS? I have scrubs scheduled already. (A dev-stats check sketch follows the list.)
  3. Thanks. I'll keep an eye on things. I have a scrub scheduled for Dec 1st. Does Unraid send a notification for BTRFS errors? I didn't get one for this recent error.
  4. Ok, I found the affected file. I deleted it and restored it from a backup. I assume the next step is to run a scrub and see if it comes back clean for disk 1? Not sure what caused the file corruption.
  5. My server recently locked up during an update of a few Docker containers. I was able to save a diagnostic. It appears that my docker.img had some corruption. I've since deleted it, created a new one, and added back all my containers. I also ran a BTRFS scrub on my cache drive and it came back clean. What is concerning is that disk 1 in my array, which has a BTRFS filesystem, reported 1 corruption:

         Nov 21 08:09:19 ur1 kernel: BTRFS warning (device dm-0): csum failed root 273 ino 9248 off 6311936 csum 0x1a753a94 expected csum 0x58a4bd31 mirror 1
         Nov 21 08:09:19 ur1 kernel: BTRFS error (device dm-0): bdev /dev/mapper/md1 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0

         btrfs dev stats -c /mnt/disk1
         [/dev/mapper/md1].write_io_errs 0
         [/dev/mapper/md1].read_io_errs 0
         [/dev/mapper/md1].flush_io_errs 0
         [/dev/mapper/md1].corruption_errs 1
         [/dev/mapper/md1].generation_errs 0

     I'm currently running a BTRFS scrub on disk 1 to see if I will learn which file has issues. I can restore from a backup if necessary. I'm not sure if scrubbing will give me this information. This experience has me questioning my choice to use BTRFS. Advice would be greatly appreciated. Thanks. (A sketch for mapping the inode in that csum error back to a file path follows the list.)

     ur1-diagnostics-20211122-0833.zip
  6. I have a g10+. I'm currently reducing the size of my media collection to consolidate onto 2 data disks with 1 parity disk. This will leave 1 of the 4 drive bays open for an SSD cache. My PCIe slot will be occupied by an Nvidia P400 GPU for Plex and Tdarr. I love the small form factor and build quality of the g10+, even though it limits drive expansion. With 16TB and 18TB drives out now, this is less of an issue.
  7. This wouldn't be an issue for me. I may give this a go.
  8. If this were used only for Docker, would it be a concern? I have an HPE Gen10 Plus with only 4 storage bays, and the single PCIe slot will be populated with my GPU for HW transcoding, so there are no additional drive slots other than USB. Luckily it has two USB 3.2 Gen 2 ports on the front. I would just use a single USB SSD for Docker on an unassigned device and back it up weekly to the array. People use these as OS boot drives, so I'm thinking it would be okay. Thoughts? (A weekly backup script sketch follows the list.)
  9. I figured out my issue. All is well. Thanks.
  10. I've tried everything to get this working between two Unraid servers, but no matter what I try, I keep getting prompted for a password. Did something change in 6.9 that no longer allows this for root users? (A key-setup sketch follows the list.)
  11. Thanks. I actually got it working by moving my Docker containers to a separate VLAN. I have UniFi network gear, so this was pretty easy. (A macvlan network sketch follows the list.)
  12. I've got an HPE Gen10+. It has 4 drive bays and 1 PCIe slot. The PCIe slot is populated with a GPU for HW transcoding. I'm currently using the server mainly as a Docker/VM host, with bulk media storage on a Synology. I'm eyeing some 16TB Exos drives and considering buying 4 for the server, but then I'd lose drive bays for my Unraid cache and VM/Docker storage. The server has two USB 3.2 Gen 2 ports. Would I be nuts to consider buying two external Gen 2 USB SSDs to run my cache pool? It looks like near-NVMe speeds can be had with these, but I don't know about reliability.
  13. Power is definitely not cheap in Canada, but the Xeon is still pretty low power compared to the dual Xeon rackmount servers I used to run.
  14. I'm using one with Unraid. Working nicely. I wish it had onboard NVMe, but otherwise it's great.
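
Sketches referenced above

A capture sketch for the connection attempts in item 1. It assumes the Unraid side sits on bridge br0 (check "ip link" for the real interface name) and that the probes target the standard rsync/FTP/SMB ports matching the daemons in the log:

    # Watch for probes from the two gateways on rsync (873), FTP (21), SMB (445)
    tcpdump -i br0 -nn \
      '(host 192.168.2.1 or host 192.168.6.1) and (port 873 or port 21 or port 445)'

Regular, short-lived connections from the UDM on exactly those ports would point at the gateway's own periodic service discovery rather than anything running on the Unraid side.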
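On the script question in items 2 and 3: alongside scheduled scrubs, a small dev-stats check can raise an Unraid notification when any BTRFS error counter goes non-zero, since no alert fired for the corruption in item 5. A sketch for the User Scripts plugin, assuming the stock notify script location and that /mnt/cache and /mnt/disk1 are the BTRFS mounts:

    #!/bin/bash
    # Raise an Unraid notification if any BTRFS error counter is non-zero.
    NOTIFY=/usr/local/emhttp/webGui/scripts/notify   # stock location; verify on your install

    for mount in /mnt/cache /mnt/disk1; do           # adjust to your BTRFS mounts
      # --check makes dev stats exit non-zero when any counter is above zero
      if ! btrfs device stats --check "$mount" > /dev/null; then
        "$NOTIFY" -i warning -s "BTRFS errors on $mount" -d "$(btrfs device stats "$mount")"
      fi
    done

Scheduled monthly, this covers the gap from item 3: the counters get checked by the script instead of waiting for Unraid to notice.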
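For the csum error in item 5, the kernel warning already names the inode (ino 9248), so the affected file can usually be identified straight from the log, which is presumably how the file in item 4 was found. A sketch, assuming the subvolume behind root 273 is reachable under the /mnt/disk1 mount:

    # Map the inode from the kernel warning back to a file path
    btrfs inspect-internal inode-resolve 9248 /mnt/disk1

    # After restoring the file, re-run a scrub and confirm it comes back clean
    btrfs scrub start -B /mnt/disk1    # -B runs in the foreground and prints stats
    btrfs scrub status /mnt/disk1

    # The dev stats counters are cumulative; zero them so any new error stands out
    btrfs device stats -z /mnt/disk1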
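A sketch of the weekly USB-SSD-to-array backup proposed in item 8, suitable for the User Scripts plugin's weekly schedule. Both paths are placeholders; Unassigned Devices normally mounts under /mnt/disks/<label>:

    #!/bin/bash
    # Weekly mirror of the USB SSD (docker data) to the array
    SRC=/mnt/disks/docker_ssd/          # hypothetical mount; trailing slash matters to rsync
    DEST=/mnt/user/backups/docker_ssd/

    mkdir -p "$DEST"
    # -a preserves permissions and timestamps; --delete keeps DEST an exact mirror
    rsync -a --delete "$SRC" "$DEST"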
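On the password prompts in item 10: key-based root logins between two Unraid servers still work in 6.9; the usual catch is that /root/.ssh lives in RAM and is lost on reboot. A sketch, with the destination hostname assumed:

    # On the source server: generate a key pair (no passphrase, for unattended rsync)
    ssh-keygen -t ed25519 -N "" -f /root/.ssh/id_ed25519

    # Install the public key on the destination server (hostname is hypothetical)
    ssh-copy-id root@tower2

    # Confirm login now succeeds without a password prompt
    ssh root@tower2 true

Because /root is not persistent, a common approach is to keep copies of the key pair and authorized_keys on the flash drive (for example under /boot/config/ssh) and restore them, with correct permissions, from the go script at boot.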
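The VLAN move from item 11, expressed as a Docker custom network for anyone repeating it. A sketch assuming VLAN 6 is tagged on br0 (Unraid creates a br0.6 interface when the VLAN is enabled in network settings) and the 192.168.6.0/24 addressing from item 1; on Unraid this network is normally created for you once the VLAN is enabled for Docker:

    # Manual equivalent of Unraid's per-VLAN Docker network
    docker network create -d macvlan \
      --subnet=192.168.6.0/24 \
      --gateway=192.168.6.1 \
      -o parent=br0.6 \
      vlan6

    # Attach a container to the isolated VLAN with a fixed address
    docker run -d --name=test --network=vlan6 --ip=192.168.6.50 alpine sleep 1d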