Aquamac

Members
  • Posts: 10
  • Joined
  • Last visited

  1. I wanted to update this thread in case anyone else is experiencing the same issue described above. My configuration:
     • Array: various mechanical drives
     • Cache pool 1 (RAID 0, 2x Samsung 850 EVO SSDs) for caching to the server <--- this pool works without issue, even when used as appdata storage
     • Cache pool 2 (1x Samsung 870 EVO SSD) for appdata <--- this pool throws errors when both 870s are set up as RAID
     • Various SSDs as unassigned devices
     The only time I experience drive errors is when the appdata cache pool is configured as RAID (either RAID 0 or RAID 1). As long as the appdata pool is made up of one drive, everything works as expected. I've tested each 870 EVO individually in the appdata pool, using both btrfs and xfs, without issue. I would really like to use both 870s in a RAID 1 configuration for my appdata cache but can't get past the drive errors. Any direction would be appreciated. (The first sketch after this list shows the kind of btrfs checks that can surface these errors.)
  2. Hey Squid, thanks for the quick response. I already tried this but will try again. The issue seems to manifest overnight. FYI, all drives are in cages installed in the server's hot-swap slots in a dedicated rack, so they aren't susceptible to being bumped.
  3. I recently upgraded my Unraid server to 6.9 and added 2 SSDs as a cache pool. Ideally, I wanted RAID 0 for my cache and RAID 1 for my appdata storage. The server has locked up 3 times in a 3-day period, each time requiring a hard restart and leading to a parity check. I've had Docker containers lock up, the array stall, and the UI fail to respond. Looking at the logs, it appears I'm having write issues to the new apps cache. I broke the cache pool apart and ran a preclear on both new drives; both tests completed without error. Hoping someone with a bigger brain than mine can give me direction. Thanks in advance. (The second sketch after this list shows how a pool's btrfs profile can be set or converted.) jupiter-diagnostics-20210310-0700.zip jupiter-syslog-20210310-0648.zip
  4. Hey guys. I've been using habridge for a while and it has been dependable. It no longer works and I'm trying to figure out why. When I connect habridge to my Fibaro controller, all devices and scenes are imported, and I can control devices through habridge using the test option. But when I log into the Alexa portal and run a discovery, nothing is found. I've rebuilt habridge from scratch. I've logged out of and back into the Alexa portal. I've looked through the logs, but from what I can tell nothing seems out of sorts. Any help or direction would be appreciated. If I need to provide any info, please let me know what is required. (The third sketch after this list covers the ports Alexa discovery depends on.)
  5. I have the letsencrypt docker up and running but don't really understand how to use it. I have a nextcloud docker running but can't find any documentation explaining how to request or download a cert and apply it to nextcloud. I'm new to Docker and have little Linux knowledge but would love some direction. I don't mind doing the research but can't seem to find a starting point. Any help would be greatly appreciated. (The last sketch after this list outlines the usual way to wire the two containers together.)
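
A minimal diagnostic sketch for the RAID-pool errors in post 1, assuming the appdata pool is btrfs and mounted at /mnt/appdata (the mount point is a placeholder; substitute your pool's actual path):

    # Per-device error counters: write/read/flush/corruption/generation
    btrfs device stats /mnt/appdata

    # How data and metadata are actually laid out across the two 870s
    btrfs filesystem usage /mnt/appdata

    # Full checksum pass over the pool, then its result
    btrfs scrub start -B /mnt/appdata
    btrfs scrub status /mnt/appdata

    # Kernel-side view of the same events; ATA link resets often show up here first
    dmesg | grep -iE 'btrfs|ata[0-9]'

Nonzero write or flush errors on only one device would point at that drive, its cable, or its hot-swap slot rather than at the RAID profile itself.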
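For post 3, a sketch of how a two-device btrfs pool's profile can be converted after creation, assuming the pool is mounted at /mnt/cache and the second SSD is /dev/sdX1 (both names are placeholders; on Unraid the GUI normally drives this, the commands just show what happens underneath):

    # Add the second SSD to an existing single-device pool
    btrfs device add /dev/sdX1 /mnt/cache

    # Convert data and metadata to RAID1; use -dconvert=raid0 for a striped cache instead
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

    # Confirm the new profiles took effect
    btrfs filesystem df /mnt/cache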
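For post 4, Alexa discovers habridge by treating it as a Philips Hue bridge: it sends SSDP probes on UDP 1900 and then fetches the device description over HTTP on port 80. Two quick checks, assuming habridge runs directly on the server at 192.168.1.10 (a placeholder address):

    # Is anything listening on the two ports discovery needs?
    ss -ulnp | grep 1900      # SSDP / UPnP discovery
    ss -tlnp | grep ':80 '    # emulated Hue API

    # Does the UPnP descriptor answer from another machine on the LAN?
    curl http://192.168.1.10/description.xml

If habridge was moved off port 80 (for example, to avoid a conflict with another web server), newer Echo firmware may no longer find it, which would match the "discovery finds nothing" symptom.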
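For post 5, the usual pattern with the linuxserver letsencrypt container is that it requests and renews the certificate itself and then reverse-proxies HTTPS traffic to the nextcloud container, so no cert is ever copied into nextcloud. The image ships sample proxy configs for common apps. A sketch, assuming default container names and the usual Unraid appdata path (verify both against your setup):

    # Activate the ready-made nextcloud proxy config shipped with the image
    cd /mnt/user/appdata/letsencrypt/nginx/proxy-confs
    cp nextcloud.subdomain.conf.sample nextcloud.subdomain.conf

    # Restart the proxy so it picks up the new site
    docker restart letsencrypt

    # Watch certificate issuance/renewal messages
    docker logs -f letsencrypt

Nextcloud also keeps its own allow-list of hostnames (trusted_domains in its config.php), so the proxied domain has to be added there as well.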