Matt_G

Members

  • Posts: 46
  • Joined
  • Last visited

Converted

  • Gender: Male
  • Location: Colorado (USA)

Matt_G's Achievements

Rookie (2/14)

Reputation: 2
  1. Thank you! I must be brain dead this morning. Totally forgot about the Wiki...
  2. I have a box running UnRaid 6.9.2. I am out of SATA ports and want to remove a disk from the mirrored cache pool so I can add another data drive to the array. Can someone point me to the proper procedure to do this? My searching skills are obviously lacking because I'm not finding much on this subject... (A hedged command sketch for shrinking the pool is appended after this post list.)
  3. Gentlemen, everything is back up and running just dandy thanks to all your help. A big Thank You to all three of you. I learned a few things as well, which is always a good thing. Wishing you and yours a Very Merry Christmas!
  4. How can I force a format on those two drives? I am not seeing a way to do that via the GUI. I thought maybe XFS would be an option there but it isn't.
  5. dmesg | tail shows this:

         [ 9140.519458] BTRFS info (device loop2): forced readonly
         [ 9140.519460] BTRFS: error (device loop2) in btrfs_sync_log:3168: errno=-5 IO failure
         [ 9140.519727] loop: Write error at byte offset 13172736, length 4096.
         [ 9140.519729] print_req_error: I/O error, dev loop2, sector 25728
         [ 9140.519731] BTRFS error (device loop2): bdev /dev/loop2 errs: wr 2, rd 0, flush 0, corrupt 0, gen 0
         [ 9140.519754] loop: Write error at byte offset 12914688, length 4096.
         [ 9140.519755] print_req_error: I/O error, dev loop2, sector 25224
         [ 9140.519758] BTRFS error (device loop2): bdev /dev/loop2 errs: wr 3, rd 0, flush 0, corrupt 0, gen 0
         [ 9140.519780] BTRFS error (device loop2): pending csums is 12288
         [ 9140.522486] BTRFS error (device sdg1): pending csums is 1572864

     Are these drives toast? They are 3.5 years old with over 25,000 hours on them. (A drive-health check sketch is appended after this post list.)
  6. So I started the array in maintenance mode and checked the btrfs filesystem. I like that line that says cache appears valid but isn't. WTH?
  7. Thanks for the reply, @johnnie.black. Read that thread and tried balancing the cache drive. No joy, it errors out due to being a read-only file system. Rebooted the server and immediately tried to re-balance the cache drive, and it seemed to run for about 30 seconds. Then it threw the read-only filesystem error. I assume at this point I need to copy the data off the cache, then format the drives and re-create the cache pool? Basically follow the instructions here? https://wiki.unraid.net/Replace_A_Cache_Drive
  8. Diagnostics attached. unraid-diagnostics-20191215-2120.zip
  9. Yesterday, I started having issues with Docker. I was trying to log on to Crashplan and it wouldn't let me log on, even with the correct password. FCP is stating that the system has 2 issues: 1) "Unable to write to Docker Image" and that "The Docker image is full or corrupted" AND 2) stating that the system is "Unable to write to cache" and that the "Drive mounted read-only or completely full." Troubleshooting steps attempted so far: 1) Disabled Docker altogether and tried to delete the docker.img file using SSH. No joy, it states that the file system is read-only and will not allow me to delete the docker.img file (/mnt/cache/docker.img). At that point I tried changing the size, which I could do, but when I attempt to enable Docker, it states the service failed to start. Not sure where to go from here. What should my next step be? BTW, I am on version 6.7.2. The cache consists of 2 Samsung_850_EVO SSDs in a BTRFS mirror. (A read-only-cache diagnostic sketch is appended after this post list.)
  10. Doubt that will ever happen as a native implementation. The Linux kernel is licensed under the GNU General Public License, version 2 (GPL-2.0). OpenZFS is licensed under the Common Development and Distribution License (CDDL). The two just aren't compatible with each other.
  11. Glad to hear this helped you Amin.
  12. Yes, it sure does. I noticed bonienl suggested going to 6.5.1-rc3. I'll wait for it to go stable. Like I said, everything appears to be working. I just can't trust what the GUI says on the Docker tab...
  13. Just updated to 6.5.0 from 6.4.1 and my dockers appeared to be hosed. Kept getting "No such container" errors. As a test I uninstalled Netdata and Plex and then reinstalled using CA, and I still got an execution error when trying to start the dockers. No such container. Then I noticed that the Dashboard said my Plex and Crashplan Pro were running. At this point I was a bit confused. The Docker tab says they are stopped and I can't start them, but the Dashboard says they are running. WTH... So I went in and looked at the settings on my Plex docker and noticed the network had been set to "none". Put Plex back to br0 and set the IP back to what it should be, and now that's working. Crashplan Pro was already set correctly and it is working as well. What's really weird is I can start and stop dockers from the dashboard, but the Docker tab of the GUI is hosed. All dockers show as stopped on that page and I keep getting that no such container error. What's up with that? Should I just blow away my docker.img file and let it build a new one? (A container-inspection sketch is appended after this post list.) Diags attached in case anyone wants a peek at 'em. unraid-diagnostics-20180404-1821.zip
  14. See, if you beat me over the head with a stick long enough, I will eventually get a damn clue... Many thanks for your patience with this old man.
  15. Yes, so I just need to re-select everything there, correct? Sorry for being a pain in the rear end...
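
Regarding post 2 (removing a disk from a mirrored cache pool): a minimal command sketch, assuming a two-device BTRFS raid1 pool mounted at /mnt/cache and a hypothetical second device named /dev/sdX1. The usual Unraid route is the GUI (stop the array, unassign the cache device, adjust the slot count, start the array again), but the underlying BTRFS steps look roughly like this; verify device names and have a backup before shrinking a pool.

    # Convert data and metadata profiles from raid1 to single so the pool
    # can live on one device (-f is required when reducing metadata redundancy).
    btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache

    # Remove the second device from the pool (device name is hypothetical).
    btrfs device remove /dev/sdX1 /mnt/cache

    # Confirm only one device remains in the pool.
    btrfs filesystem show /mnt/cache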
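
Regarding post 5 (whether the SSDs are toast): a hedged sketch for checking drive health before condemning the hardware. The device name follows the sdg seen in the dmesg output but should be confirmed locally; on SSDs the SMART attributes worth watching are reallocated sectors, wear level, pending sectors, and CRC errors.

    # Per-device BTRFS error counters for the pool (wr/rd/flush/corrupt/gen).
    btrfs device stats /mnt/cache

    # Full SMART report for one cache SSD; repeat for the second device.
    smartctl -a /dev/sdg

    # Optionally run a short self-test and read the result a few minutes later.
    smartctl -t short /dev/sdg
    smartctl -l selftest /dev/sdg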
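
Regarding post 9 (cache mounted read-only, docker.img unwritable): a sketch of the usual first diagnostics, assuming the pool is mounted at /mnt/cache. BTRFS forces a filesystem read-only after an unrecoverable write error, so deleting docker.img will keep failing until the underlying pool problem is dealt with (typically by backing up the cache and re-creating the pool, as in the wiki procedure cited in post 7).

    # See why the kernel forced the filesystem read-only.
    dmesg | grep -i btrfs | tail -n 30

    # Check the mount flags (look for "ro" on /mnt/cache).
    grep /mnt/cache /proc/mounts

    # Per-device error counters; non-zero wr/corrupt counts point at the culprit.
    btrfs device stats /mnt/cache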
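
Regarding post 13 (containers showing as stopped with "No such container" errors): a sketch for confirming container state and network mode from the command line, independent of the GUI. The container name plex is hypothetical; substitute whatever docker ps -a reports.

    # List every container Docker actually knows about, running or not.
    docker ps -a

    # Show the network mode the container was created with (e.g. br0, none).
    docker inspect --format '{{.HostConfig.NetworkMode}}' plex

    # Show the networks (and any static IP) attached to the container.
    docker inspect --format '{{json .NetworkSettings.Networks}}' plex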