Matt4982

Everything posted by Matt4982

  1. So, I recently went through and moved everything off the cache to reformat it for the large-write fix. I've since moved everything back, and now my CPU usage is very high. I looked at processes and have this guy running high:

     root 6342 200 0.0 1777300 42356 ? Ssl 20:31 6:12 /usr/local/sbin/shfs /mnt/user -disks 2047 -o noatime,allow_other -o remember=330

     Attached are the diags; not sure what is going on. unraid-diagnostics-20201008-2036.zip

     EDIT: I started stopping dockers one by one and it appears to be related to the syncthing docker. Going to look into what is going on with it.
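
     For reference, a minimal sketch of doing that "stop dockers one by one" check from the command line. The container name syncthing and the 30-second settle time are illustrative assumptions, not part of the original post:

        # One-shot snapshot of per-container CPU usage
        docker stats --no-stream --format '{{.Name}}: {{.CPUPerc}}'

        # Stop the top suspect, wait a bit, and see whether shfs settles down
        docker stop syncthing
        sleep 30
        top -bn1 | grep shfs
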
  2. Add me to the list of those wanting SAS spindown; that's all I use in my 12-bay.
  3. Same thing for me with an unencrypted BTRFS pool. With 14 days of uptime I have nearly 100 million writes. iotop shows most of them coming from loop2.
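
     A hedged example of how writes like these can be measured with iotop (available on Unraid via the Nerd Pack plugin; the interval and sample count below are arbitrary):

        # -a accumulate totals, -o show only active processes, -b batch mode
        # Samples every 5 seconds, 12 times; loop2 should dominate the write column
        iotop -aob -d 5 -n 12
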
  4. I currently have Unraid set up with 12 drives, two of those being parity. They are all 1 TB, so I'm replacing them with larger drives and thinking about shrinking the number of drives. My thought process on the easiest way to do this:
       • Build the new array without a parity drive (I'll create the parity drive once all the data is copied)
       • Mount the drives and copy the contents of each 1 TB drive individually to the new array
       • Once all the content is copied, add a parity drive
       • Profit?
     Will this work for keeping everything the same as far as dockers/settings/drive content/etc.?
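
     A hedged sketch of the per-disk copy step, assuming the old disks stay in the array at /mnt/diskN and the new disks are mounted through Unassigned Devices (all paths are examples):

        # Copy one 1 TB data disk to its replacement, preserving permissions,
        # timestamps, and extended attributes
        rsync -avX --progress /mnt/disk1/ /mnt/disks/new_disk1/
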
  5. If I move them manually from the zip, do I need to do anything special to retain the proper permissions? The zip is currently on a different source. I worry about doing a restore through CA Backup and Restore since some of my dockers are already off the cache drive and the backup is now a few days old.
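
     A hedged sketch of fixing ownership after a manual extraction, assuming the common Unraid convention of nobody:users (the path is an example; containers that run as a different UID may need different ownership):

        # Restore typical Unraid ownership and read/write permissions on appdata
        chown -R nobody:users /mnt/user/appdata
        chmod -R u+rwX,g+rwX /mnt/user/appdata
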
  6. So, at this point I feel pretty hosed, and it might be beyond my knowledge. The appdata folder is on the cache drive, and since it's getting read errors, most of the docker containers aren't working. I tried to use the mover, but it doesn't appear to be working. I also tried going in through Midnight Commander and moving the appdata, but it is timing out. I did a backup via CA Backup and Restore prior to all this, so now I'm wondering what the easiest method would be to just restore the appdata onto the array.
  7. Ok, new version installed. Now getting this error over and over:

     Feb 22 10:24:50 Unraid kernel: BTRFS info (device nvme0n1p1): no csum found for inode 12700 start 0
     Feb 22 10:24:50 Unraid kernel: BTRFS info (device nvme0n1p1): no csum found for inode 12700 start 0
     Feb 22 10:24:51 Unraid kernel: BTRFS info (device nvme0n1p1): no csum found for inode 12700 start 0
     Feb 22 10:24:51 Unraid kernel: BTRFS info (device nvme0n1p1): no csum found for inode 12700 start 0
     Feb 22 10:24:52 Unraid kernel: BTRFS info (device nvme0n1p1): no csum found for inode 12700 start 0
     Feb 22 10:24:52 Unraid kernel: BTRFS info (device nvme0n1p1): no csum found for inode 12700 start 0
     Feb 22 10:24:53 Unraid kernel: BTRFS info (device nvme0n1p1): no csum found for inode 15997 start 0
     Feb 22 10:24:53 Unraid kernel: BTRFS info (device nvme0n1p1): no csum found for inode 15997 start 0
     Feb 22 10:24:53 Unraid kernel: BTRFS info (device nvme0n1p1): no csum found for inode 12700 start 0
     Feb 22 10:24:53 Unraid kernel: BTRFS info (device nvme0n1p1): no csum found for inode 12700 start 0
     Feb 22 10:24:54 Unraid kernel: btrfs_dev_stat_print_on_error: 122 callbacks suppressed
     Feb 22 10:24:54 Unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 2, rd 7316, flush 0, corrupt 0, gen 0
     Feb 22 10:24:54 Unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 2, rd 7317, flush 0, corrupt 0, gen 0
     Feb 22 10:24:54 Unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 2, rd 7318, flush 0, corrupt 0, gen 0
     Feb 22 10:24:54 Unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 2, rd 7319, flush 0, corrupt 0, gen 0
     Feb 22 10:24:54 Unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 2, rd 7320, flush 0, corrupt 0, gen 0
     Feb 22 10:24:54 Unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 2, rd 7321, flush 0, corrupt 0, gen 0
     Feb 22 10:24:55 Unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 2, rd 7322, flush 0, corrupt 0, gen 0
     Feb 22 10:24:55 Unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 2, rd 7323, flush 0, corrupt 0, gen 0
     Feb 22 10:24:55 Unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 2, rd 7324, flush 0, corrupt 0, gen 0
     Feb 22 10:24:55 Unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 2, rd 7325, flush 0, corrupt 0, gen 0

     unraid-diagnostics-20190222-1626.zip
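
     A hedged way to watch whether those error counters keep climbing, using standard btrfs-progs commands (/mnt/cache is the usual Unraid cache mount point; adjust if yours differs):

        # Per-device error counters; re-run later and compare the rd/wr columns
        btrfs dev stats /mnt/cache

        # Full checksum verification of the pool (-B runs in the foreground)
        btrfs scrub start -B /mnt/cache
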
  8. Alright, so I got a new card and switched to an NVMe drive. It allowed me to format after some struggles and I got set up; however, the dockers are now crawling. I looked through the logs and am seeing tons of timeouts for the drive. Not sure if I'm missing something that needs to be done.

     Feb 22 00:15:50 Unraid kernel: nvme nvme0: I/O 927 QID 16 timeout, completion polled
     Feb 22 00:15:50 Unraid kernel: nvme nvme0: I/O 928 QID 16 timeout, completion polled
     Feb 22 00:15:50 Unraid kernel: nvme nvme0: I/O 929 QID 16 timeout, completion polled
     Feb 22 00:24:32 Unraid kernel: nvme nvme0: I/O 24 QID 0 timeout, completion polled
     Feb 22 00:25:34 Unraid kernel: nvme nvme0: I/O 5 QID 0 timeout, completion polled
     Feb 22 00:46:03 Unraid kernel: nvme nvme0: I/O 2 QID 0 timeout, completion polled
     Feb 22 00:47:04 Unraid kernel: nvme nvme0: I/O 29 QID 0 timeout, completion polled
     Feb 22 00:48:12 Unraid kernel: nvme nvme0: I/O 929 QID 16 timeout, completion polled
     Feb 22 00:48:12 Unraid kernel: nvme nvme0: I/O 930 QID 16 timeout, completion polled

     unraid-diagnostics-20190222-0836.zip
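
     One commonly reported workaround for NVMe "timeout, completion polled" messages is capping the drive's power-state transition latency with a kernel parameter. A hedged example, added to the append line in /boot/syslinux/syslinux.cfg (whether it helps depends on the particular drive):

        append initrd=/bzroot nvme_core.default_ps_max_latency_us=0
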
  9. Yep, the second one is the one currently in the system. Of course the array is also running on an H200 flashed to IT mode.
  10. These are the two cards I've tried. https://www.amazon.com/gp/product/B01452SP1O/ref=oh_aui_search_asin_title?ie=UTF8&psc=1 https://www.amazon.com/gp/product/B00WUZPMHE/ref=oh_aui_search_asin_title?ie=UTF8&psc=1
  11. Do you have a suggestion for a card without port multipliers? I guess I'm not entirely sure what the difference would be. Unfortunately, with the 12-bay R510 the internal SATA ports are disabled and you'd have to splice to get power, so that's why I was looking at PCIe cards that do both data and power.
  12. Attached is the diags zip. I have tried two different PCIe controllers in my Dell R510, so hopefully that's not the issue. unraid-diagnostics-20190215-1154.zip
  13. Hey guys, just installed an SSD in my Unraid setup. I took the drive from a working laptop. When I try to format it, either through Unraid or the Unassigned Devices plugin, it fails to do so. Here's a copy of the log that comes up:

     Feb 15 11:45:39 Unraid kernel: ata14.00: status: { DRDY ERR }
     Feb 15 11:45:39 Unraid kernel: ata14.00: error: { ABRT }
     Feb 15 11:45:39 Unraid kernel: ata14.00: supports DRM functions and may not be fully accessible
     Feb 15 11:45:39 Unraid kernel: ata14.00: NCQ Send/Recv Log not supported
     Feb 15 11:45:39 Unraid kernel: ata14.00: supports DRM functions and may not be fully accessible
     Feb 15 11:45:39 Unraid kernel: ata14.00: NCQ Send/Recv Log not supported
     Feb 15 11:45:39 Unraid kernel: ata14.00: configured for UDMA/133
     Feb 15 11:45:39 Unraid kernel: ata14: EH complete
     Feb 15 11:45:39 Unraid kernel: ata14.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
     Feb 15 11:45:39 Unraid kernel: ata14.00: irq_stat 0x40000001
     Feb 15 11:45:39 Unraid kernel: ata14.00: failed command: READ DMA
     Feb 15 11:45:39 Unraid kernel: ata14.00: cmd c8/00:08:00:00:00/00:00:00:00:00/e0 tag 10 dma 4096 in
     Feb 15 11:45:39 Unraid kernel: res 51/04:08:00:00:00/00:00:00:00:00/e0 Emask 0x1 (device error)
     Feb 15 11:45:39 Unraid kernel: ata14.00: status: { DRDY ERR }
     Feb 15 11:45:39 Unraid kernel: ata14.00: error: { ABRT }
     Feb 15 11:45:39 Unraid kernel: ata14.00: supports DRM functions and may not be fully accessible
     Feb 15 11:45:39 Unraid kernel: ata14.00: NCQ Send/Recv Log not supported
     Feb 15 11:45:39 Unraid kernel: ata14.00: supports DRM functions and may not be fully accessible
     Feb 15 11:45:39 Unraid kernel: ata14.00: NCQ Send/Recv Log not supported
     Feb 15 11:45:39 Unraid kernel: ata14.00: configured for UDMA/133
     Feb 15 11:45:39 Unraid kernel: ata14: EH complete
     Feb 15 11:45:39 Unraid kernel: ata14.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
     Feb 15 11:45:39 Unraid kernel: ata14.00: irq_stat 0x40000001
     Feb 15 11:45:39 Unraid kernel: ata14.00: failed command: READ DMA
     Feb 15 11:45:39 Unraid kernel: ata14.00: cmd c8/00:08:00:00:00/00:00:00:00:00/e0 tag 11 dma 4096 in
     Feb 15 11:45:39 Unraid kernel: res 51/04:08:00:00:00/00:00:00:00:00/e0 Emask 0x1 (device error)
     Feb 15 11:45:39 Unraid kernel: ata14.00: status: { DRDY ERR }
     Feb 15 11:45:39 Unraid kernel: ata14.00: error: { ABRT }
     Feb 15 11:45:39 Unraid kernel: ata14.00: supports DRM functions and may not be fully accessible
     Feb 15 11:45:39 Unraid kernel: ata14.00: NCQ Send/Recv Log not supported
     Feb 15 11:45:39 Unraid kernel: ata14.00: supports DRM functions and may not be fully accessible
     Feb 15 11:45:39 Unraid kernel: ata14.00: NCQ Send/Recv Log not supported
     Feb 15 11:45:39 Unraid kernel: ata14.00: configured for UDMA/133
     Feb 15 11:45:39 Unraid kernel: ata14: EH complete
     Feb 15 11:45:39 Unraid kernel: sdb: unable to read partition table
     Feb 15 11:45:39 Unraid unassigned.devices: Reload partition table result: BLKRRPART failed: Input/output error /dev/sdb: re-reading partition table
     Feb 15 11:45:39 Unraid unassigned.devices: Formatting disk '/dev/sdb' with 'xfs' filesystem.
     Feb 15 11:45:39 Unraid unassigned.devices: Format disk '/dev/sdb' with 'xfs' filesystem failed! Result:
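
     The repeated ABRT on READ DMA together with the "supports DRM functions" messages suggests the drive may still be security-locked or frozen from its previous laptop. A hedged check with the standard hdparm tool (replace /dev/sdb with the actual device):

        # Inspect the drive's ATA security state; look for "locked" or "frozen"
        hdparm -I /dev/sdb | grep -A8 '^Security'
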
  14. The SSD is actually connected to that PCIe card that I purchased from Amazon. The SSD was pulled from a functional box, so maybe I need to test the PCIe card in a separate system.
  15. Just bought a PCIe card from Amazon and attached an SSD to it, but when trying to add it to Unraid, it comes up as "Unsupported partition layout". I have tried the format option, but with no luck. This is the first time I have had a cache drive in the Unraid system. Attached are my diags. Hardware is a Dell R510 with an H200 flashed to IT mode. unraid-diagnostics-20180720-1754.zip
  16. Fortunately Trakt saved my butt. I had it set up and was able to pull down what I had watched... hooray
  17. Looks like they might be coming in now. Unfortunately, it looks like the trash was emptied at some point last night, so the watched status of everything has been removed. :(
  18. No, I left all of that the same. What's odd with that last screenshot is that all of that media is in the same TV folder, just separated there by series name. I can't find any rhyme or reason as to why some of them are not showing as available. The movies, which are also located under that Media folder, are showing up properly.
  19. I'm changing it by clicking the edit button under Media.
  20. So I changed the capitalization and it showed the trash can like it couldn't find anything. I changed it back and now it is seeing some programs, but not all of them.
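
     Worth noting that Linux paths are case-sensitive, so a library path whose capitalization doesn't match the folder on disk will look empty to Plex. A quick hedged check (the share and folder names are examples):

        # Show the exact on-disk capitalization of the share's top-level folders
        ls -d /mnt/user/*/ | grep -i media
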
  21. So, I've been using the limetech Plex docker and decided to switch. I set up the linuxserver Plex docker, but it is having trouble reading the media folder and I'm not sure why.

     Here are the settings on the old, working docker: [screenshot]
     Here are the settings for the new Plex docker: [screenshot]

     The config file worked properly, in that it shows everything in the library that's there and what has been watched; however, when you try to play something, it shows as unavailable, and the same thing happens when trying to record TV.
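
     A hedged way to verify the volume mapping from the container's point of view (the container name plex and the internal path /data are examples; they depend on the template's host-path/container-path settings):

        # List the media folder as the container sees it; an empty or missing
        # directory points at a host-path / container-path mismatch
        docker exec plex ls -la /data
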