Klainn

Members

  • Posts: 24
  • Joined
  • Last visited

  • Gender: Undisclosed

Klainn's Achievements

Noob (1/14)

Reputation: 1

  1. I would like to request strace and any required deps be added if possible. Thank you!
  2. Would you mind filling us in on what you figure out? I'm having this same struggle right now.
  3. Managed to replace those controllers. All the drives were in their proper locations when I powered up the machine with the new HBAs. Could not have gone smoother. It took me longer to find a VGA monitor cable so I could watch the POST process than it did to swap the cards, boot up, and verify no changes had to be made. I will add that the LSI MPT2 BIOS on these controllers is slow to cycle. By slow I mean it will likely cause you anxiety (5 or more minutes to display text). Just wait it out before you panic. I read that if you flash them and remove some part of the BIOS you can speed that up, but these shouldn't reboot often enough for this to really be an issue. Cards I installed: LSI Logic Controller Card LSI00301 SAS 9207-8i 8-Port, x2.
  4. Yeah. Fantastic work, Unraid. The same page that says this specific card is used by Unraid also says not to use it. What a joke.
  5. I hope not. This was straight out of the Unraid hardware recommendation: Hardware Compatibility. Edit: Looks like you're right. Good documentation.
  6. I currently have the widely cursed Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller. It's been giving me no end of issues, with data rebuilds wanting to take years to complete (estimated speeds around 43 KB/s). I usually get impatient and end up losing some data doing reboots to clear up the hangs. I've picked up replacement Supermicro PCI Express x4 Low Profile SAS RAID Controller (AOC-SASLP-MV8) cards. I've searched around and I've seen folks say they are going to do the swaps and whatnot, but I haven't seen anyone outline the steps to help ensure success without losing data. I know to flash the cards to IT mode, but that's about all I've seen. My question is, what should I be aware of when doing this swap? Are there prerequisite steps I need to complete before I do the deed? Will my array be fine once I pop the new cards in, or are there configuration changes I need to perform? Thanks for reading. If I've missed topics on this, please link them.
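A minimal pre-swap sketch in shell, assuming a stock Unraid layout where the flash drive is mounted at /boot and the goal is simply to snapshot drive identities before and after the controller change; the file names under /tmp are only examples:

    # Record which serial numbers map to which device nodes before the swap,
    # and keep a copy of the flash configuration (disk assignments live there).
    ls -l /dev/disk/by-id/ > /tmp/disk-by-id-before.txt
    cp -r /boot/config /boot/config-backup-$(date +%Y%m%d)

    # After the new HBAs are installed, capture the mapping again and compare.
    ls -l /dev/disk/by-id/ > /tmp/disk-by-id-after.txt
    diff /tmp/disk-by-id-before.txt /tmp/disk-by-id-after.txt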
  7. root@server:~# ls -l /boot/config/pools/
     total 32
     -rw------- 1 root root 510 Apr 10 20:55 cache.cfg

     root@server:~# cat /boot/config/pools/cache.cfg
     diskFsType="btrfs"
     diskComment=""
     diskWarning=""
     diskCritical=""
     diskUUID="413d0ae9-06d1-4a97-80cd-34f008d70811"
     diskShareEnabled="yes"
     diskShareFloor="0"
     diskExport="e"
     diskFruit="no"
     diskSecurity="public"
     diskReadList=""
     diskWriteList=""
     diskVolsizelimit=""
     diskCaseSensitive="auto"
     diskExportNFS="-"
     diskExportNFSFsid="10"
     diskSecurityNFS="public"
     diskHostListNFS=""
     diskId="SanDisk_SDSSDH3_512G_202059801545"
     diskIdSlot="-"
     diskType="Cache"
     diskSpindownDelay="-1"
     diskSpinupGroup=""

     That's the contents of the directory and file. I figured what you recommended would be the fix, but like you, I'm not sure how that might impact things. I've been using this plugin for many moons and it's only after a move on 3/11 or 3/21 that it stopped cleaning out the cache. I don't recall it ever being an issue before then.
  8. I've found an issue with this plugin, at least on my setup, and I'm not sure where this would be configured. All my shares are lowercase, no mixed case, but with this plug-in enabled the mover fails to actually move anything because it's seemingly doing some sort of .title() operation and making the first letter uppercase.

     Apr 19 04:20:20 server root: mvlogger: Share Name Only: Personal
     Apr 19 04:20:20 server root: mvlogger: Cache Pool Name:
     Apr 19 04:20:20 server root: mvlogger: No shareCachePool entry found in config file, defaulting to cache
     Apr 19 04:20:20 server root: mvlogger: cache Threshold Pct:
     Apr 19 04:20:20 server root: mvlogger: OVERALL Threshold: 0
     Apr 19 04:20:20 server root: mvlogger: Share Path: /mnt/cache/Personal
     Apr 19 04:20:20 server root: mvlogger: Pool Pct Used: 92 %
     Apr 19 04:20:20 server root: mvlogger: DFTPCT LIMIT USED FOR SETTING: 0
     Apr 19 04:20:20 server root: mvlogger: Threshold Used: 0
     Apr 19 04:20:20 server root: mvlogger: Skipfiletypes string: find "/mnt/cache/Personal" -depth
     Apr 19 04:20:20 server root: mvlogger: Complete Mover Command: find "/mnt/cache/Personal" -depth | /usr/local/sbin/move -d 1
     Apr 19 04:20:20 server root: find: '/mnt/cache/Personal': No such file or directory

     However, the mounted share is called personal:

     root@server:/mnt/cache# ls -l /mnt/cache | grep -i person
     drwxrwxrwx 1 nobody users 30 Feb 28 11:17 personal/

     If I remove the plug-in, mover works fine, but with it installed, the mover fails to find and move any files.

     EDIT: I think I found where it's getting confused. The share name is personal, and it's configured as such in the UI (screenshot here), but under /boot/config/shares it's stored as Personal.cfg. It's one of my older shares, so perhaps the naming convention was different back then. I think the plugin should perhaps read what's in /mnt/cache rather than /boot/config/shares, if that's indeed what is happening.
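A rough sketch of the case-insensitive lookup being suggested, assuming the plugin only has the capitalised name from Personal.cfg to work with; the variable names here are purely illustrative:

    # Resolve the real directory under /mnt/cache regardless of case,
    # rather than trusting the capitalisation of the .cfg filename.
    cfg_share="Personal"
    real_share=$(find /mnt/cache -mindepth 1 -maxdepth 1 -type d -iname "$cfg_share" -printf '%f\n' | head -n 1)
    echo "config name: $cfg_share -> on-disk share: ${real_share:-not found}"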
  9. This happened a few weeks ago, and in my attempt to resolve it myself I ended up in quite the mess and lost a large amount of data. I'm trying not to have to go through that again. I had a disk go bad and I initiated a replacement. The new disk is in and it was writing all the contents as normal. I ignored it for 6 hours. I came back and now we have this:

     Total size: 4 TB
     Elapsed time: 5 minutes
     Current position: 343 GB (8.6 %)
     Estimated speed: 67.5 KB/sec
     Estimated finish: 635 days, 21 hours, 42 minutes

     It shouldn't take that long. I looked in the syslog and found:

     Oct 1 13:18:39 alucard kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
     Oct 1 13:18:39 alucard kernel: sas: trying to find task 0x00000000e2711c5b
     Oct 1 13:18:39 alucard kernel: sas: sas_scsi_find_task: aborting task 0x00000000e2711c5b
     Oct 1 13:18:39 alucard kernel: sas: sas_scsi_find_task: task 0x00000000e2711c5b is aborted
     Oct 1 13:18:39 alucard kernel: sas: sas_eh_handle_sas_errors: task 0x00000000e2711c5b is aborted
     Oct 1 13:18:39 alucard kernel: sas: ata20: end_device-10:3: cmd error handler
     Oct 1 13:18:39 alucard kernel: sas: ata17: end_device-10:0: dev error handler
     Oct 1 13:18:39 alucard kernel: sas: ata18: end_device-10:1: dev error handler
     Oct 1 13:18:39 alucard kernel: sas: ata19: end_device-10:2: dev error handler
     Oct 1 13:18:39 alucard kernel: sas: ata20: end_device-10:3: dev error handler
     Oct 1 13:18:39 alucard kernel: sas: ata25: end_device-10:4: dev error handler
     Oct 1 13:18:39 alucard kernel: ata20.00: exception Emask 0x0 SAct 0x200 SErr 0x0 action 0x6 frozen
     Oct 1 13:18:39 alucard kernel: sas: ata22: end_device-10:5: dev error handler
     Oct 1 13:18:39 alucard kernel: sas: ata23: end_device-10:6: dev error handler
     Oct 1 13:18:39 alucard kernel: ata20.00: failed command: READ FPDMA QUEUED
     Oct 1 13:18:39 alucard kernel: sas: ata24: end_device-10:7: dev error handler
     Oct 1 13:18:39 alucard kernel: ata20.00: cmd 60/00:00:e0:e2:ca/04:00:27:00:00/40 tag 9 ncq dma 524288 in
     Oct 1 13:18:39 alucard kernel: res 40/00:00:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
     Oct 1 13:18:39 alucard kernel: ata20.00: status: { DRDY }
     Oct 1 13:18:39 alucard kernel: ata20: hard resetting link
     Oct 1 13:18:39 alucard kernel: sas: sas_form_port: phy3 belongs to port3 already(1)!
     Oct 1 13:18:41 alucard kernel: drivers/scsi/mvsas/mv_sas.c 1434:mvs_I_T_nexus_reset for device[3]:rc= 0
     Oct 1 13:18:41 alucard kernel: ata20.00: configured for UDMA/133
     Oct 1 13:18:41 alucard kernel: ata20: EH complete
     Oct 1 13:18:41 alucard kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1

     The next line is me, 6 hours later, SSHing in to check on things. I know which drive seems to be causing the strife, but I also know that if I were to just remove it, it would get flagged as a failed drive and then I'd have two. I have 2 parity disks, so that's not so bad, but last time this happened it chained into Unraid believing I had 5 failed drives and I ended up with some data loss. Is there anything I can do, short of replacing the controller card, to kick this back off? What should the process be? I can cancel the parity check, but I'm not 100% sure if I should pull the drive that's holding things up or not. Suggestions welcomed. Diags attached. Thanks for reading. alucard-diagnostics-20201001-1910.zip
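A small sketch of one way to see which drive is barely moving while the rebuild crawls, using only /proc/diskstats so no extra tools are assumed; the 10-second sample window is arbitrary:

    # Sample the per-device I/O counters twice and compare; devices whose
    # counters barely change during the rebuild are the ones stalling it.
    grep ' sd' /proc/diskstats > /tmp/diskstats.before
    sleep 10
    grep ' sd' /proc/diskstats > /tmp/diskstats.after
    diff /tmp/diskstats.before /tmp/diskstats.after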
  10. An xfs_repair on this drive, md12 or sds, spits out about 400 pages of errors, but I can mount the drive manually and all the bits and bobs are there.
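A minimal sketch of capturing that output for review, assuming md12 is the correct device and the array is started in maintenance mode; -n keeps xfs_repair read-only, and the log path on the flash drive is only an example:

    # Dry-run repair (no modifications) and keep the full log for later reading.
    xfs_repair -n /dev/md12 2>&1 | tee /boot/xfs_repair-md12-dryrun.log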
  11. This is a weird one for me. I've had a terrible weekend as far as my Unraid box goes. I replaced one disk on Friday, and during the replacement another seems to have gone dead. It ended up doing the parity rebuild (2 parity drives) and said it would take something like 400 days. I don't have that kind of time. So I got that drive out of there and ended up rebuilding with 2 new drives. I know that's not optimal, but the struggles were real. Now, doing an ls -ltr /mnt/user/movies shows nothing after Aug of last year, so I mildly panic. I get to looking and I am getting this while trying to look for a directory I know existed:

     root@alucard:/mnt/user/movies# ls /mnt/user/movies/The_Dark*
     /bin/ls: cannot access '/mnt/user/movies/The_Dark*': Input/output error

     I know that was recently acquired. However, if I ls -ld on it directly...

     root@alucard:/mnt/user/movies# ls -ld /mnt/user/movies/The_Dark_End_Of_The_Street_2020_1080p_WEB-DL_H264_AC3-EVO/
     drwxr-xr-x 1 nobody users 83 Aug 13 06:18 /mnt/user/movies/The_Dark_End_Of_The_Street_2020_1080p_WEB-DL_H264_AC3-EVO//

     and

     root@alucard:/mnt/user/movies# ls -ld /mnt/user/movies/The_Dark_End_Of_The_Street_2020_1080p_WEB-DL_H264_AC3-EVO/The_Dark_End_Of_The_Street_2020_1080p_WEB-DL_H264_AC3-EVO.mkv
     -rw-r--r-- 1 nobody users 2627257403 Aug 11 07:14 /mnt/user/movies/The_Dark_End_Of_The_Street_2020_1080p_WEB-DL_H264_AC3-EVO/The_Dark_End_Of_The_Street_2020_1080p_WEB-DL_H264_AC3-EVO.mkv

     So they are there, somewhere, but none of my apps or a wildcard ls can see them. I'm doing disk checks right now, but I'm kind of lost on what to do next. Parity should have rebuilt all the things; there were several billion read errors on one drive, but that's fixed now (not replaced) and the array is green. Any suggestions? alucard-diagnostics-20200822-1931.zip
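A quick sketch for checking each data disk directly, bypassing the user-share layer that is returning the Input/output error; it assumes the standard /mnt/disk* mount points:

    # Look for the folder on every individual array disk; whichever disks
    # respond cleanly hold the data even if /mnt/user can't list it.
    for d in /mnt/disk*; do
        ls -d "$d"/movies/The_Dark_End_Of_The_Street_2020_1080p_WEB-DL_H264_AC3-EVO 2>/dev/null \
            && echo "  ^ found on $d"
    done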
  12. Sorry if it's poor form to hit a thread this old, but I'm seeing this too and can't find a file to edit to add pcie_aspm=off. Or is this a field in the BIOS somewhere? I have not yet looked there.
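For reference, on a stock Unraid flash drive the kernel options live on the append line of /boot/syslinux/syslinux.cfg (also editable from the webGUI under Main > Flash); a hedged sketch of what the edited boot entry might look like:

    # /boot/syslinux/syslinux.cfg (excerpt) - add pcie_aspm=off to the append line
    label Unraid OS
      menu default
      kernel /bzimage
      append pcie_aspm=off initrd=/bzroot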
  13. Here is my current build:

     description: Motherboard
     product: 870-G45 (MS-7599)
     vendor: MSI

     description: CPU
     version: AMD Phenom II X2 545 Processor
     size: 800MHz
     capacity: 3GHz
     clock: 200MHz

     description: System Memory
     size: 4GiB

     12 data disks + 1 cache drive. 4 of the disks are attached to the board; 8 are attached via a SAS/SATA controller (a model I can't find in lspci or lshw). This build has served me well for a LONG time, but in the last few months it has been tasked with running all the apps that another machine used to run remotely (Sickbeard, SAB, CouchPotato and Plex). This thing is ready to retire, but I find picking a motherboard for it really confusing in terms of making sure I get one that's compatible and all that. This machine needs only to run the apps mentioned above. I don't do any VMs and it's not a desktop replacement for me, so I just need something that'll hold a bunch of drives and run quietly for the next decade. What I'm looking for is a board/chip suggestion around:

       • Intel-based
       • Onboard NIC that works inherently with Unraid (I'm on the latest version as of 11/1/16)

     I'm not terribly concerned with the number of SATA ports, as I think I'm going to get 2 new PCI RAID controllers and just run it all from there. Looking at 2x AOC-SAS2LP-MV8 for a total of 16 drives. I have 3x Thermaltake MAX-3543 4x3s currently and will add another one of those if I can find a case to hold that many drives (suggestions accepted for that as well!). Another question: as I currently have these drives in cages and just want to transfer them over to the new rig, do I need to worry about cable placement or position on the adapter cards, or is Unraid set up now to just bring everything up in the proper order when I slap all those drives into the new board/controller card combo? Thanks for reading.
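Since the controller model isn't showing up by name in lspci or lshw, a short sketch for pulling the raw PCI IDs, which can be looked up manually even when the local device database has no entry; the 03:00.0 address is only a placeholder:

    # List storage controllers with numeric [vendor:device] IDs for manual lookup.
    lspci -nn | grep -iE 'sata|sas|raid|scsi'

    # More detail on one slot (substitute the real bus address from the line above).
    lspci -vnn -s 03:00.0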
  14. Thanks for the replies. I reckon I'll convert all my apps to docker containers. Thanks again.