Benson

Members
  • Content Count

    871
  • Joined

  • Last visited

  • Days Won

    2

Benson last won the day on July 8

Benson had the most liked content!

Community Reputation

72 Good

About Benson

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed

  1. If disk 3 is fine, then at least it still has its data and you won't suffer a total loss on it. If it is lost, the data will need to be rebuilt from the other disks. If you have a backup, things will be easier. The important thing is that the other disks end up readable and mountable. I am not sure whether an HPA will make a disk unmountable or not, so you may be best off following the others' suggestions. In general, you should keep the original state. (A quick way to check for an HPA is sketched below.)
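     If you want to confirm whether an HPA is actually set before changing anything, a minimal check from the console, assuming /dev/sdX stands in for the disk in question:
     # Differing "current" and "native" max sectors indicate an HPA is in place
     hdparm -N /dev/sdX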
  2. You mean HPA was originally enabled? Anyway, keep the original BIOS setting first. If you assume disk 3 is in a good state, then just perform a New Config and preserve all settings. Start the array and check for anything abnormal or any unmountable file system, run a non-correcting parity check, etc. Remember: set parity as valid, and do not format any disk. If anything is wrong, you can invalidate disk 3 again and rebuild it (before this step, all the other disks must be mountable and their content readable), and do not write anything to the array.
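     One way to confirm the other disks are healthy without changing anything is a read-only filesystem check; a minimal sketch, assuming XFS array disks and Unraid's /dev/mdX devices, run with the array started in maintenance mode (adjust the device number for each disk):
     # -n = no modify, report problems only; using /dev/mdX keeps parity in sync
     xfs_repair -n /dev/md1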
  3. Background: one of my LSI HBAs in a rack case died after I added disks; the reason was that airflow was seriously blocked once all the disks were populated, which was beyond my expectation. I tried increasing the fan speed and adding some air guides, which helped, but it still ran hot. Since I need it quiet and don't want to modify things too much, I haven't added any fans inside. Today I tried reversing the airflow direction, from back to front. It is really impressive: the CPU, add-on cards, PSU, and case are all at good temperatures now, but the hard disks have risen by about 6°C and hot air now comes out the front. Anyway, I will keep this setup. Since most equipment is designed for front-to-back airflow, this will conflict if they are placed together. Just sharing; comments welcome.
  4. In terms of speed, there is not much difference between BTRFS and XFS.
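     If you want to compare them on your own hardware, a rough sequential-write test is enough; a minimal sketch, assuming the two filesystems are mounted at the hypothetical paths /mnt/test_btrfs and /mnt/test_xfs:
     # 10 GiB sequential write; oflag=direct bypasses the RAM cache so you measure the disk
     dd if=/dev/zero of=/mnt/test_btrfs/testfile bs=1M count=10240 oflag=direct
     dd if=/dev/zero of=/mnt/test_xfs/testfile bs=1M count=10240 oflag=direct
     rm /mnt/test_btrfs/testfile /mnt/test_xfs/testfile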
  5. If you have local console/GUI access, you can delete the network.cfg file to reset the network settings. Or post your diagnostics for further checking. (A sketch of the reset is below.)
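     A minimal sketch of the reset, assuming the file is in the usual Unraid location on the flash drive:
     # Remove the saved network configuration; Unraid falls back to defaults (DHCP) on the next boot
     rm /boot/config/network.cfg
     reboot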
  6. The error is about bond1; sometimes a change to the network settings can cause this error, and rebooting again should fix it. For the RX drops count, please compare it to the RX packets count first; a small ratio is not a problem.
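     A quick way to compare the two counters; a minimal sketch, assuming the interface is bond1:
     # Full per-interface statistics
     ip -s link show bond1
     # Drop ratio computed from the kernel counters
     awk -v d=$(cat /sys/class/net/bond1/statistics/rx_dropped) -v p=$(cat /sys/class/net/bond1/statistics/rx_packets) 'BEGIN { printf "RX drop ratio: %.6f\n", d/p }'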
  7. A cache pool can be a single disk or multiple disks, in different RAID levels or filesystems. I just misunderstood that you were running dual SSDs.
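     For reference, on a BTRFS pool the RAID profile can be changed in place with a balance; a minimal sketch, assuming the pool is mounted at /mnt/cache and RAID1 is the target profile:
     # Show the current data/metadata profiles
     btrfs fi df /mnt/cache
     # Convert both data and metadata to RAID1
     btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache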
  8. Your cache pool is BTRFS, so I assume it is RAID1. Could you show the output of the commands below?
     - btrfs fi show /mnt/cache
     - btrfs fi usage /mnt/cache
     - btrfs device stats /dev/sd[x]1 (cache disk device path)
     But if you can, I would also like you to try a single-SSD cache pool, or downgrade to 6.6.x, and check whether there is any difference. A long time ago I tried a single SSD in the cache pool, and dual SSDs in RAID0; write speed was terrible, but that was expected because those SSDs were really poor. Later I also tried NVMe; speed was a lot better, in the ~100-300 MB/s range or higher with multiple writes, but never consistent. Unraid has not performed well with it. Now I don't use any SSDs, just a 10-disk RAID0; speed is over 1 GB/s and quite consistent. For your case, please also check:
     - whether the SAS disk write cache is enabled (please search the old posts)
     - the network MTU setting (I always use the standard 1500 on all equipment for best compatibility)
     - that turbo write is enabled, not set to auto
     (A quick way to check the disk write cache and MTU is sketched below.)
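     A couple of quick checks for those last points; a minimal sketch, assuming sdparm is available and that /dev/sdX and eth0 are placeholders for your device and interface:
     # SAS/SCSI disk: WCE=1 means the drive's write cache is enabled
     sdparm --get=WCE /dev/sdX
     # SATA disk: same information via hdparm
     hdparm -W /dev/sdX
     # Confirm the interface MTU
     ip link show eth0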
  9. Do you mean the HBA won't work in the PCH x16 slot (electrically x4)? Thanks.
  10. All of the above may not solve the Plex problem; as far as I know, people usually set the appdata path to the cache pool or to an Unassigned Devices disk on suitable hardware. I mainly focus on how to get the best performance for sequential R/W, data transfer, and network transfer.
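      For reference, a minimal sketch of what that appdata mapping can look like when running the container by hand (the paths and image name are just examples; in the Unraid GUI this is the /config path mapping):
      # Keep Plex's /config (appdata) on the cache pool instead of the array
      docker run -d --name plex \
        -v /mnt/cache/appdata/plex:/config \
        -v /mnt/user/media:/data \
        plexinc/pms-docker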
  11. This is almost the same as asking why a CPU has a cache, why an HDD has a cache, and why an SSD may or may not have one. The Unraid disk array is not high-performance storage; a cache pool or UD is much better, and that is why they exist.
  12. I know you said you don't use a cache pool, but I am talking about the RAM cache, and you have 128GB. If the disk array's write performance is lower than the incoming data rate, then a lot of data will sit in memory waiting to be written to the array. That's why you see the job finish while the R/W is still ongoing. If you have md_write_method set to AUTO, then you should try setting it to ON and see the difference. You should experiment with it; there is no absolute value, and making it too large is also of no use. Anyway, I set mine to 10x the current value.
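      If you want to see how much written data is still sitting in RAM waiting to be flushed, a quick way to watch it (plain Linux, nothing Unraid-specific):
      # Dirty = written but not yet flushed to disk; Writeback = being flushed right now
      watch -n 2 "grep -E 'Dirty|Writeback' /proc/meminfo"
      # Kernel thresholds that control how much dirty data may accumulate
      sysctl vm.dirty_ratio vm.dirty_background_ratio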