Maenda

Members · Posts: 12 · Joined · Last visited
Maenda's Achievements: Noob (1/14) · Reputation: 0

  1. Hi all, first of all I'd like to say I like Unraid a lot. I have had it in production for quite a while now without any issues, but I need some advice about two things. I'm sure this has been discussed before, but I wasn't able to find the right answer. I have an active pool with 12 disks, all formatted with the Btrfs filesystem. This array cannot be down for long. As the I/O load is high on some VMs when transferring lots of files, I was wondering whether Btrfs is a good idea here, or whether it is better to reformat to XFS and get better I/O throughput? If XFS is better, how can I reformat the disks in a way that the array keeps working and the data stays safe? Any good solution for this? (One possible per-disk approach is sketched after this list.) Thanks
  2. Did you find a good solution for this in the meantime? I'm struggling with this too.
  3. So the best option would be to install an array card and get rid of the RAID adapter. The Display solution does not work.
  4. OK, is it correct that the PRECLEAR script does recognize them but does not add them to the array when finished?
  5. Sure, but most OSes DO work with it in JBOD or PASS THROUGH mode. Only Unraid does not recognize them.
  6. Today I added 3 new disks to the enclosure, but I am not able to clear them or assign them. They do not show up. An lsscsi does show them, though, and in FreeNAS they work perfectly. What is wrong and how can I fix it? (Some generic checks are sketched after this list.)
     lsscsi
     [0:0:0:0]   disk     SanDisk'  Cruzer Fit        1.00  /dev/sda
     [1:0:0:0]   disk     ATP       ATP IG eUSB SSD   1100  /dev/sdb
     [6:0:1:0]   disk     WDC       WD2003FYYS-02W0B  R001  /dev/sdc
     [6:0:1:1]   disk     WDC       WD2003FYYS-02W0B  R001  /dev/sdd
     [6:0:1:2]   disk     WDC       WD2003FYYS-02W0B  R001  /dev/sde
     [6:0:1:3]   disk     WDC       WD2003FYYS-02W0B  R001  /dev/sdf
     [6:0:1:4]   disk     Seagate   ST2000DM008-2FR1  R001  /dev/sdg
     [6:0:1:5]   disk     Seagate   ST2000DM001-9YN1  R001  /dev/sdh
     [6:0:1:6]   disk     Seagate   ST2000DM001-9YN1  R001  /dev/sdi
     [6:0:1:7]   disk     Seagate   ST2000DM001-9YN1  R001  /dev/sdj
     [6:0:3:6]   disk     KINGSTON  SA400S37120G      R001  /dev/sdk
     [6:0:3:7]   disk     KINGSTON  SA400S37120G      R001  /dev/sdl
     [6:0:16:0]  process  Areca     RAID controller   R001  -
     storagepool-diagnostics-20201008-1307.zip
  7. Hi all, just a question. I have a storage machine with an Areca RAID controller. I pass the disks through with a normal PASS THROUGH, but I can also set the cache mode there. What do you recommend as the cache setting? I have the options WRITE BACK and READ THROUGH (if I remember correctly). Or is it better to set the disks as JBOD, and what about performance then?
  8. Sorry, well, I did indeed remove most of the stored items because I want to investigate before we put more on it and it happens again. After rebooting Unraid the shares were back and the disk started rebuilding. What I'm worried about is why the shares were unreachable while the disk was inactive. I really get the feeling the cache disk is the culprit, as the data is written to that disk first.
  9. Thanks for the update. Can the issues be caused by the cache disk? My understanding is that if a disk fails, the array should keep functioning normally without any issues until it is corrected?
  10. Thanks for the reply. The file is attached: ur1-meppel-diagnostics-20200910-1430.zip
  11. You are going to hate me, but I already rebooted, haha. Do the diagnostics still make sense then?
  12. Yesterday one of the disks in our array was marked as disabled. I'm not sure why, as the S.M.A.R.T. report was OK. The disk functioned as emulated, but I lost all shares to the Proxmox machines and they crashed. Today the system started rebuilding the disk. Why am I losing the shares when a disk fails? Is this normal behavior?
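
A rough sketch of the per-disk Btrfs-to-XFS conversion asked about in post 1. This is an outline under assumptions, not a tested procedure: the disk numbers (disk5 as the disk being converted, disk6 as a disk with enough free space) are placeholders, and the format step itself is done from the Unraid GUI rather than from the shell.

      # 1. Copy everything off the disk to be converted onto another data
      #    disk that has room (disk numbers are examples).
      rsync -avPX /mnt/disk5/ /mnt/disk6/

      # 2. Verify the copy before touching the source (dry run with
      #    checksums; it should report no differences).
      rsync -avnc /mnt/disk5/ /mnt/disk6/

      # 3. In the Unraid GUI: stop the array, change disk5's file system
      #    from btrfs to xfs, start the array and format the disk.

      # 4. Copy the data back onto the freshly formatted XFS disk.
      rsync -avPX /mnt/disk6/ /mnt/disk5/

      # 5. Repeat for the next disk. The array only stops briefly for the
      #    file-system change, so the shares stay available most of the time.

Checking that the receiving disk has enough free space before each cycle is the main precondition for this approach.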
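
For the disks in post 6 that lsscsi sees but Unraid does not: Unraid generally identifies array devices by their serial numbers, so one common cause is a RAID controller that does not pass the drive identity through. The commands below are generic checks rather than a known fix, and the device name /dev/sdg is only an example taken from the lsscsi output above.

      # Do the new disks show up as block devices with a model and serial?
      lsblk -o NAME,SIZE,TYPE,MODEL,SERIAL

      # Are they registered with the kernel at all?
      cat /proc/partitions

      # Does identify/SMART data come through the controller for one of them?
      smartctl -i /dev/sdg

      # Any controller or detection errors logged?
      dmesg | grep -i areca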