Posts posted by mproberts

  1. Quick question - as my array fills, I understand it fills one disk to a point, then the next disk, and so on.  As I near the first 80% warning, will the disk at/over 80% continue to fill, or will it stop at some percentage (90%?) and start on the next disk?

     

    I.e., will I get an 80% warning on disk 1 below, then Unraid starts filling disk 2 until 80%, then disk 3?  Trying to maximize my 7.5TB of free space while keeping a safe buffer.

     

     

    Screenshot 2023-05-08 at 10.36.09 AM.png
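    For context, this is roughly how I understand the default high-water allocation method to pick a disk.  The script below is just my own sketch with made-up free-space figures (not my actual array), so treat it as an illustration rather than what Unraid literally runs:

    #!/bin/bash
    # Sketch of the "high-water" idea as I understand it: the mark starts at half
    # the largest data disk, writes go to the first disk with free space above the
    # mark, and the mark halves when no disk qualifies.  Values are hypothetical GB.
    free=(500 3000 5800 5800)      # made-up free space per data disk
    largest=6000                   # size of the largest data disk
    mark=$(( largest / 2 ))        # high-water mark starts at half the largest disk

    while true; do
      for i in "${!free[@]}"; do
        if (( free[i] > mark )); then
          echo "next write goes to disk $((i+1)) (free=${free[i]} > mark=${mark})"
          exit 0
        fi
      done
      mark=$(( mark / 2 ))         # no disk is above the mark, so halve it
    done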

  2. I'm using a Dell T710 for Unraid 6.11.5.  This has 8 internal slots for drives and I'm currently using two of those for the Cache drives.  What are my options to move the Cache drives?  I'd like to use all 8 slots for the array to expand my overall storage.

     

    I am using a Dell H310 HBA card in IT Mode, with both channels used for the drive cages.  I also have an unused eSATA card installed.

     

    Could I use the eSATA card with an external enclosure to house the SSDs (either as a new pool, or via Unassigned Devices)?

     

    Current Config:

    • Dell T710 II - Two x5690 Xeon CPU’s (12 cores, 3.47 Ghz) 
    • 96GB DDR3 Multi-bit ECC RAM
    • Dual 1100 watt PS
    • Nvidia Quadro P2000 video card for transcoding (love the T710 for this easy install as opposed to the R710)
    • Dell H310 HBA card – IT Mode
    • (6) Hitachi HUS726060AL5210 6TB SAS Drives (30TB Array)
    • (2) Micron 5100 MTFD - Enterprise 980GB SSD Drives (Cache)
    • Dell / Intel XYT17 / X520-DA2 10GB FH Network Adapter (2 ports) running as bonded, active-backup

     

    Thoughts and advice much appreciated.

     

    Screenshot 2023-04-17 at 9.01.41 AM.png

  3. 1 hour ago, KluthR said:

    Could you mount this same target again with a different name with no spaces in it? It should make no difference, but better safe than sorry.

     

    That did it!

     

    I'd thought of the spaces in the Windows directory name earlier, but assumed it was OK since the OS (Unraid and Krusader) could see and browse it.  Probably a best practice to follow anyway (no spaces) when mixing OSes.

     

    Thank you!

     

    Screenshot 2023-04-03 at 3.29.35 PM.png

    Screenshot 2023-04-03 at 3.29.53 PM.png
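    To illustrate the spaces issue for anyone who finds this later - this is just a generic shell example using my mount path, not the plugin's actual command, but it shows why a space-free mount name is the safer choice:

    # Target path containing spaces
    TARGET="/mnt/remotes/BACKUP_UnRaid OS Backup"

    # Unquoted expansion splits the path into three words, so the command looks
    # for '/mnt/remotes/BACKUP_UnRaid', 'OS' and 'Backup' as separate arguments:
    ls -d $TARGET

    # Quoting keeps it as a single argument and works as expected:
    ls -d "$TARGET"

    # A space-free mount name (renamed here purely for illustration) sidesteps
    # the quoting question entirely:
    ls -d "/mnt/remotes/BACKUP_UnRaidOSBackup"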

  4.  

     

    root@Wilhelmina:~# df -a
    Filesystem                  1K-blocks        Used   Available Use% Mounted on
    rootfs                       49391344     2174116    47217228   5% /
    proc                                0           0           0    - /proc
    sysfs                               0           0           0    - /sys
    tmpfs                           32768        1192       31576   4% /run
    /dev/sda1                    15248800      965472    14283328   7% /boot
    /dev/loop0                          -           -           -    - /lib/firmware
    overlay                      49391344     2174116    47217228   5% /lib/firmware
    /dev/loop1                          -           -           -    - /lib/modules
    overlay                      49391344     2174116    47217228   5% /lib/modules
    hugetlbfs                           0           0           0    - /hugetlbfs
    devtmpfs                         8192           0        8192   0% /dev
    devpts                              0           0           0    - /dev/pts
    tmpfs                        49474148           0    49474148   0% /dev/shm
    fusectl                             0           0           0    - /sys/fs/fuse/connections
    cgroup_root                      8192           0        8192   0% /sys/fs/cgroup
    cpuset                              0           0           0    - /sys/fs/cgroup/cpuset
    cpu                                 0           0           0    - /sys/fs/cgroup/cpu
    cpuacct                             0           0           0    - /sys/fs/cgroup/cpuacct
    blkio                               0           0           0    - /sys/fs/cgroup/blkio
    memory                              0           0           0    - /sys/fs/cgroup/memory
    devices                             0           0           0    - /sys/fs/cgroup/devices
    freezer                             0           0           0    - /sys/fs/cgroup/freezer
    net_cls                             0           0           0    - /sys/fs/cgroup/net_cls
    perf_event                          0           0           0    - /sys/fs/cgroup/perf_event
    net_prio                            0           0           0    - /sys/fs/cgroup/net_prio
    hugetlb                             0           0           0    - /sys/fs/cgroup/hugetlb
    pids                                0           0           0    - /sys/fs/cgroup/pids
    tmpfs                          131072        2240      128832   2% /var/log
    cgroup                              0           0           0    - /sys/fs/cgroup/elogind
    rootfs                       49391344     2174116    47217228   5% /mnt
    tmpfs                            1024           0        1024   0% /mnt/disks
    tmpfs                            1024        1024           0 100% /mnt/remotes
    tmpfs                            1024           0        1024   0% /mnt/addons
    tmpfs                            1024           0        1024   0% /mnt/rootshare
    nfsd                                0           0           0    - /proc/fs/nfs
    nfsd                                0           0           0    - /proc/fs/nfsd
    /dev/md1                   5858435620  4394856084  1463579536  76% /mnt/disk1
    /dev/md2                   5858435620  4394282504  1464153116  76% /mnt/disk2
    /dev/md3                   5858435620  4396368904  1462066716  76% /mnt/disk3
    /dev/md4                   5858435620  4395751364  1462684256  76% /mnt/disk4
    /dev/md5                   5858435620  3561133184  2297302436  61% /mnt/disk5
    /dev/sdb1                  1875382960   467431776  1406625568  25% /mnt/cache
    shfs                      29292178100 21142392040  8149786060  73% /mnt/user0
    shfs                      29292178100 21142392040  8149786060  73% /mnt/user
    /dev/loop2                  104857600    16292992    87988752  16% /var/lib/docker
    /dev/loop2                  104857600    16292992    87988752  16% /var/lib/docker/btrfs
    /dev/loop3                    1048576        4164      926140   1% /etc/libvirt
    nsfs                                0           0           0    - /run/docker/netns/default
    //BACKUP/UnRaid OS Backup 37043960800 20978234480 16065726320  57% /mnt/remotes/BACKUP_UnRaid OS Backup
    nsfs                                0           0           0    - /run/docker/netns/bb81da434bbc
    nsfs                                0           0           0    - /run/docker/netns/65b742e4bef7
    nsfs                                0           0           0    - /run/docker/netns/6a03bf945432
    nsfs                                0           0           0    - /run/docker/netns/22334e7d40a0
    root@Wilhelmina:~# 
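    In case it helps anyone reading along, these are generic checks to confirm the backup target really is the SMB mount and not the small /mnt/remotes tmpfs underneath it (commands assume the share name from my setup):

    # Confirm the target directory is an actual mount point (CIFS), not just a
    # folder sitting on the 1MB tmpfs at /mnt/remotes:
    mountpoint "/mnt/remotes/BACKUP_UnRaid OS Backup"
    findmnt -T "/mnt/remotes/BACKUP_UnRaid OS Backup"

    # Check free space as reported by the target itself:
    df -h "/mnt/remotes/BACKUP_UnRaid OS Backup"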

  5. Target directory from Unraid is below.  I tried target configs with both:

    /mnt/remotes/BACKUP_UnRaid OS Backup/

    and 

    /mnt/remotes/BACKUP_UnRaid OS Backup/Appdata/

     

    Aside from my flashbackup file and the .DS_Store file (from my Mac touching the directory), there are no other files in the target directory.

    Screenshot 2023-04-03 at 10.22.35 AM.png

  6. Trying to use CA Appdata 2.5 with /mnt/remotes as the destination.

     

    I am getting the error: CA_backup.tar: Cannot write: No space left on device.

     

    My remote target is an SMB share that is properly set up and working (I can browse and write to it from Plex), with 16TB free.

     

    Any thoughts?

     

     

    Screenshot 2023-04-03 at 9.11.16 AM.png

    Screenshot 2023-04-03 at 9.13.57 AM.png

    Screenshot 2023-04-03 at 9.16.13 AM.png

  7. Thanks.  Is it a virtual RAID 1 on top of a RAID 0 formatted/pooled set of drives?  That's where my confusion lies: if this is some sort of virtual RAID 1, it's not clear how it would provide RAID 1 protection (surviving one drive failure).

     

    If my two drives are wholly a RAID 0 pool, how do I end up with a (reported) RAID 1 for metadata and system?

     

     

     

    Screen Shot 2021-06-13 at 8.16.10 AM.png
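    For anyone puzzling over the same readout, these are the generic btrfs commands to see how each allocation type is laid out on the pool (assuming the pool is mounted at /mnt/cache):

    # Show the profile used for data, metadata and system chunks on the pool:
    btrfs filesystem df /mnt/cache

    # A fuller per-device view, including unallocated space:
    btrfs filesystem usage /mnt/cache

    If I'm reading the btrfs documentation right, metadata and system default to raid1 on a multi-device pool even when data is raid0, and a balance would need explicit -mconvert/-sconvert filters to change that (at the cost of metadata redundancy) - but I'd appreciate confirmation.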

  8. Quick question - I recently replaced the two SAS drives I was using for my cache with SSDs.  As these are enterprise SSDs, I went with RAID 0 (I know, still a risk).  Two 960GB drives in RAID 0.

     

    I followed the process of moving the original cache data to the array, replaced the original drives with the new ones, configured RAID 0 via the Balance pulldown, then kicked off the move back to the cache.  Everything went smoothly and works fine.

     

    Question: why does the btrfs filesystem for my cache show RAID 1 for System and Metadata?  I did run Perform Full Balance after the data was moved back as well.

     

    Screen Shot 2021-06-12 at 11.47.07 AM.png

     

    Screen Shot 2021-06-12 at 11.46.32 AM.png

     

    Screen Shot 2021-06-12 at 11.47.57 AM.png

     

  9. I have a recently built Unraid server (see specs below) and want to make a change to the cache drives.  I built the cache with two 2TB 7.2k SAS drives as a pool, which is entirely too slow... I'm also finding I don't really need 2TB of cache space.  My primary uses for this server are Plex and my 3CX phone system.

     

    I'd like to replace the cache drives with two pooled 1TB SSDs.  Questions and possible issues I need clarity on:

     

    1. What is the preferred method to replace cache drives with smaller capacity replacements?
    2. The 8 drive slots on my T710 are all occupied - two 2TB SAS cache drives and six 6TB SAS drives (no room/slots/channels to add more SAS/SATA drives).  Since the drives are pooled, I understand I could pull one and rebuild on a new drive, but can this be done on a smaller drive?
    3. If migrating/rebuilding to smaller drives is risky or overly complicated, I assume I can follow other documented processes to replace with (much more expensive) 2TB SSD drives one at a time?
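    Before settling on the 1TB size I plan to sanity-check actual cache usage with something like the following (generic btrfs/coreutils commands, assuming the pool is mounted at /mnt/cache):

    # How much of the current cache pool is actually allocated and used:
    btrfs filesystem usage /mnt/cache

    # Per-directory breakdown of what is sitting on the cache:
    du -sh /mnt/cache/*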

     

    Config:

    • Unraid 6.9.2
    • Dell T710 II w/ two x5690 CPU’s
    • 96GB RAM
    • Dual 1100 watt PS
    • Nvidia Quadro P2000 video card for transcoding (love the T710 for this easy install as opposed to the R710)
    • Dell H310 – IT Mode
    • (6) Hitachi HUS726060AL5210 6TB SAS Drives
    • (2) Hitachi HUS723020ALS640 2TB 6G SAS Drives (Cache)
    • Dell / Intel XYT17 / X520-DA2 10GB FH Network Adapter (not configured)
    • 4 network ports on T710 running as bonded, active-backup, 1Gb
    • Dedicated APC 1500va UPS

     

    Thanks all!

  10. Thank you.

     

    Next question: when I have the SFP+ ports connected, what is the procedure to make them active on Unraid, and then to disable the copper interfaces?  eth0 is currently my primary interface, with eth1, eth2 and eth3 bonded (bond0) as active-backup.  eth4 is my dual-port fiber NIC.  While I can work in the CLI, my concern is the order of the steps, so maybe I can do the changes from the web interface?

    • Obtain connectivity to the switch via eth4
      • Use both fiber ports on card and switch ports?
    • Validate connectivity (how?)
    • Disable the bonded interface?
      • Or keep one as a failover?
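    On the "validate connectivity" point, these are the generic checks I have in mind once the fiber is patched in (interface name assumed to be eth4, addresses are placeholders, and iperf3 only if it's installed):

    # Confirm the SFP+ link is up and negotiated at 10Gb/s:
    ethtool eth4 | grep -E 'Speed|Link detected'

    # Basic reachability test toward the switch/gateway (placeholder address):
    ping -c 4 192.168.1.1

    # Optional throughput test against another host running an iperf3 server:
    iperf3 -c 192.168.1.50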

    Thanks again.

  11. Let me start my question with a basic 'will it work' scenario, then I expect to have more questions...

     

    Running Unraid 6.9.2 on a Dell T710 with four 1Gb interfaces bonded as active-backup.  I also have an X520-DA2 10Gb FH network adapter (2 ports) installed, currently in a down state (fiber cables not connected yet).  Primary usage is Plex.

     

    I have a UniFi US-24-500W switch with two unused 10G SFP+ ports that I believe are normally used as uplinks to other switches/infrastructure.  Can I use these to connect the Unraid server's fiber interfaces to the switch's SFP+ ports and serve all the other hosts on the switch?  My goal is to have the two 10G links between the Unraid server and the UniFi switch serve all my clients, and potentially do away with my four 1G Unraid links.  This is all on one private network at this point.

     

    Thanks

     

     
