Everything posted by w5lee

  1. Looks like the modification is no longer needed. Here is my current go file, and all is working:

     root@Tower:/boot/config# cat go
     #!/bin/bash
     # Copy and apply udev rules for white label drives
     #cp /boot/config/rules.d/60-persistent-storage.rules /etc/udev/rules.d/
     #chmod 644 /etc/udev/rules.d/60-persistent-storage.rules
     #udevadm control --reload-rules
     #udevadm trigger --attr-match=subsystem=block
     # Start the Management Utility
     /usr/local/sbin/emhttp &

     Here is what my rules.d dir looks like:

     root@Tower:/etc/udev/rules.d# ls
     70-persistent-net.rules  99_persistent_unassigned.rules

     I only have one white label drive left in the system, and it is a 14TB one, so newer than the other ones I had. So I am not sure whether the new persistent-storage rules now discover them properly on their own, or whether the drive is exposing its serial number. Anyway, thanks for remembering the thread and helping me out.
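     For anyone else checking their own drive, something like this should show whether the serial is now exposed (a sketch; replace /dev/sdX with your white label drive's device name):

     # If ID_SERIAL / ID_SERIAL_SHORT show a real serial instead of 00000000,
     # the stock persistent-storage rules can identify the drive on their own
     # and the custom rule is no longer needed.
     udevadm info -q property -n /dev/sdX | grep -E 'ID_SERIAL|ID_WWN'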
  2. Here is my go file:

     root@Tower:/boot/config# cat go
     #!/bin/bash
     # Copy and apply udev rules for white label drives
     cp /boot/config/rules.d/60-persistent-storage.rules /etc/udev/rules.d/
     chmod 644 /etc/udev/rules.d/60-persistent-storage.rules
     udevadm control --reload-rules
     udevadm trigger --attr-match=subsystem=block
     # Start the Management Utility
     /usr/local/sbin/emhttp &
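     After a reboot, something like this should confirm the copied rule actually created the by-id symlinks (a sketch; the grep pattern assumes the $env{ID_WWN} part of the symlink name renders as 0x followed by hex digits):

     # List by-id symlinks whose names carry a WWN-based suffix
     ls -l /dev/disk/by-id/ | grep -E -- '-0x[0-9a-f]+'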
  3. May I ask for a bit more guidance? I have a file:

     root@Tower:/boot/config/rules.d# cat 60-whitelabel.rules
     ACTION=="remove", GOTO="whitelabel_end"
     ENV{UDEV_DISABLE_whitelabel_RULES_FLAG}=="1", GOTO="whitelabel_end"
     SUBSYSTEM!="block", GOTO="whitelabel_end"
     KERNEL!="sd*|sr*|cciss*", GOTO="whitelabel_end"

     # for partitions import parent information
     ENV{DEVTYPE}=="partition", IMPORT{parent}="ID_*"

     # SCSI devices
     KERNEL=="sd*|sr*|cciss*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL_SHORT}=="00000000", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_MODEL}-$env{ID_WWN}"
     KERNEL=="sd*|cciss*", ENV{DEVTYPE}=="partition", ENV{ID_SERIAL_SHORT}=="00000000", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_MODEL}-$env{ID_WWN}-part%n"

     LABEL="whitelabel_end"

     root@Tower:/boot/config/rules.d# ls
     60-persistent-storage.rules  60-whitelabel.rules

     Can I make things work with these files, or do I need to downgrade again to retrieve some info?
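     In case it helps, I understand a dry run like this can show whether the rule would fire, without rebooting (a sketch; sdb is a placeholder for the white label drive):

     # Simulate udev processing for one disk and show the symlinks it would
     # create; udevadm test does not change anything by itself.
     udevadm test /sys/block/sdb 2>&1 | grep -i 'by-id'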
  4. Think this is a bingo. If I remember correctly, I had some white label drives that I had to do something special to get going.
  5. "Post the output of udevadm info -q property -n /dev/nvme0n1 and udevadm info -q property -n /dev/nvme1n1 from both v6.9 and ..."

     6.9.2:

     root@Tower:~# udevadm info -q property -n /dev/nvme0n1
     DEVLINKS=/dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_2TB_S59CNM0R402382P
     DEVNAME=/dev/nvme0n1
     DEVPATH=/devices/pci0000:00/0000:00:02.0/0000:01:00.0/nvme/nvme0/nvme0n1
     DEVTYPE=disk
     ID_MODEL=Samsung_SSD_970_EVO_Plus_2TB
     ID_MODEL_ENC=Samsung\x20SSD\x20970\x20EVO\x20Plus\x202TB\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
     ID_PART_TABLE_TYPE=dos
     ID_REVISION=2B2QEXM7
     ID_SERIAL=Samsung_SSD_970_EVO_Plus_2TB_S59CNM0R402382P
     ID_SERIAL_SHORT=S59CNM0R402382P
     ID_TYPE=nvme
     MAJOR=259
     MINOR=0
     SUBSYSTEM=block
     USEC_INITIALIZED=21286137

     root@Tower:~# udevadm info -q property -n /dev/nvme1n1
     DEVLINKS=/dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_2TB_S59CNM0R410089P
     DEVNAME=/dev/nvme1n1
     DEVPATH=/devices/pci0000:00/0000:00:03.0/0000:02:00.0/nvme/nvme1/nvme1n1
     DEVTYPE=disk
     ID_MODEL=Samsung_SSD_970_EVO_Plus_2TB
     ID_MODEL_ENC=Samsung\x20SSD\x20970\x20EVO\x20Plus\x202TB\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
     ID_PART_TABLE_TYPE=dos
     ID_REVISION=2B2QEXM7
     ID_SERIAL=Samsung_SSD_970_EVO_Plus_2TB_S59CNM0R410089P
     ID_SERIAL_SHORT=S59CNM0R410089P
     ID_TYPE=nvme
     MAJOR=259
     MINOR=1
     SUBSYSTEM=block
     USEC_INITIALIZED=20525164

     6.11.5:

     root@Tower:~# udevadm info -q property -n /dev/nvme0n1
     DEVNAME=/dev/nvme0n1
     DEVPATH=/devices/pci0000:00/0000:00:02.0/0000:01:00.0/nvme/nvme0/nvme0n1
     DEVTYPE=disk
     DISKSEQ=19
     ID_PART_TABLE_TYPE=dos
     MAJOR=259
     MINOR=1
     SUBSYSTEM=block
     USEC_INITIALIZED=54420603

     root@Tower:~# udevadm info -q property -n /dev/nvme1n1
     DEVNAME=/dev/nvme1n1
     DEVPATH=/devices/pci0000:00/0000:00:03.0/0000:02:00.0/nvme/nvme1/nvme1n1
     DEVTYPE=disk
     DISKSEQ=18
     ID_PART_TABLE_TYPE=dos
     MAJOR=259
     MINOR=0
     SUBSYSTEM=block
     USEC_INITIALIZED=54420424
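     A quick way to pull the same comparison for all NVMe namespaces at once (a sketch; assumes device names of the form /dev/nvmeXn1):

     # Dump the identity-related properties for every NVMe namespace; on
     # 6.11.5 the ID_MODEL/ID_SERIAL/DEVLINKS lines are missing for the
     # affected drives.
     for d in /dev/nvme[0-9]n1; do
         echo "== $d =="
         udevadm info -q property -n "$d" | grep -E 'ID_MODEL|ID_SERIAL|DEVLINKS'
     done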
  6. Hi, I have tried to upgrade to 6.11.5 from 6.9.2. When I do, my array will not start and I am missing 2 of my 5 cache drives. They are both 2TB NVMe SSDs in PCIe slots via adapters. When I downgrade, they show back up. Tried a couple of times, same result. tower-diagnostics-20221211-1541.zip
  7. I followed the directions:

     Shut down the array
     Disconnect the drive I wanted to replace
     Connect the replacement drive, then refresh the GUI
     Assign the new drive to the cache pool
     Start the array
     Format the drive

     Problem is, now I have an empty cache pool and all of my data is gone! I backed up my data, so I was able to recreate my VMs and everything, but I would like to know what I did wrong, so that if I do get a failed cache drive I can replace it without losing my data.