Cheesemeister

Posts posted by Cheesemeister

  1. If anyone comes across this, this is how it was fixed: the OpenRGB Docker template is written wrong and needed to be edited. Have both OpenRGB and ich777's patch installed, then edit the OpenRGB container, go to advanced mode, and scroll to the bottom where you see USER ID and Group ID; they need to be changed to the values shown here:

    [screenshots of the USER ID and Group ID template settings]

     

    Once I did that, everything worked!
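
    In case it helps anyone, one way to sanity-check which numeric IDs those fields need to line up with (just a sketch, not part of the original fix) is to look at who owns the i2c device nodes on the host:

    # Show the numeric UID/GID that own the i2c device nodes; the container's
    # USER ID / Group ID need access to these for OpenRGB to reach the SMBus
    ls -ln /dev/i2c-*

    # Show your current numeric user and group IDs for comparison
    id -u
    id -g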

     

  2. I know this has been asked before, but nothing seems to be working for me.

     

    I've installed OpenRGB along with ich777's patch. When I run the OpenRGB Docker I get this:

    [screenshot of the error from the OpenRGB container]

     

    I've tried running "modprobe i2c-dev" and "modprobe i2c-piix4", but neither does anything.

     

    When I run "ls /dev/i2*" I get "/dev/i2c-0" and "/dev/i2c-1".
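
    For reference, this is roughly the sequence involved on the host side (just a sketch; i2c-piix4 is the SMBus driver AMD chipsets normally use, so that part is an assumption about this board):

    # Load the generic i2c character-device driver and the AMD/PIIX4 SMBus driver
    modprobe i2c-dev
    modprobe i2c-piix4

    # Confirm the modules are loaded and the device nodes exist
    lsmod | grep i2c
    ls -l /dev/i2c-*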

    I then passed these devices through to the Docker container, along with the 2 variable arguments I found here. Still no luck.

    [screenshot of the Docker template with the i2c devices and variables added]
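
    For anyone comparing notes, the device passthrough part is roughly equivalent to this (a sketch only; "openrgb-image" is a placeholder for whatever image the template uses, and I've left out the two variables since they came from the other thread):

    # Run the container with both i2c buses passed through so OpenRGB can reach them
    docker run -d --name=openrgb \
      --device /dev/i2c-0 \
      --device /dev/i2c-1 \
      openrgb-image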

     

    I feel like I'm missing something very basic here... If it's the kernel patch listed on OpenRGB's page, I am unsure how to do that.

     

    Specs of what I am trying to control:

    AMD 3960X, MSI TRX40 Creator onboard RGB headers

    Trident Z Neo RAM

    1x Thermaltake fan controller

    2x Corsair RGB controllers.

     

    Thanks a lot for your help.

  3. 8 hours ago, JorgeB said:

    Possibly existing SSDs were formatted with the old partition layout, post the output of:

     

    fdisk -l /dev/sdX

     

    for both array devices.

    Dev 1:

    root@Hades:~# fdisk -l /dev/sdb
    Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk model: Samsung SSD 860 
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: B1F04D5C-FA89-449C-B031-A699222B1249

     

    Dev 2: 

    root@Hades:~# fdisk -l /dev/sdc
    Disk /dev/sdc: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk model: Samsung SSD 870 
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 6815C5DE-12DD-438E-B62C-AB54D1DA0C98

     

    Disk I'm attempting to add:

    root@Hades:~# fdisk -l /dev/sdd
    Disk /dev/sdd: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk model: Samsung SSD 870 
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x00000000

     

  4. I have an odd issue with one of my Unraid servers. Over the past year it has been running just fine with 2 Samsung 870 QVO SATA SSDs in array device slots #1 and #2. Recently I started getting sector errors on a drive, so I wanted to throw in a parity drive as part of my original plan. I bought the same 870 QVO drive from Amazon and I cannot set it as parity. I get "Disk in parity slot is not biggest."

     

    I read on some forums about these 2 fixes, but they didn't work:

    1. GPT vs. MBR: I tried this, and it didn't work.

    2. Preclearing the new drive: it finished successfully, but didn't help.

     

    I went and checked the sector count on all 3 drives and they are the same.
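
    For reference, comparing the sector counts directly looks roughly like this (a sketch; sdb/sdc are the existing array drives and sdd is the new one on my system, so adjust the device names):

    # Print the total 512-byte sector count reported for each drive
    blockdev --getsz /dev/sdb
    blockdev --getsz /dev/sdc
    blockdev --getsz /dev/sdd

    # fdisk shows the same sizes plus the partition table type (gpt vs dos)
    fdisk -l /dev/sdb /dev/sdc /dev/sdd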

     

    Not sure what to do.

     

    If it matters at all, here are the specs, because I did read about motherboards being weird sometimes. This system is used as a 2-in-1 gaming machine.

     

    Mobo: MSI Creator TRX40

    CPU: Threadripper 3970X

    RAM: 128GB

    GPUs: 2x ASUS Strix 3090

    Cache: 3x 2TB NVMe

    Array: 2x 4TB 870 EVO (trying to add a 3rd)

     

    Thanks a lot for the help! I have attached the diagnostics.

    hades-diagnostics-20220326-0211.zip

  5. I have tried everything in the FAQ now, including btrfs repair, which still couldn't even see the drives. When I load them both into a different machine with a Linux rescue USB, I can only see one drive, which is very odd. I can see both in the BIOS, but the OS will only ever see one.
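
    For reference, the kind of check I mean looks roughly like this from the rescue USB (just a sketch; device names will differ from system to system):

    # List every block device the kernel can see
    lsblk

    # Ask btrfs which member devices of the pool it can actually find
    btrfs filesystem show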

     

    I may end up having to wipe this and start over. What would people's recommendations be on how to set up my cache in the future? I have 3x 2TB NVMe drives. Should they be in btrfs RAID 5, all independent btrfs, or XFS?

     

    Thanks for the help

  6. Hi all,

     

    Last night something happened and I have no clue what. I was watching Plex (Docker) until about midnight and went to bed; at 8am someone tried watching something on Plex and it was missing. I went to check and I couldn't access the web UI, so I checked a few other things and just decided to do a reboot. Once it came back up, my Dockers, libvirt.img, and most everything else were gone. I noticed that both cache drives were there and in the correct order, but they show as "Unmountable: No File System". All of my VMs are on these drives.

     

    The cache drives are 2x 2TB NVMe in RAID 1.

    I'm thinking they could possibly have become too full overnight because of Deluge?
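
    If it helps narrow things down, checking for a full pool would look roughly like this (a sketch; /mnt/cache is Unraid's usual cache mount point, and it assumes the pool can at least be mounted):

    # Show allocated vs. free space per profile on the cache pool
    btrfs filesystem usage /mnt/cache

    # If it won't mount at all, a read-only consistency check on one member
    # (the device name here is a guess, adjust to the actual NVMe partition)
    btrfs check --readonly /dev/nvme0n1p1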

     

    I have tried this to no avail. If anyone could help, it would be much appreciated.

     

    Here is a screenshot of my disks in maintenance mode; as you can see, both drives still show up without issue (the unassigned 2TB NVMe device was meant to be added to the cache pool to make it RAID 5, I just hadn't done it yet). Trust me that it hasn't been mixed up with a different 2TB drive, as I have pictures of which drives should be in which slots.

    [screenshot of the disks in maintenance mode]

    P.S. If I need to post a file or anything, please just tell me how I can do that. I'm not a super Linux-savvy person, just attempting to be.
