belorion

Posts posted by belorion

  1. Answering my own question.

     

    No ... I didn't get WOL working.

    I *cheated* ... I bought a SwitchBot ... configured the machine to wake up on any keyboard press ... and configured the SwitchBot to hit the ANY key ;-)

    Now I can wake it with a remote app on my phone ... or by hitting an API ... which is good enough. The Simpsons were right again ;-)
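
    For anyone curious, the "API" route is just the SwitchBot open API - roughly a call like the one below (a sketch from memory only: check the current SwitchBot API docs for the exact endpoint and auth scheme, and substitute your own token and device ID).

    # ask the SwitchBot Bot to press the ANY key (v1.0-style call; token and device ID are placeholders)
    curl -X POST "https://api.switch-bot.com/v1.0/devices/YOUR_DEVICE_ID/commands" \
      -H "Authorization: YOUR_OPEN_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"command": "press", "parameter": "default", "commandType": "command"}'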
     


  2. Long story short: my old motherboard died, so I bought a new MB, CPU, RAM and power supply (kept the Norco 4224), and it all works AOK with the exception of WOL on my MSI PRO B550M-VC WiFi ProSeries motherboard (power is expensive, and I'm reducing my environmental footprint where I can).

     

    I have watched SpaceInvader One's "Wake up your Unraid - a complete sleep/wake guide".

    I have the Dynamix S3 Sleep plugin. The machine does go to sleep - it just doesn't wake on the magic packet. I enabled diagnostic logging and sleep does trigger based upon the criteria I provided (I have disabled it for now so it doesn't sleep, as it's in a remote location).

    I have disabled ErP in the BIOS.

    I have enabled PCI-E wakeup.

    I have added ethtool -s eth0 wol g (initially added to the go script, then moved to a User Scripts plugin script - see the sketch below).

    I have confirmed it is set using ethtool eth0 following a reboot.

    eth0 is my only NIC.

    I also enabled wake-by-keyboard in the BIOS - and that wakes it up nicely.
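
    For reference, the User Scripts script is only a couple of lines - something like this (a sketch of my setup only; eth0 and the "g" wake-on-magic-packet flag are specific to my box):

    #!/bin/bash
    # re-enable wake-on-magic-packet after each boot, since the driver can come up with WOL disabled
    ethtool -s eth0 wol g
    # confirm it stuck - should report "Wake-on: g"
    ethtool eth0 | grep Wake-on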

     

    I have attempted to WOL it using the same process as the old motherboard (just changing the MAC address), and also using the WakeOnLan tool in Linux - with no power-on.
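
    The sending side is just a standard magic-packet tool pointed at the server's MAC, along these lines (a sketch; assumes one of the usual packages is installed on the sending box, and the MAC is a placeholder):

    # using the wakeonlan package
    wakeonlan AA:BB:CC:DD:EE:FF
    # or etherwake, specifying the sending interface
    etherwake -i eth0 AA:BB:CC:DD:EE:FF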

     

    It *seems* to be using a Realtek network card (going by the driver in the diagnostics dump) - R8169.

     

    Does anyone know if this is something that just isn't possible with the current Unraid and network driver stack on this motherboard and NIC? Or perhaps I'm just missing something?

     

    I've attached my server diagnostics

     

    thanks for your time

     

    belorion 🙂

     

     

     

     

     

  3. As the bot is asking to close this ... it isn't done yet ... I've managed to get the system working with a reduced number of disks ... and the 22TB drives on the onboard motherboard SATA controllers. It is extremely slow to copy the content back at 25-35MB/s, hence the long delay in posts.

     

    I am now attempting to *gently* add the drives back in one at a time, as it's a 2-day hit for the parity sync / calc each time ... I say gently because I'm using reconstructed write to simulate a large load on the system (power draw). I've also placed one of the drives back in the machine (not included in the array) and am copying its content back over the network (trying to stress it), and it's giving me link errors ... but it keeps running ... just slowly

     

    Jun 25 18:38:24 ctu kernel: ata3: EH complete
    Jun 25 18:44:50 ctu kernel: ata3.00: exception Emask 0x10 SAct 0x40000000 SErr 0x4890000 action 0xe frozen
    Jun 25 18:44:50 ctu kernel: ata3.00: irq_stat 0x08400040, interface fatal error, connection status changed
    Jun 25 18:44:50 ctu kernel: ata3: SError: { PHYRdyChg 10B8B LinkSeq DevExch }
    Jun 25 18:44:50 ctu kernel: ata3.00: failed command: READ FPDMA QUEUED
    Jun 25 18:44:50 ctu kernel: ata3.00: cmd 60/00:f0:00:fe:d7/04:00:1d:03:00/40 tag 30 ncq dma 524288 in
    Jun 25 18:44:50 ctu kernel:         res 40/00:00:00:fe:d7/00:00:1d:03:00/40 Emask 0x10 (ATA bus error)
    Jun 25 18:44:50 ctu kernel: ata3.00: status: { DRDY }
    Jun 25 18:44:50 ctu kernel: ata3: hard resetting link
    Jun 25 18:44:52 ctu kernel: ata3: SATA link down (SStatus 0 SControl 300)
    Jun 25 18:44:52 ctu kernel: ata3: hard resetting link
    Jun 25 18:44:57 ctu kernel: ata3: link is slow to respond, please be patient (ready=0)
    Jun 25 18:45:02 ctu kernel: ata3: COMRESET failed (errno=-16)
    Jun 25 18:45:02 ctu kernel: ata3: hard resetting link
    Jun 25 18:45:06 ctu kernel: ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Jun 25 18:45:06 ctu kernel: ata3.00: configured for UDMA/133
    Jun 25 18:45:06 ctu kernel: ata3: EH complete
    Jun 25 20:42:57 ctu kernel: ata3.00: exception Emask 0x10 SAct 0x3000 SErr 0x4890000 action 0xe frozen
    Jun 25 20:42:57 ctu kernel: ata3.00: irq_stat 0x08400040, interface fatal error, connection status changed
    Jun 25 20:42:57 ctu kernel: ata3: SError: { PHYRdyChg 10B8B LinkSeq DevExch }
    Jun 25 20:42:57 ctu kernel: ata3.00: failed command: READ FPDMA QUEUED
    Jun 25 20:42:57 ctu kernel: ata3.00: cmd 60/00:60:e8:9e:64/04:00:2a:03:00/40 tag 12 ncq dma 524288 in
    Jun 25 20:42:57 ctu kernel:         res 40/00:00:e8:a2:64/00:00:2a:03:00/40 Emask 0x10 (ATA bus error)
    Jun 25 20:42:57 ctu kernel: ata3.00: status: { DRDY }
    Jun 25 20:42:57 ctu kernel: ata3.00: failed command: READ FPDMA QUEUED
    Jun 25 20:42:57 ctu kernel: ata3.00: cmd 60/60:68:e8:a2:64/01:00:2a:03:00/40 tag 13 ncq dma 180224 in
    Jun 25 20:42:57 ctu kernel:         res 40/00:00:e8:a2:64/00:00:2a:03:00/40 Emask 0x10 (ATA bus error)
    Jun 25 20:42:57 ctu kernel: ata3.00: status: { DRDY }
    Jun 25 20:42:57 ctu kernel: ata3: hard resetting link
    Jun 25 20:42:59 ctu kernel: ata3: SATA link down (SStatus 0 SControl 300)
    Jun 25 20:42:59 ctu kernel: ata3: hard resetting link
    Jun 25 20:43:04 ctu kernel: ata3: link is slow to respond, please be patient (ready=0)
    Jun 25 20:43:09 ctu kernel: ata3: COMRESET failed (errno=-16)
    Jun 25 20:43:09 ctu kernel: ata3: hard resetting link
    Jun 25 20:43:13 ctu kernel: ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Jun 25 20:43:13 ctu kernel: ata3.00: configured for UDMA/133
    Jun 25 20:43:13 ctu kernel: ata3: EH complete
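
    A quick way to keep an eye on these resets while the copy runs is just to watch the syslog for ata events (a trivial sketch; adjust the pattern to taste):

    tail -f /var/log/syslog | grep --line-buffered 'ata[0-9]'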

     

  4. Many different combinations have been attempted.

    I determined that 1 x 22 TB drive was on the onboard SATA controllers and 1 x 22 TB drive was on the LSI controllers. So I moved ALL the drives off the onboard controllers and onto LSI (I only have 16 drives in 24 bays total). That failed in the same way.

     

    I swapped the bays of the two 22TB drives around - ie the good one with the bad one. The bad one still dropped off even when it was in the "good" bay.

     

    I tried taking all the drives off the internal SATA controller and reverting to the 6TB drive that I was previously attempting to upgrade to 22 TB - that still failed.

     

    Putting ONLY the good 22 TB drive (parity) on the onboard SATA controller worked - with the 6 TB in its original (LSI) bay. That took almost 2 days to recompute parity (as all the above attempts rendered parity mangled 😐). The 6 TB did have 98 read errors whilst reconstructing parity (and failed the SMART test due to the number of online hours - 1592 days - which was one of the reasons I was doing this, replacing older drives with bigger ones). This is my "other" unRAID (made from the leftovers of the main unRAID, ie reusing older drives).

     

    I am now trying to replace the 6TB with the 2nd 22 TB WD on the onboard SATA controller - it has been going for 1.5 hours and is projected to take 2 more days.

     

    I haven't upgraded the BIOS as of yet (my USB sticks weren't recognised, so I ordered some more from Amazon). I will attempt that after the 22 TB is fully accepted into the array.

     

     

    SMART ERROR MESSAGE

    ======================

    Powered_Up_Time is measured from power on, and printed as
    DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
    SS=sec, and sss=millisec. It "wraps" after 49.710 days.

    Error 2 [1] occurred at disk power-on lifetime: 38221 hours (1592 days + 13 hours)
      When the command that caused the error occurred, the device was active or idle.
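
    That excerpt comes from the drive's SMART error log; pulling it directly looks something like this (a sketch - substitute the actual device node):

    # full SMART report, including the error log and power-on hours
    smartctl -x /dev/sdX
    # or just the headline attribute
    smartctl -A /dev/sdX | grep -i Power_On_Hours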

  5. The firmware on all 3 SAS adapters has been updated, but no change ... it's still pausing when attempting to rebuild the array ... I will also attempt to update the firmware tomorrow from a boot disk / USB. I've attached the diagnostic logs.

     

    ./sas2flash -listall
    LSI Corporation SAS2 Flash Utility
    Version 20.00.00.00 (2014.09.18)
    Copyright (c) 2008-2014 LSI Corporation. All rights reserved
    
    	Adapter Selected is a LSI SAS: SAS2008(B2)
    
    Num   Ctlr            FW Ver        NVDATA        x86-BIOS         PCI Addr
    ----------------------------------------------------------------------------
    
    0  SAS2008(B2)     20.00.07.00    14.01.00.09    07.11.00.00     00:01:00:00
    1  SAS2008(B2)     20.00.07.00    14.01.00.09    07.11.00.00     00:02:00:00
    2  SAS2008(B2)     20.00.07.00    14.01.00.09    07.11.00.00     00:03:00:00
    
    	Finished Processing Commands Successfully.
    	Exiting SAS2Flash.

     

    ctu-diagnostics-20230610-2001.zip

  6. Thanks for that 🙂
    It's not easy to locate the required files on the Broadcom site ... but I think I have them now.

    How do I know if I need the IR or IT version?

    Using the command ./sas2flash -list -c 0
    yielded this:

    Firmware Product ID            : 0x2713 (IR)

     

    So they are all IR

     

    And should I also be updating the BIOS at the same time, as that is ancient too? Do the firmware and BIOS need to match?

    I have 3 cards - identically ancient versions

    ./sas2flash -listall
    LSI Corporation SAS2 Flash Utility
    Version 20.00.00.00 (2014.09.18)
    Copyright (c) 2008-2014 LSI Corporation. All rights reserved

        Adapter Selected is a LSI SAS: SAS2008(B2)

    Num   Ctlr            FW Ver        NVDATA        x86-BIOS         PCI Addr
    ----------------------------------------------------------------------------

    0  SAS2008(B2)     07.00.00.00    07.00.00.03    07.11.00.00     00:01:00:00
    1  SAS2008(B2)     07.00.00.00    07.00.00.03    07.11.00.00     00:02:00:00
    2  SAS2008(B2)     07.00.00.00    07.00.00.03    07.11.00.00     00:03:00:00

     

     

    Firmware and NVDATA appear to have been updated ... the BIOS flash SAID it was successful ... but the listing shows that it did NOT update 😐

     

    ./sas2flash -o -f 2118ir.bin -b x64sas2.rom
    LSI Corporation SAS2 Flash Utility
    Version 20.00.00.00 (2014.09.18)
    Copyright (c) 2008-2014 LSI Corporation. All rights reserved
    
    	Advanced Mode Set
    
    	Adapter Selected is a LSI SAS: SAS2008(B2)
    
    	Executing Operation: Flash Firmware Image
    
    		Firmware Image has a Valid Checksum.
    		Firmware Version 20.00.07.00
    		Firmware Image compatible with Controller.
    
    		Valid NVDATA Image found.
    		NVDATA Version 14.01.00.00
    		Checking for a compatible NVData image...
    
    		NVDATA Device ID and Chip Revision match verified.
    		NVDATA Versions Compatible.
    		Valid Initialization Image verified.
    		Valid BootLoader Image verified.
    
    		Beginning Firmware Download...
    		Firmware Download Successful.
    
    		Verifying Download...
    
    		Firmware Flash Successful.
    
    		Resetting Adapter...
    		Adapter Successfully Reset.
    
    	Executing Operation: Flash BIOS Image
    
    		Validating BIOS Image...
    
    		BIOS Header Signature is Valid
    
    		BIOS Image has a Valid Checksum.
    
    		BIOS PCI Structure Signature Valid.
    
    		BIOS Image Compatible with the SAS Controller.
    
    		Attempting to Flash BIOS Image...
    
    		Verifying Download...
    
    		Flash BIOS Image Successful.
    
    		Updated BIOS Version in BIOS Page 3.
    
    	Finished Processing Commands Successfully.
    	Exiting SAS2Flash.
    root@ctu:/lsi# ./sas2flash -listall
    LSI Corporation SAS2 Flash Utility
    Version 20.00.00.00 (2014.09.18)
    Copyright (c) 2008-2014 LSI Corporation. All rights reserved
    
    	Adapter Selected is a LSI SAS: SAS2008(B2)
    
    Num   Ctlr            FW Ver        NVDATA        x86-BIOS         PCI Addr
    ----------------------------------------------------------------------------
    
    0  SAS2008(B2)     20.00.07.00    14.01.00.09    07.11.00.00     00:01:00:00
    1  SAS2008(B2)     07.00.00.00    07.00.00.03    07.11.00.00     00:02:00:00
    2  SAS2008(B2)     07.00.00.00    07.00.00.03    07.11.00.00     00:03:00:00
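
    I'm guessing sas2flash only touches the currently selected adapter, so I'll need to target the other two explicitly - something like the following (a sketch; -c takes the controller number from -listall):

    # flash firmware + BIOS on controllers 1 and 2 as well, then re-check
    ./sas2flash -o -c 1 -f 2118ir.bin -b x64sas2.rom
    ./sas2flash -o -c 2 -f 2118ir.bin -b x64sas2.rom
    ./sas2flash -listall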

     

  7. Unraid 6.11

    Upgrading from a WD 6TB to a 22 TB drive.
    Stopped the host.

    Replaced the 6TB WD drive with the 22 TB drive in the same physical drive bay.
    Powered on. Selected the new drive and let it start the rebuild.
    Shortly after, it said there were READ errors, it couldn't find the drive, and the drive was then listed under unassigned devices.

    Captured the diagnostics (attached).
    Powered off.

    Moved the 22TB drive to a new bay.

    Now Unraid has a red cross next to the 22 TB drive and says that it is emulated (the 6TB drive contents - 5.78TB or so).

    • How do I get my array to reconstruct (rebuild) onto the 22TB drive?
    • I couldn't find any other option, so it's currently doing a Read-Check?
    • Is it possible to reconstruct at all?
    • Do I need to build a new array (New Config with the 22 TB drive already added), wait the 1.5 days for that to construct parity ... and then copy the content back from the 6TB drive at about 30MB/s 😐 vs 150-200MB/s for a rebuild?
       

    Hoping that someone will have something that will save me days of manual copying?

    Thanks for your attention :)

    ctu-diagnostics-20230609-1434.zip

  8. unRAID 6.9.1

    1 parity drive

    I had 2 distinct drives fail over a period of 2 weeks. I rebuilt the first one successfully. And then the second drive failed (unreadable / unmountable) - it's possible that this drive was intermittently failing whilst the drives were being rebuilt.

    Since then, all of the shares are still defined within Unraid, but when accessed by anything they show 0 files. The Unraid GUI shows the correct total array size and the correct sizes for the individual shares.

     

    I'm thinking that there was a point at which there were actually 2 failed drives, which meant the array was incomplete - and that's why the shares are now showing as empty?

     

    I made all of the individual drive shares available and they appear to have the correct sizing, except for one of the repaired drives (disk 4), which is totally empty 😐

    Looking for options.

    It's mostly a media archive so the content isn't CRITICAL.

    I have the old "failing" drives, as well as another unRAID server which has SOME of the content on it.

     

    Options:
    1. Delete the array config and start again - presuming that I can use all the existing drives "as-is" and just recreate the shares (as the folders are still present - except on disk 4), and then attempt to copy the content back from the external (failed) drives

    2. Copy the content to the disk 4 share and see if that will get the rest of the shares to become available again?

    3. Something else to try to get the shares available again (open to suggestions)

     

    ctu-diagnostics-20230322-1824.zip

  9. On 11/16/2021 at 1:13 AM, JonathanM said:

    Are you clear on how to follow trurl's instructions? That is by far the fastest way to get it done, assuming you do exactly as he said and not interpret his instructions to something else because you think it's better for some reason.

     

    I'll restate step by step.

    1. stop array

    2. unassign parity drive

    3. start array (may not be necessary, can't remember)

    4. stop array

    5. assign former parity drive as data drive in new slot

    6. start array

    7. format newly assigned (or copied) drive as XFS, make sure it's the only drive showing as unmountable before you hit format, and be sure it says it's using XFS

    8. copy (NOT MOVE) all the data from the largest remaining ReiserFS drive to the newly formatted XFS drive

    9. verify the copy completed successfully, full compare of files if you wish

    10. stop array

    11. select drive that was the source of the last copy and change it to XFS

    12. are there any ReiserFS drives left? If yes, go to step 6. if no, continue

    13. assuming everything went well, you should be left with 12 data drives with XFS content, and the smallest drive should be fully copied and ready to be removed

    14. Power off and physically swap the last copy source drive with a new 18TB drive, be sure to note the serial number

    15. go to tools and set a new config, preserve all

    16. assign newly inserted 18TB drive as parity, rearrange data drives however you want, just don't put a data drive in a parity slot by mistake. BE VERY CAREFUL ABOUT THAT!!!!

    17. start array and build parity

    18. do a correcting parity check to be sure everything is happy in the new config. zero errors is the only acceptable answer.

     

    The copy and verify steps are up to you how to accomplish them. I personally use rsync at the local console. If you insist on using a move instead of a copy action, 1. there is no way to verify the data moved properly 2. it will take roughly twice as long, possibly 3x or longer depending on the state of your ReiserFS file system.

    It's all going along great ... I'm actually processing an entire drive a day - when I was originally expecting it to take a week each : )

     

    So ... I'm thinking ... given I have no parity drive right now, I'm running without parity until the process is almost complete (ie until I put in the 18TB drive) ...

     

    Can I just put the new drive in and assign it to be a parity drive WITHOUT creating a new configuration?? Wouldn't that just construct parity with the array layout as it was at that instant?

     

    My (limited?) understanding is that creating a new configuration will have the effect of scrubbing all the config (configured shares, users etc) but maintaining the data on the drives.

     

    Or am I missing something?

     

  10. On 11/15/2021 at 10:44 AM, trurl said:

     

     

    Are you sure you have btrfs? Unraid 5 only had ReiserFS, and btrfs supports much larger volumes than 16TB

     

     

    The title of this thread says you don't care about parity, so why not use the parity disk as an XFS data disk, copy all files from the disk with the most contents, reformat that disk as XFS, copy all files from the remaining disk with the most contents, reformat that disk as XFS, ...

     

    Thanks very much for this "thinking outside the box" solution ... I hadn't even considered "temporarily" repurposing the parity drive like this ... but it totally makes sense ... and it will speed up the process massively : )

  11. 12 hours ago, JonathanM said:

    Are you clear on how to follow trurl's instructions? That is by far the fastest way to get it done, assuming you do exactly as he said and not interpret his instructions to something else because you think it's better for some reason.

     

    I'll restate step by step.

    1. stop array

    2. unassign parity drive

    3. start array (may not be necessary, can't remember)

    4. stop array

    5. assign former parity drive as data drive in new slot

    6. start array

    7. format newly assigned (or copied) drive as XFS, make sure it's the only drive showing as unmountable before you hit format, and be sure it says it's using XFS

    8. copy (NOT MOVE) all the data from the largest remaining ReiserFS drive to the newly formatted XFS drive

    9. verify the copy completed successfully, full compare of files if you wish

    10. stop array

    11. select drive that was the source of the last copy and change it to XFS

    12. are there any ReiserFS drives left? If yes, go to step 6. if no, continue

    13. assuming everything went well, you should be left with 12 data drives with XFS content, and the smallest drive should be fully copied and ready to be removed

    14. Power off and physically swap the last copy source drive with a new 18TB drive, be sure to note the serial number

    15. go to tools and set a new config, preserve all

    16. assign newly inserted 18TB drive as parity, rearrange data drives however you want, just don't put a data drive in a parity slot by mistake. BE VERY CAREFUL ABOUT THAT!!!!

    17. start array and build parity

    18. do a correcting parity check to be sure everything is happy in the new config. zero errors is the only acceptable answer.

     

    The copy and verify steps are up to you how to accomplish them. I personally use rsync at the local console. If you insist on using a move instead of a copy action, 1. there is no way to verify the data moved properly 2. it will take roughly twice as long, possibly 3x or longer depending on the state of your ReiserFS file system.

    Thanks for the detailed instructions 🙂 I was going to ask a little more after I had finished "syncing" the content between the hosts "one more time" (just to be sure - the only reason I can be carefree about parity).

     

    I think the above will give a massive performance increase - ie I was looking at something that would take weeks ... and now I'm thinking it will be days (due to being able to copy the files at full hardware speed rather than being network constrained).
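
    For the copy/verify steps, something like this at the local console looks like it will do the job (a sketch only - the disk numbers are placeholders, and /mnt/diskN are the usual per-disk mounts):

    # copy disk3 -> disk5, preserving permissions/attributes, with progress
    rsync -avX --progress /mnt/disk3/ /mnt/disk5/
    # verify: checksum comparison, dry-run, prints anything that differs
    rsync -rcn --out-format="%n" /mnt/disk3/ /mnt/disk5/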

     

     

  12. My current state is:

    • 18 TB parity drive
    • 12 data drives, a mix of 10 & 12 TB (all formatted as ReiserFS)
    • total array capacity about 122 TB
    • available capacity approx 2 TB
    • all physical slots/ports used on the motherboard and drive controllers
    • parity sync / verification takes approx 40 hours
    • MB is a 4-core Xeon about 8 years old, 16 GB RAM, no Docker containers, just a simple NAS

    The use of ReiserFS comes from migrating from an older unRAID setup (5.x).

     

    Existing constraints:

    • can't do an in-place upgrade of the drives because of ReiserFS (max volume size limit)
    • replacing an existing ReiserFS drive with a larger drive doesn't work, as the larger drive can't be formatted larger than the max ReiserFS size (ie the rebuild from parity reconstructs the drive contents as ReiserFS)
    • can't convert a drive in place as I have insufficient available array storage to move all of the content off any one drive
    • no additional bays/ports etc to add a new drive

     

    I want:

    • to be able to expand the total array size
    • (I can't just replace the existing ReiserFS drives with bigger drives, as the max volume size of ReiserFS is ~16TB - I tried that previously, and found a link to a maximum filesystem size indicating as much)
    • to replace all the 10/12 TB drives with 18TB drives or larger, which means XFS-formatted drives (replacing the drives one at a time due to cost and effort)

     

    I do have an additional unRAID host that I can copy off the content of one drive to ... so that it can be put back later

     

    I want to incur the minimum number of 2-day parity hits for any activities, eg:

    - wiping a drive

    - adding drive

    - reconstructing array

     

    I am not concerned about the parity integrity as I have the content available elsewhere - which is why I thought the New Config may be the fastest option to keep most of my data (minus one drive I can remove and put in a new 18TB drive - and then copy back from the other unRAID host)

     

    does that help? @JonathanM

     

  13. I have an unRAID 6.9.1 instance that is full of ReiserFS drives (as I migrated from unRAID 5) and I have an 18 TB parity drive.

    • I learned the "hard way" that ReiserFS doesn't support data drives that large (18 TB or larger)
    • I need to convert the existing drives to XFS to get more array capacity
    • the array is close to capacity (FULL) - perhaps 2-3 TB free on > 100 TB
    • I have no spare drive slots (all bays and ports are full)
    • I have copied the content from the "DISK" share (legacy unRAID 4/5) to another host using FreeFileSync - so I'm OK that the content is safe (it's another unRAID host)

     

    Parity computation takes about 40 hrs at the present size - so I'm *keen* NOT to have to recompute it any more times than necessary (ie 2 days for every action makes for slow progress).

     

    My understanding is that if I "forget" the drive, it will result in parity being recomputed ... BEFORE I put in a new drive ... and it will do the preclear / parity computation again when I actually put in the new drive ... that'll be 4 days before I actually start copying the content (which will take another 2 days).

     

    I have the content "safe" elsewhere, as I said ...

     

    I saw this link

    in which ssd indicated that I could create a new config from the "Tools" menu if I didn't care about having parity protection 100% of the time.

     

    I'm not certain how to use the New Config - as I'm *keen* not to lose all my content.

     

    ie if I have 12 data drives and 1 parity drive ... how do I do it?

     

    1).

    Can I just stop unRAID

    pull out the undesired drive

    put in the new unformatted 18TB drive

    start up unRAID (it will complain a drive is missing ... ignore the error as it's intentional)

    create a New Config and select "preserve current assignments", as well as adding the new drive?

    (I would also have to select the existing parity drive as the new parity drive)

    and then start my copying back ... which would then take 2 days

     

    2). Create a New Config, preserving the existing assignments but EXCLUDING the drive to be replaced, and specifying the parity as before

    recompute the parity (2 days)

    turn off

    remove the unwanted drive

    add the 18TB drive

    format, add to array, yadda yadda (2-day hit)

    then start copying files (which will take about 2 days)

     

    3). Something else?

     

    Thanks for "listening" : )

     

    (edited due to saying I was using BTRFS when it's actually ReiserFS 😐)

  14. I see the PEBKAC now ... I'm not used to the new UI ... there's an option displayed off the page (too many drives) which shows

    "unmountable disk present" ... and gives the option to FORMAT 🙂

     

    To close the loop ... formatting fixed the issue ... and all my user shares returned once the "poison-pill" drive was formatted 🙂
    Now to restore the content.

     

    Thanks for your assistance and patience @JorgeB

  15. On 3/16/2021 at 1:21 AM, JorgeB said:

    No, it remains valid.

     

    No, you just need to have it in another place since formatting will delete everything.

     

     

    Ok ... I tried that ... and I wasn't able to actually change the filesystem format - it was disabled.

    Then I saw another post that said it was just a matter of stopping the array ... changing the format ... and starting the array.

     

    And now it says my disk is unmountable ... and "not mounted" in the size and used columns of the main page.

     

    https://forums.unraid.net/topic/39296-how-to-reformat-existing-drive/

     

    Suggestions???

  16. 1 hour ago, JorgeB said:

    After you clear the largest disk you just need to click on it, change filesystem to xfs, format and restore the data.

     

    After I have copied all the content from the largest drive to somewhere appropriate ...

     

    I can then click on the 18 TB data drive 

    change filesystem to XFS 

    format the drive

     

    will that invalidate the parity and force it to be recomputed? 
     

    very keen to avoid that if possible as it takes a few hours short of 2 days to complete the parity process 😬

     

    For the sake of clarity ... do I need to remove all the existing content from the 18 TB drive ... say by going into the disk share and deleting everything?

     

    I could see that updating the parity to reflect no content for that disk

     

    then changing the format to xfs ... and then copying the data back

     

    Given it’s been pointed out already that the parity data effectively has knowledge of the format of the filesystem ... I’d just like to avoid triggering a parity recomp when I click format and change the file system ... even on an “empty” volume

  17. I have 12 SATA ports ... and 11 data disks and 1 parity disk ... so there is no option to add a new drive and copy the contents across.

    There is insufficient space upon the remaining 10 drives to move the content off the 18 TB drive.

     

    So the only options that I see are:

    • add another drive via USB - which I suspect isn’t viable (I have not found USB anything to be anywhere near reliable enough for a RAID)
    • copy off the content to another server
      • then wipe the 18TB drive and reformat it as XFS and copy back the content

    I have never tried to “remove and reformat” a drive in UNRAID. How do I go about reducing the capacity of my total array?

     

    thanks again for helping @JorgeB

  18. Some progress on getting myself a viable unRAID system.

     

    I tried the option 1 approach - as I indicated - just changing the version in the plg file.

     

    That stated that it worked properly without issues. But it failed to start up.

     

    I had the "Could not find kernel image: linux"

     

    Which I was able to fix by following the instructions here

    ie copying bzroot, bzimage and syslinux, and making the flash drive bootable.

    That gave a running instance so I ran the "new permissions" tool ...

     

    which took a long time and "seemed" to pass AOK ...

     

    But none of my user shares are present (only the actual "disk" ones).

    I tried to manually create one of the shares ... and I did so .. but it failed to work.

     

    I did notice down the bottom of the unRAID page that it stated

     

    "Array Started - Starting services"

     

    Starting ... vs Started ... in progress.

     

    Then I saw the logs were full of errors for the too-large ReiserFS drive:

     

    Mar 15 11:56:57 TowerToo emhttpd: error: get_filesystem_status, 6099: Permission denied (13): scandir Permission denied

     

     

    Open to suggestions as to how I can proceed from here?

     

    The obvious option that I can think of is to replace the ReiserFS drive with an XFS one ... but that takes time as I need to get a new drive (I have ordered one).
    Is it likely that will work? ie swapping out the ReiserFS 18 TB drive (with its 16TB of ReiserFS-limited content) ... and letting it rebuild from parity data onto an XFS-formatted drive?

     

    And given that the New Permissions run doesn't appear to have worked ... I suspect that means I need to run it again on the new 18TB drive should it successfully rebuild (the parity is from before the permissions change).

     

    thanks for listening :)

     

     

  19. On 3/10/2021 at 11:57 PM, JorgeB said:

    According to wiki max Reiser FS volume size is 16TiB, so likely why you're running into trouble, best to upgrade to Unraid V6 and convert to XFS.

    I have been attempting to upgrade my unRAID 5.0-rc8a as per the instructions here https://wiki.unraid.net/UnRAID_6/Upgrade_Instructions

    Which pointed me to here

     

     

    which led to the first challenge ...

     

    wget --no-check-certificate https://raw.githubusercontent.com/limetech/unRAIDServer/master/unRAIDServer.plg

     

    doesn't actually work due to the age of the wget command (doesn't support newer encryption standards)

     

    and hangs ... but as curl is available ... I was able to grab the file

     

    I had a look to see if there were any other wget incompatible URLs

     

    and it seems that it's not possible to download the versions of unRAID linked in the plugin - perhaps because they are all too old? The post was from 2018 and it's 2021

     

    All of these get 403s

     

    pluginURL "https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg"

    zip "https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.5.3-x86_64.zip"

    md5 "https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.5.3-x86_64.md5"

     

    Clutching at straws I tried with CURL but I got the same 403 not authorised as expected 😐

     

    curl --insecure http://slackware.cs.utah.edu/pub/slackware/slackware-13.1/slackware/a/infozip-6.0-i486-1.txz -o infozip-6.0-i486-1.txz

    curl --insecure https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.5.3-x86_64.zip -o unRAIDServer.zip

    curl --insecure https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.5.3-x86_64.md5 -o unRAIDServer.md5

     

    All files have content like this

     

    cat unRAIDServer.zip
    <?xml version="1.0" encoding="UTF-8"?>
    <Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>KTBC2NBJWMNYWCYB</RequestId><HostId>8ouBTGUTSAaYiIi30HqQ/3JaRj9oFrQnynQGVZfmFJxtr68CJ/sHPvL/q//rAZLKYEQy2/rtwlw=</HostId></Error>

     

     

    Any thoughts on how I can proceed from here?

    Can I just drop in the latest version 6.9.1 instead?

    https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg

     

     

     

     
