tillkrueger Posted January 11, 2018

So, since I am pushing my old unRAID system to the edge by trying to run a Windows 10 VM on it (I upgraded the CPU to a Q9550 and am trying to upgrade the RAM from 4GB to the maximum 8GB, if I can figure out how to update the BIOS and hopefully get it to see the other 4GB), the speed of the VM is mostly held back by residing on the cache hard disk of my array. So I ordered a 525GB Crucial SSD with the goal of swapping out the 3TB cache drive for the SSD. The cache drive now holds about 300GB in total, so the SSD is pretty big for that, even though only the Windows 10 VM really needs to reside on it, which is 80GB; the rest I could move to disk 1 of my array.

When I went to the physical location of the server today, I was so confused about how to properly add the SSD to unRAID (outside of the array), copy the most important files from the cache drive to the SSD, and then swap out the cache drive for the SSD, that I did all the wrong things (please try not to judge me for what I tried):

- I first stopped the array
- then I plugged the SSD into the connectors that used to go to drive 11 (I have reduced my array down to 10 drives in the past few weeks)
- then I tried to format the SSD via Unassigned Devices (but the "Format" button was grayed out)
- (this is where I went crazy) then I added a second slot to the cache, assigned the SSD to it, and started the array again (got an error about too many cache profiles)
- stopped the array, unassigned the SSD from the cache pool, deleted the second slot
- the SSD now shows as btrfs and having only 2GB available
- ssh'd in and tried to run parted on the SSD under /mnt in the hope of formatting it, but got these errors:

Warning: Unable to open /mnt/disks/Crucial_CT525MX300SSD1_174819E3876C read-write (Is a directory). /mnt/disks/Crucial_CT525MX300SSD1_174819E3876C has been opened read-only.
Error: The device /mnt/disks/Crucial_CT525MX300SSD1_174819E3876C is so small that it cannot possibly store a file system or partition table. Perhaps you selected the wrong device?

Needless to say, I have no idea what I am doing and need help formatting the drive, copying the most relevant files from the current cache drive to it, assigning it as cache instead of the current 3TB spinner, and then re-linking the vdisk of my Windows 10 VM to it (if that is even necessary, given that the directory structure on it would be the same as on the 3TB drive I use at the moment).

If my old father called me with something like this, I would ask him "you did *what*?!"...so yeah, I know that was a pretty weak attempt at implementing an SSD into my system and swapping it in for the current cache drive.
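The parted errors above are themselves a clue: /mnt/disks/Crucial_CT525MX300SSD1_174819E3876C is the Unassigned Devices mount directory, not a device, and parted can only operate on block device nodes. A minimal sketch of what the invocation would have needed to look like, assuming the SSD turns out to be /dev/sdl as established later in the thread:

parted /dev/sdl print    # inspect the partition table on the raw device

parted wants the whole-disk node (/dev/sdl), not a mount path, which is why it complained "Is a directory" and then decided the "device" was too small to hold a partition table.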
JorgeB Posted January 11, 2018

The easiest way to do the cache replacement (but not the fastest) would be to use this procedure: https://lime-technology.com/forums/topic/46802-faq-for-unraid-v6/?do=findComment&comment=511923

If your current cache is btrfs, the quickest way is doing an online replacement: https://lime-technology.com/forums/topic/46802-faq-for-unraid-v6/?do=findComment&comment=525075
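For context, a btrfs online replacement runs against the mounted pool while everything stays up. A sketch of the underlying commands, not necessarily the FAQ's exact steps, assuming the pool is mounted at /mnt/cache and the old device is devid 1:

btrfs replace start 1 /dev/sdX1 /mnt/cache    # live-copy devid 1 onto the new device
btrfs replace status /mnt/cache               # watch progress

Note that btrfs replace requires the target device to be at least as large as the one it replaces, so going from a 3TB spinner down to a 525GB SSD wouldn't qualify, which is one reason the mover-based procedure in the first link ends up being the better fit here.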
tillkrueger Posted January 11, 2018

great info/instructions, but since my SSD shows as essentially 100% full right now (even though there are no files on it when looking at it via FTP or Krusader), I will first need to find a way to wipe it and make it "like new" again...how do I do that?
JorgeB Posted January 11, 2018

"I will first need to find a way to wipe it and make it "like new" again...how do I do that?"

blkdiscard /dev/sdX

This will delete everything, making it like new.
tillkrueger Posted January 11, 2018

in which place do I have to be in order to execute this command? if I execute it from inside the SSD, I get:

blkdiscard /dev/sdX
blkdiscard: cannot open /dev/sdX: No such file or directory

sorry I'm so inept with this, jb, and thanks for your help!
JorgeB Posted January 11, 2018

You need to replace X with the correct SSD letter, e.g., /dev/sdb
tillkrueger Posted January 11, 2018

and what's the quickest way to determine which of the many sdX's in /dev is the SSD?
NewDisplayName Posted January 11, 2018

Under "Main". Mine looks like this:

Parity  HGST_HDN726040ALE614_K4KGBS0B - 4 TB (sdj)
Disk 1  TOSHIBA_DT01ACA200_X3NGGG6GS - 2 TB (sdc)  xfs  2 TB  509 GB used  1.49 TB free
Disk 2  WDC_WD30EFRX-68EUZN0_WD-WCC4N3YNARE3 - 3 TB (sdh)
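If the webGUI isn't handy, the same mapping can be pulled from the shell; a sketch using standard tools that ship with unRAID:

lsblk -o NAME,SIZE,MODEL,SERIAL    # block devices with model and serial
ls -l /dev/disk/by-id/             # persistent model_serial names symlinked to their sdX nodes

The Crucial SSD's model and serial (already visible in the /mnt/disks path earlier) make it easy to spot which sdX it was assigned.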
tillkrueger Posted January 11, 2018

thanks. ok, so it is "sdl". but then I get this:

root@unRAID:/# blkdiscard /dev/sdl
blkdiscard: /dev/sdl: BLKDISCARD ioctl failed: Remote I/O error
tillkrueger Posted January 11, 2018

do I need to unmount it before executing this command?
tillkrueger Posted January 11, 2018

unmounted it, re-executed the command, same error.
NewDisplayName Posted January 11, 2018

I don't think so. But johnnie might know more (or you could just try it). Worst thing that could happen is it all gets deleted.

edit: okay, then I also don't know, but maybe a restart can fix it?
JorgeB Posted January 11, 2018

Then the SSD is connected to a controller that doesn't support TRIM. You can use preclear on it, but for performance it would be best to move it to another controller, e.g., your onboard controller.
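Whether the current path supports discards can be checked without moving anything; a sketch with util-linux's lsblk, assuming the SSD is /dev/sdl:

lsblk -D /dev/sdl    # DISC-GRAN and DISC-MAX of 0B mean no TRIM through this controller

Non-zero discard values indicate TRIM is passed through; 0B values are typical behind controllers or bridges that don't forward the command, which would match the BLKDISCARD ioctl failure above.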
tillkrueger Posted January 11, 2018

amazing, the things you know, jb. I had hoped not to have to go back to the physical location the server is hosted at, but I guess it will be necessary. when you say "for best performance", how much additional performance are we talking about? 10%? 30%? 50%? 100%? since the difference between a hard drive and this SSD will likely be quite noticeable no matter what, how would I go about pre-clearing it remotely until I get physical access to it again to connect it to one of the on-board ports?
JorgeB Posted January 11, 2018

Difficult to say for sure; it depends on the SSD and the usage. For now just wiping the disk should be enough. Try this, in this order:

wipefs -a /dev/sdl1

then

wipefs -a /dev/sdl
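For what it's worth, wipefs -a erases filesystem and partition-table signatures (the btrfs superblock magic here) rather than the data blocks themselves, which is enough for unRAID to treat the device as blank. Run without options it is non-destructive; a sketch, assuming /dev/sdl is still the SSD:

wipefs /dev/sdl1    # no options: just list the signatures found, change nothing

Clearing the partition (sdl1) first and then the whole device (sdl) removes both the filesystem signature and the partition table.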
tillkrueger Posted January 11, 2018

~ sigh ~

root@unRAID:/# wipefs -a /dev/sdl1
wipefs: error: /dev/sdl1: probing initialization failed: Device or resource busy

I tried while mounted and unmounted. does the array need to be stopped? I mean, this SSD is outside of the array, so...
NewDisplayName Posted January 11, 2018

I just wonder what the problem is. Yesterday I added a new SSD to my cache. I let parity run, then I could tick "format" and it formatted it.
JorgeB Posted January 11, 2018

Are you sure you're using the correct device? It may have something to do with what you did before: the SSD may still be in use by the cache pool, and btrfs sometimes does some strange things. Maybe best to post your diags.
tillkrueger Posted January 11, 2018

when you describe it like that, nuhull, I wonder whether I messed up when first assigning the SSD to the cache pool (this is my first time using a cache drive). my (probably wrong) assumption was that cache drives are not really part of the array, in the sense that they are not striped/protected and work as individual entities, and that they are quicker in part because they aren't subject to the RAID5 calculation. so when you say "I let parity run", I must admit that I didn't even pay attention to whether any parity operations were starting or happening when I added the SSD, created a second cache slot in the webGUI, added the SSD to the second slot, and then removed it again...not sure exactly what/why I did there, but I have a feeling that I messed with the partition table of the SSD right there. I also didn't check whether/when the "Format" button was grayed out after I first started the array again.

jb, I was looking at the attached screen when determining that it is shown as "sdl"...am I not interpreting this correctly? one way or another, these are the most recent diags.
JorgeB Posted January 11, 2018

Yep, the SSD is still part of the cache pool:

Data, RAID1: Size:229.00GiB, Used:181.82GiB
   /dev/sde1   229.00GiB
   /dev/sdl1   229.00GiB

Best way forward would be to use the first link I gave you to replace the cache, i.e., move all data on the cache to the array, destroy the pool, recreate it using only the SSD, then move the data back.
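That pool membership can be confirmed from the console at any time; a sketch, assuming the pool is mounted at /mnt/cache:

btrfs filesystem show /mnt/cache    # lists every device that belongs to the pool
btrfs filesystem df /mnt/cache      # the Data/Metadata profile lines quoted above

The Data, RAID1 line also explains the earlier symptoms: the SSD was mirroring the old cache drive's allocation, which is presumably why it showed up as btrfs and essentially full despite appearing empty over FTP.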
JorgeB Posted January 11, 2018

P.S. blkdiscard would work even on a mounted disk, so it's maybe lucky the SSD is on a controller without TRIM support, or I'm not sure what would've happened to the cache pool. Since the pool is in an unsupported state, I'm also afraid of just doing a normal device removal, so I still think the above advice is the safest way to go. And don't reboot before it's done, or you might get an unmountable pool.
tillkrueger Posted January 11, 2018

yes, indeed. where you instruct "Check that there's enough free space on the array and invoke the mover by clicking 'Move Now' on the Main page"...I clicked the "Move Now" button in the Array Operation tab of the Main page, but it appears as if nothing is happening.
JorgeB Posted January 11, 2018

Check that the shares are all set to use cache "yes" and check the log for mover entries.
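Both checks can be done from the console as well; a sketch, assuming the stock syslog location (and note that the mover only logs per-file entries when mover logging is enabled under Settings):

grep -i mover /var/log/syslog    # look for mover activity
df -h /mnt/cache                 # watch used space on the pool drop as files move

Shares set to use cache "yes" are the ones whose files the mover migrates from the pool to the array; shares set to "only" or "no" are left alone, which would explain a mover run that appears to do nothing.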
JorgeB Posted January 11, 2018

Bedtime for me, I'll check back tomorrow if needed.