Posts posted by live4soccer7

  1. I am not sure. I thought about changing the global setting, but I also have 25 or so other disks that are part of the unraid array that I didn't want to lose data on. I would imagine the global setting would persist. I would like to run SMART tests on all 48 disks and be able to see the temps. The SMART tests matter right now because the unit is newly purchased and the seller said he would replace any disks that fail the test.
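    If it helps to see it scripted, here is a minimal sketch of running the tests from the console, assuming the disks enumerate as /dev/sd* and smartctl can reach them through the controller:

    ```bash
    #!/bin/bash
    # Start a short SMART self-test on every disk, then report
    # overall health and temperature. The /dev/sd* globbing is an
    # assumption; adjust if the shelf presents devices differently.
    for dev in /dev/sd? /dev/sd??; do
        [ -b "$dev" ] || continue
        smartctl -t short "$dev" > /dev/null
    done

    sleep 180   # short self-tests usually finish in a couple of minutes

    for dev in /dev/sd? /dev/sd??; do
        [ -b "$dev" ] || continue
        echo "== $dev =="
        smartctl -H -A "$dev" | grep -Ei 'overall|temperature'
    done
    ```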

  2. I have a scenario where I have to change from the "Default" settings for spin-up and spin-down when you click into a disk. Same thing for the SMART controller setting. I change the settings, and then once I go back in there they are set back to "Default". I have clicked Apply in the specific section the settings apply to. I have also tried clicking Done and just going back to the Main tab. If they do persist, it is only for a few minutes/hours and then they seem to revert back to "Default".

     

    Any ideas? 

     

    Edit: I'm usually trying to change these when the array has already been started.

  3. 7 minutes ago, trurl said:

     

    No matter how those top level folders or the files and folders in them got created, they are part of user shares.

     

    If you don't already have a user share named Media, and you create a top level folder named Media on a pool or array disk, it is automatically a user share named Media with default settings.

     

    If you already have a user share named Media, and you create a new top level folder named Media on a pool or array disk, it is part of that Media share and has its settings.

    That's sort of what I've grasped from all of this. haha. I was thinking share parameters/ownership were applied during a file transfer, and not strictly based on the share having been created and the file simply being transferred into that folder.
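    To make that concrete for myself, a minimal sketch of the behavior as I understand it, assuming a data disk mounted at /mnt/disk1 (paths hypothetical):

    ```bash
    # A top-level folder on any array disk or pool is automatically
    # exposed as a user share of the same name under /mnt/user.
    mkdir /mnt/disk1/Media             # new top-level folder on disk1
    ls /mnt/user/                      # a "Media" share now appears here

    touch /mnt/disk1/Media/movie.mkv   # write via the disk path...
    ls /mnt/user/Media/                # ...and it's visible via the share
    ```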

  4. This:

     

    If you mix user shares and disks when moving/copying files, you can actually lose data, because Linux doesn't realize that the source path and the destination path might actually be the same file, and so tries to overwrite what it is trying to read. This is often referred to as the User Share Copy Bug.

     

    I tried it out of curiosity, and I'm pretty sure I lost that data. It wasn't a ton though, so not a big deal. I've been searching the whole server for it.
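    For anyone finding this later, the dangerous pattern looks something like this (hypothetical paths, purely to illustrate the bug described above):

    ```bash
    # DANGEROUS: the source is a disk path and the destination is a user
    # share path that can resolve to the *same* underlying file, so the
    # copy may truncate the very file it is still reading.
    cp /mnt/disk1/Media/movie.mkv /mnt/user/Media/movie.mkv

    # SAFE: keep source and destination on the same kind of path.
    cp /mnt/disk1/Media/movie.mkv /mnt/disk2/Media/movie.mkv
    ```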

  5. I am buying this, but am having the seller check which backplane is installed. He does not know much about it and can't seem to locate the backplane's part number for some reason. I think he simply has no idea what/where to look.

     

    I pretty much just want to be sure it is one of the SAS2 backplanes/expanders and not the BPN-SAS-846A backplane. I'm not terribly familiar with them and have never had one in person, so I can't tell from the pics.

     

    This is the cable that is in the picture: https://store.supermicro.com/supermicro-internal-to-external-minisas-2-ports-low-profile-cascading-85cm-cable-cbl-0352l-lp.html

     

     

    [Attached images: sm_3.jpg, sm_2.jpg, sm_1.jpg]

  6. Well.... I learned the hard way that if you don't run the copy in the background and your SSH connection drops, the transfer fails. That lost about 16 hours. haha.

     

    Is there a way to check the progress of the background process? I started a new one. Also, what is the behavior of a "background" copy when a file already exists in the destination? I see that it doesn't ask you or mention it. I had too many files to go through manually, so I just copied all the original files to the destination again.
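    In case it helps someone else, one way to make a transfer survive a dropped SSH session is a detachable terminal; a sketch using screen and rsync (tool choice and paths are assumptions, not the built-in background copy):

    ```bash
    # Run the transfer inside screen so a lost SSH session doesn't kill it.
    screen -S bigcopy
    rsync -a --ignore-existing --info=progress2 /mnt/disk1/src/ /mnt/pool1/dst/
    # Detach with Ctrl-a d; reattach later to check progress:
    screen -r bigcopy
    ```

    With rsync, --ignore-existing skips files already present at the destination, which sidesteps the duplicate question; I still don't know what the built-in background copy does in that case.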

  7. 10 minutes ago, JorgeB said:

    User shares are not (yet) prepared to work with multiple pools, but since this is a write-once situation it's easy to accomplish: create the plots share with Use cache=Only and select the first pool; once that pool is full, go to the share settings and select the next one, and so on. Chia just needs to be mapped to /user/plots and it will access all the pools that have that share.

     

    Easy enough, but once you change which pool is used as the "cache" in the share settings, is there then only one pool that has that share, unless the past selections persist in the DB???
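    If I'm understanding the quoted explanation right, the share view keeps merging every pool that holds the folder, regardless of which one is currently selected as cache; a quick sanity check (pool names hypothetical):

    ```bash
    ls /mnt/pool1/plots | wc -l   # plots written while pool1 was selected
    ls /mnt/pool2/plots | wc -l   # plots written after switching to pool2
    ls /mnt/user/plots  | wc -l   # the user share shows the sum of both
    ```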

  8. 41 minutes ago, JorgeB said:

    If this is for farming chia that's what I would recommend, do multiple raid0 pools with 4 or 5 disks max, they can still be all on the same share and if you lose a disk you only lose a small pool.

     

    Great idea! I just did raid5, but it is giving me some wonky numbers for free/available space. I'm not sure if it is a bug within unraid or some other issue. I have two identical raid5 setups and one says 92TB available and the other 96TB (see the sketch at the end of this post for checking what btrfs itself reports). Regardless, I'm going with what you suggested on the raid0. I just did half the disks as 4 raid0 pools (6 disks per pool to keep the numbers even).

     

    When I create a new share, how can I add the "pools" to the share? I just see use cache and then I can select one pool/cache, but not multiple.
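    For the free/available discrepancy, comparing what btrfs itself reports for each pool might narrow down whether it's an unraid display issue; a sketch, assuming the pools mount under /mnt (names hypothetical):

    ```bash
    # Raw vs. estimated usable space; note that free-space estimates
    # for btrfs raid5 are known to be approximate.
    btrfs filesystem usage /mnt/pool1
    btrfs filesystem usage /mnt/pool2

    # Per-device allocation, useful right after adding disks:
    btrfs filesystem show /mnt/pool1
    ```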

  9. 56 minutes ago, JorgeB said:

    Also I would recommend using smaller pools, especially with large-capacity disks; they are much easier to deal with, otherwise any operation needed can take days or even weeks. They can still all be using the same share.

    They are 4TB disks. The idea is to maximize space. For each raid5 pool I create, I would lose 4TB worth of space.

     

    So, when I go to the pool settings and click to change to raid5, it says *see help. I don't see any such help, and there is no additional information in the wiki/manual.

     

    https://wiki.unraid.net/Manual/Storage_Management#Change_Pool_RAID_Levels

     

    In the manual it says: BTRFS supports raid0, raid1, raid10, raid5, and raid6 (but see the section below about raid5/6)

    I cannot find this section.
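    For reference, the command-line equivalent of that RAID-level change appears to be a convert balance; a minimal sketch, assuming the pool is mounted at /mnt/pool1:

    ```bash
    # Convert the data profile to raid5. Metadata is commonly kept at
    # raid1, which is the usual advice given the btrfs raid5/6 caveats.
    btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/pool1
    ```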

  10. A few here have been helping me with disk configuration, etc... with my new netapp ds4486. I settled on using the pool feature for the disks and splitting the 48 disks into 2 pools. I had already created a pool with about 12 disks to try it out, then added an additional 12 after I had put 10TB of data or so on the pool. Now it is taking FOREVER to "balance" the raid1 with the new disks added. I realized that I actually want to do raid5, to get the maximum available capacity without losing the entire array to a single disk failure. Ideally, raid0 would give me the most space, but it seems I would lose the entire array if a single disk fails, which I don't want. I don't mind losing the data on the failed disk, but I don't want to lose the entire array/pool of data at once from one disk failure.

     

    To the point: can I cancel the balance and then immediately switch it to raid5? The balance is at 43% done and has been running for about 2 days.
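    From the console, the running balance can be inspected and cancelled; a sketch, assuming the pool mounts at /mnt/pool1:

    ```bash
    btrfs balance status /mnt/pool1   # shows percent complete
    btrfs balance cancel /mnt/pool1   # stops once the current block group finishes
    # A new convert balance to raid5 can then be started right away.
    ```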

  11. 5 hours ago, JorgeB said:

    There have been some spin-up issues with some hardware; disable spin-down to test and see if it makes any difference.

     

    I've pretty much confirmed it is literally just the spin-up time of the disks. If I do a FS check on the disk when it is spun down then I will get the errors posted above. The check will fail and the disk will be spun up moments after (automatically). If I do the check again when the disk is spun up then it works perfectly and there are no errors. It is simply a timeout issue while waiting for the disk to spin up.

     

    It takes roughly 3s longer to spin up than a disk not on the netapp, and that difference is the problem.
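    My workaround idea is to force the spin-up before running the check; a sketch (device name hypothetical):

    ```bash
    hdparm -C /dev/sdx                            # reports active/idle vs. standby
    dd if=/dev/sdx of=/dev/null bs=512 count=1    # a tiny read forces spin-up
    sleep 10                                      # allow for the shelf's extra delay
    # ...then run the filesystem check once the disk reports active/idle.
    ```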
