myMain Questions



There's NO reason to limit drives to 90%.  Writes to the drives DO get slow as they get very full ... I assume the file system is "thinking" about just how to allocate the data -- but that has NO impact on reads, so for data that's basically write-once and leave forever (like my media collection), there's no reason to limit how full you get the disk.

 

If I could have filled it to the last byte, I'd have done so  :)

 

But I certainly agree that if you fill the drives to the last GB, you're doing VERY well!!  All of my "Full" drives have under 1GB free ... and I certainly consider them indeed "Full".  I plan to fill them all up to the point where there's not enough space for another DVD ... but whether they all get under 1GB is questionable.

 


Is there a likelihood MyMain will become an independent plugin that can work with the standard WebGUI and not rely on unMENU?

 

I copied this question to the main MyMain thread [ http://lime-technology.com/forum/index.php?topic=32880.msg302304#msg302304 ] ... I assume bjp will respond there (I don't know whether he plans to do this or not -- personally, I'm happy with it in UnMenu, since I also need the UPS support and the disk testing it provides).

 

 


There's NO reason to limit drives to 90%.

There is if, like me, you are adding files to subfolders and you don't want those subfolders split between drives.  Purely a cosmetic reason, but a reason nonetheless.  If I could afford the drives I would probably keep mine at about 80%, but 90-95% depending on the drive size is my compromise.  I try to keep at least 185GB free so that I have some copy room, and I move a subfolder off if I drop below 185GB after a copy completes.
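For anyone scripting that kind of cushion, here is a minimal sketch (assumptions: GNU df is available, and /mnt/disk1 is a hypothetical mount point -- this is not anything built into unRAID):

```shell
# Return success (0) if the mount point still has at least the requested
# number of GB free; failure (1) otherwise.
min_free_ok() {
  local mount="$1" min_free_gb="$2"
  local avail_gb
  # GNU df: print only the "available" column in 1GB units, strip non-digits.
  avail_gb=$(df -BG --output=avail "$mount" | tail -n 1 | tr -dc '0-9')
  [ "$avail_gb" -ge "$min_free_gb" ]
}

# Example: refuse to start a copy if disk1 is below the 185GB cushion.
# min_free_ok /mnt/disk1 185 || echo "disk1 below 185GB free - move a subfolder off first"
```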

There's NO reason to limit drives to 90%.

There is if, like me, you are adding files to subfolders and you don't want those subfolders split between drives.  Purely a cosmetic reason, but a reason nonetheless.  If I could afford the drives I would probably keep mine at about 80%, but 90-95% depending on the drive size is my compromise.  I try to keep at least 185GB free so that I have some copy room, and I move a subfolder off if I drop below 185GB after a copy completes.

 

I copy my own files to the array, not letting user shares take on that responsibility. The copy speed to the protected array is so much better than it was when the cache drive feature was introduced. I like to think that Tom was actually inspired by my post about using what I called a Staging disk share, that I mounted separately. I used to copy files there and had an unmenu plugin that would allow me to move files to different drives overnight (gave me a lot of control over what went where). But with write speeds tripling since then, I usually just copy directly to the array.

 

You can use user shares for reading but do your own copying to the physical disks. They still show up in the user shares.


There's NO reason to limit drives to 90%.

There is if, like me, you are adding files to subfolders and you don't want those subfolders split between drives.  Purely a cosmetic reason, but a reason nonetheless.  If I could afford the drives I would probably keep mine at about 80%, but 90-95% depending on the drive size is my compromise.  I try to keep at least 185GB free so that I have some copy room, and I move a subfolder off if I drop below 185GB after a copy completes.

 

I copy my own files to the array, not letting user shares take on that responsibility. The copy speed to the protected array is so much better than it was when the cache drive feature was introduced. I like to think that Tom was actually inspired by my post about using what I called a Staging disk share, that I mounted separately. I used to copy files there and had an unmenu plugin that would allow me to move files to different drives overnight (gave me a lot of control over what went where). But with write speeds tripling since then, I usually just copy directly to the array.

 

You can use user shares for reading but do your own copying to the physical disks. They still show up in the user shares.

 

One more thing - the 90% full recommendation seems to be based on being able to defragment the drive. Without at least 10% available, some defrag utilities don't work or work very inefficiently. I get that. But for me, the drives are almost WORM, so defragging is not an issue.

 

One use for a GB or two on a disk is to store some PAR2 blocks. But you need to employ some special procedures to do an actual recovery if the disk is near full. Basically, you can copy some of your large files to another disk. You can then tell QuickPar about those files so it has all the blocks it needs to do a recovery.
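The workflow above can be sketched with par2cmdline (an assumption -- QuickPar is a Windows GUI; par2cmdline is a command-line counterpart, and the folder path and 1% redundancy level here are illustrative):

```shell
# Create PAR2 recovery blocks for everything in a folder.
# Skips gracefully if par2cmdline is not installed.
make_recovery_blocks() {
  local folder="$1"
  command -v par2 >/dev/null 2>&1 || { echo "par2 not installed; skipping"; return 0; }
  # -r1 = ~1% redundancy, which only costs a little of the disk's remaining space.
  ( cd "$folder" && par2 create -r1 recovery.par2 ./* )
}

# make_recovery_blocks /mnt/disk3/Movies   # hypothetical media folder
# par2 verify recovery.par2                # later: check the files against the blocks
# par2 repair recovery.par2                # rebuild damaged files if verify fails
```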

 


I copy my own files to the array, not letting user shares take on that responsibility. The copy speed to the protected array is so much better than it was when the cache drive feature was introduced. I like to think that Tom was actually inspired by my post about using what I called a Staging disk share, that I mounted separately. I used to copy files there and had an unmenu plugin that would allow me to move files to different drives overnight (gave me a lot of control over what went where). But with write speeds tripling since then, I usually just copy directly to the array.

 

You can use user shares for reading but do your own copying to the physical disks. They still show up in the user shares.

That's exactly how I use my array - write to the disk share and read from the user share.  But currently running series are always adding new episodes, and if I run out of space I have to split the directory or copy it somewhere else first.  So I leave my 185GB of space, which is large enough for a whole season of a single series, possibly more.  If I only had recordings of old shows I would fill the drive up as completely as I could.
  • 3 weeks later...

I'm using the "Fill-Up" method (with a cache drive) and I have the min space set to 20GB (20000000) for my share.  I also have split-level set to nothing (field is empty, so I assume the value is 0?)

 

For a while now, disk1 has been showing as "FULL" and everything was being written to disk2, which is what I want.  I like it this way so that a minimum # of disks are spinning when doing reads.

 

Now today, suddenly, I had mover errors overnight where it said I was out of disk space.  Sure enough, MyMain was now showing disk1 as "100%" instead of "FULL", and it only had 1MB of space left (!!).

 

I'm not sure exactly how much free space was left on disk1 before it suddenly went from FULL to 100% ... my question is, why did this happen? What triggers MyMain to show a disk as FULL vs 100% (or whatever the fill percentage is)?  I haven't changed the "Fill-Up" free space value, so I'm confused why unRaid suddenly decided to start writing to disk1 again.
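For what it's worth, a Fill-Up-style allocation rule can be sketched like this (purely an illustration, not unRAID's actual code): the first disk whose free space exceeds the share's min-free value gets the write, so anything that pushes disk1's free space back above the threshold makes it eligible again.

```shell
# Pick the first disk whose free space (KB) exceeds the min-free setting.
# Disks are given as "name:free_kb" pairs; prints the chosen disk name.
pick_fillup_disk() {
  local min_free_kb="$1"; shift
  local pair
  for pair in "$@"; do
    if [ "${pair#*:}" -gt "$min_free_kb" ]; then
      echo "${pair%%:*}"
      return 0
    fi
  done
  return 1   # every disk is below min-free
}

# Example: min free 20000000 KB (20GB); disk1 has ~1MB free, so disk2 wins:
# pick_fillup_disk 20000000 disk1:1024 disk2:500000000
```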

 

I realize this isn't a problem with MyMain, but it's the only place where I saw the change from FULL to 100%, so something changed "under the hood". 

 

Thanks in advance for your help!


I'm using the "Fill-Up" method (with a cache drive) and I have the min space set to 20GB (20000000) for my share.  I also have split-level set to nothing (field is empty, so I assume the value is 0?)

 

For a while now, disk1 has been showing as "FULL" and everything was being written to disk2, which is what I want.  I like it this way so that a minimum # of disks are spinning when doing reads.

 

Now today, suddenly, I had mover errors overnight where it said I was out of disk space.  Sure enough, MyMain was now showing disk1 as "100%" instead of "FULL", and it only had 1MB of space left (!!).

 

I'm not sure exactly how much free space was left on disk1 before it suddenly went from FULL to 100% ... my question is, why did this happen? What triggers MyMain to show a disk as FULL vs 100% (or whatever the fill percentage is)?  I haven't changed the "Fill-Up" free space value, so I'm confused why unRaid suddenly decided to start writing to disk1 again.

 

I realize this isn't a problem with MyMain, but it's the only place where I saw the change from FULL to 100%, so something changed "under the hood". 

 

Thanks in advance for your help!

 

As I think you understand, myMain's full vs 100% is arbitrary. It is totally independent of anything related to user shares. I am surprised it would have changed from Full to 100% without any deleted files. Are you sure you didn't delete something or overwrite something with a smaller file?

 

I will look up the logic it uses so I can let you know. 


As I think you understand, myMain's full vs 100% is arbitrary. It is totally independent of anything related to user shares. I am surprised it would have changed from Full to 100% without any deleted files. Are you sure you didn't delete something or overwrite something with a smaller file?

 

I will look up the logic it uses so I can let you know.

Thank you! Yes, I do understand that. It's certainly possible I did delete something and then it was overwritten with a smaller file.  However I'm surprised it was still showing 100% when it only had 1MB or so left available.

 

I thought maybe there was some flag that unRaid set that said a disk was "full" and was no longer eligible for writing, and MyMain was using that determination to show FULL.  So I thought the switch to 100% might have been a symptom of an underlying cause in UnRaid, because it appeared to coincide with when unRaid started writing to disk1 again when it had previously been writing to disk2, which is what I wanted/expected.

 

I guess I'll just have to switch to high-water.  Ah well.


As I think you understand, myMain's full vs 100% is arbitrary. It is totally independent of anything related to user shares. I am surprised it would have changed from Full to 100% without any deleted files. Are you sure you didn't delete something or overwrite something with a smaller file?

 

I will look up the logic it uses so I can let you know.

Thank you! Yes, I do understand that. It's certainly possible I did delete something and then it was overwritten with a smaller file.  However I'm surprised it was still showing 100% when it only had 1MB or so left available.

 

I thought maybe there was some flag that unRaid set that said a disk was "full" and was no longer eligible for writing, and MyMain was using that determination to show FULL.  So I thought the switch to 100% might have been a symptom of an underlying cause in UnRaid, because it appeared to coincide with when unRaid started writing to disk1 again when it had previously been writing to disk2, which is what I wanted/expected.

 

I guess I'll just have to switch to high-water.  Ah well.

 

Looks like 1.5 million bytes free is the key number. If you are under that - it says you're full!
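That heuristic can be sketched as follows (the 1,500,000-byte figure is the one quoted above; the df call in the comment is an assumed way to read free space, not myMain's actual code):

```shell
# myMain-style "FULL" heuristic: under ~1.5MB of free space counts as full.
is_full() {
  local free_bytes="$1"
  [ "$free_bytes" -lt 1500000 ]   # below the threshold => show "FULL"
}

# free=$(df -B1 --output=avail /mnt/disk1 | tail -n 1)   # hypothetical mount point
# if is_full "$free"; then echo FULL; else echo "100%"; fi
```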


Just a comment about the apparent HPAs: the syslog will generally show very clearly if an HPA is found on a drive when it is first set up.  At least the ATA drivers would; I'm not sure about the newer SAS modules like mvsas and mpt2sas.

 

That 32KB is actually 32MB (those numbers are in KB), still a trivial amount on so large a drive.

  • 2 weeks later...

... and if you ever need an extra 64MB of storage in your array, you could always "fix" your 2 drives and get back that wasted 32MB in each of them  8) 8)

 

Just a quick follow-up to this discussion. I finally got around to taking my array offline to fix the wdidle issues and was adding a couple of new drives to the array - one of them an external drive from a WD cage. I tried using HDAT2 on it and it reports there is no HPA on the disk. I also pulled one of the 4TB disks I had previously assigned to the array (but emptied and removed from the array first), and HDAT2 reports no HPA on it either.

 

MyMain still reports the issues with the disks though (I tested the two that show out of the array).

 

I can still hide the warning, but it's strange that HPA does not seem to be the issue (assuming HDAT2 is the best tool to verify).

[Attached screenshot: MyMain.png]


Very interesting -- did you try simply using the SetMax function in HDAT2 ??

 

... Not sure how the size could be lowered without an HPA, but SetMax should always do what its name says.  It's definitely strange that these drives are smaller than your other 4TB drives!!

 


SetMax did not show up as an option. I was able to browse hidden partitions, but it showed none.

 

I also tried hdparm -N /dev/sdX, but it reported an error and wouldn't even provide results.

 

I sort of did this all quickly last night before going to bed, so I didn't record all the relevant details, but I will try to redo it. I am travelling tomorrow for a week and wanted to get the new drives in before then to start preclearing, so I was more focused on that; but since two of the drives were being reported smaller, I wanted to try the hdparm and HDAT2 commands.

 

I will try and redo later today and post more details.

 

 

  • 5 weeks later...

myMain has a list of common sizes of normal commercial disks. It compares the size of each disk against those values. If it doesn't find an exact match, it suspects that the motherboard may have installed an HPA on the disk. HPAs are not the end of the world, but for a number of reasons, motherboards that add HPAs to disks can be dangerous to your array.

 

As disks get bigger we have had to add entries to the list of normal sizes. The most recent entry is for 4TB drives, and the value is 3,907,018,532.
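The size check can be sketched against /proc/partitions, which lists device sizes in 1K blocks on Linux (the helper function and the device name are illustrative, not myMain's actual code; the 3,907,018,532 value is the known-good 4TB size from above):

```shell
expected_4tb_kb=3907018532   # known-good size in KB for a normal 4TB drive

# Print a device's size in 1K blocks (column 3 of /proc/partitions, where
# column 4 is the device name). A second argument lets you point at a test file.
disk_size_kb() {
  awk -v dev="$1" '$4 == dev { print $3 }' "${2:-/proc/partitions}"
}

# size=$(disk_size_kb sda)    # hypothetical device name
# [ "$size" = "$expected_4tb_kb" ] || echo "possible HPA: $size != $expected_4tb_kb"
```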

 

Just above the drive table, to the right side, there is a prompt that says "Select View". Click on the view called "Detail". Find the 4TB drive in the list that is reporting a possible HPA issue, and look at the value in the "Size (k)" column. Tell me what number you see there. It should match the value above.

 

I have also noticed the "GPT warning" message. It is not causing any harm. Need to investigate that further.

 

I'm getting the HPA warning in mymain with my 4TB disk.  The size IS listed as 3,907,018,532

root@Tower:~# hdparm -N /dev/sda

/dev/sda:
max sectors   = 7814037168/7814037168, HPA is disabled

 

I'm currently upgrading parity ... should that have anything to do with it?

 

Jim

