Posts posted by _Shorty

  1. Frankly, I would be more inclined to believe you have exactly zero corrupt files, and are simply one of the few of us for whom the plugin doesn't seem to behave properly.  Files don't usually go corrupt just sitting on a hard drive.  The drive's electronics use algorithms designed to detect whether or not a read was successful, so they can tell when a read was faulty and the data is corrupt.  If the drive isn't reporting read errors, I wouldn't worry about whether your data is corrupt; it likely isn't.  For whatever reason, the plugin doesn't seem to behave properly for everyone that has tried it.  Some of us, perhaps a very small minority, have seen corruption reports from the plugin, but we also have hashes on hand from other hashing tools that say the files haven't actually changed.  The plugin says they have changed, but since the other hashing tools still report the original hashes for those files, it stands to reason that the files have not changed and that something in the plugin is behaving incorrectly.  If the other hash tools were giving different results, then we might suspect actual file corruption.  But with other hash tools still agreeing with previously established hashes, that narrows it down to the plugin being the only one misbehaving.

     

    If you're not getting read errors reported from the hard drive, I wouldn't worry about it, and I'd simply uninstall the plugin.  It would be nice to have an automated hashing system in place, but if it doesn't seem to be working properly it isn't of much use.
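
    For anyone who wants to double-check outside the plugin, here is a minimal Python sketch of the kind of comparison I mean.  The known_hashes.txt file and its layout (sha256sum-style "digest, two spaces, path" lines) are just assumptions for illustration; point it at whatever hash list you already have.

        # Recompute SHA-256 hashes and compare them against a previously
        # saved list, independently of the plugin.  The known_hashes.txt
        # name and its sha256sum-style layout are assumptions.
        import hashlib
        from pathlib import Path

        def sha256_of(path, chunk_size=1 << 20):
            """Hash a file in chunks so large files don't need to fit in RAM."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        # Load the previously recorded hashes.
        known = {}
        for line in Path("known_hashes.txt").read_text().splitlines():
            digest, _, name = line.partition("  ")
            known[name] = digest

        # Re-hash each file and flag any that no longer match.
        for name, digest in known.items():
            status = "OK" if sha256_of(name) == digest else "MISMATCH"
            print(f"{status}  {name}")

    If that reports OK across the board while the plugin is crying corruption, that tells you which one is misbehaving.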

    • Thanks 1
  2. Cache Directories started misbehaving on me a few months back, chewing CPU like crazy after an unknown amount of time, so I stopped using it.  I'm not sure whether that was because I was stuck on an older version of unRAID; at the time I had not yet figured out how to get my machine to behave with the latest version(s) of unRAID.  It is certainly possible there was some issue between the latest version of the plugin and the older version of unRAID that wouldn't exist if unRAID were updated.  I haven't tried it again yet to check whether the issue is gone now that I'm updated.  On the occasions where your machine doesn't sleep, is it possible Cache Directories is chewing CPU for you, too?
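
    If you want to check whether that's what's happening on yours, something like this rough sketch will surface CPU hogs.  It assumes the third-party psutil package is available, and the cache_dirs process name is just my guess at what to look for; adjust for whatever your system actually shows.

        # Rough sketch: sample per-process CPU over one second so a
        # runaway script (e.g. cache_dirs) stands out near the top.
        # Assumes the third-party psutil package is installed.
        import time
        import psutil

        procs = list(psutil.process_iter(["name"]))
        for p in procs:
            try:
                p.cpu_percent(None)   # prime each process's CPU counter
            except psutil.Error:
                pass
        time.sleep(1.0)               # measurement window

        usage = []
        for p in procs:
            try:
                usage.append((p.cpu_percent(None), p.info["name"] or "?"))
            except psutil.Error:
                pass

        for pct, name in sorted(usage, reverse=True)[:10]:
            print(f"{pct:6.1f}%  {name}")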

  3. Thanks for your time, guys.  OK, I was just wondering why it wasn't letting me change anything at all on that page, not just the MTU.  I'll change that to 1500 and try upgrading again.  1500 or 1514?  I seem to recall 1500 being the norm.  I wonder why the newer version doesn't like the other setting that works fine in the older version.
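
    For what it's worth, I believe 1500 and 1514 are two ways of counting the same standard Ethernet frame: 1500 is the payload (the MTU proper), and adding the 14-byte Ethernet header gives 1514.  A quick bit of arithmetic:

        # Standard Ethernet frame sizes.  The MTU counts only the payload;
        # the header and trailing CRC sit on top of it.
        MTU = 1500        # maximum payload per frame
        ETH_HEADER = 14   # dst MAC (6) + src MAC (6) + EtherType (2)
        FCS = 4           # trailing frame check sequence (CRC)

        print(MTU + ETH_HEADER)        # 1514, payload + header
        print(MTU + ETH_HEADER + FCS)  # 1518, the full frame on the wire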

  4. Yeah, it's not letting me change it in the browser settings page, either.  It seems like that's the default, and it has worked fine with 6.5.3 all this time.  Where am I going to find that?  One of the rc files?  Below is a screenshot of the settings page working as usual on the 6.5.3 install that's been humming along fine since 6.5.3's release.  Did something change with the Ethernet driver after that point?

     

    [Screenshot: the network settings page on the 6.5.3 install]
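
    If it does turn out to live in a flat config file, something like the following could flip the value before another upgrade attempt.  To be clear, the /boot/config/network.cfg path and the MTU="..." key are assumptions on my part; check what your install actually uses first.

        # Hedged sketch: rewrite an MTU="..." line in a flat config file.
        # The path and key name below are assumptions, not confirmed.
        import re
        from pathlib import Path

        cfg = Path("/boot/config/network.cfg")  # assumed location
        text = cfg.read_text()
        new_text, count = re.subn(r'^MTU="\d+"', 'MTU="1500"', text, flags=re.M)
        if count:
            cfg.write_text(new_text)
            print(f"updated {count} MTU line(s)")
        else:
            print("no MTU line found; nothing changed")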

  5. I would guess so.  While it appears it may work correctly most of the time for almost everyone, some of us have run into erroneous corrupt-file messages, among other things.  In my case, the "corrupt" files still match the same hashes in other hash utilities, so I'm not sure it's worth bothering with anymore, at least for the moment.  Naturally, it's hard to fix if the dev can't replicate it.

  6. When 6.6.0 came out I tried upgrading to it, only after doing so the machine became unreachable via the network.  Downgrading to 6.5.3 returned the machine to a usable state, and it has been running fine ever since.  It was suggested I may have a hardware problem, but the box has been running just fine on 6.5.3 ever since downgrading from 6.6.0 over a year ago (and it also ran Windows just fine for years before finally being retired from main-machine duty, at which point I tried it with unRAID).  While a hardware issue is certainly not out of the question, given that it functions just fine on 6.5.3, it would seem there is something in 6.6.0+ that for whatever reason doesn't agree with it out of the box.  I just tried the 6.8.0 release candidate and found the same issue again.  Here are some diagnostics from 6.5.3 just now, and if need be I can move the box (it resides in a place where I cannot hook up a monitor, keyboard, or anything else to it) and run the diagnostics from the 6.8.0 RC install.  My guess is I'll likely need to do that anyway to proceed with troubleshooting, but for now here are the 6.5.3 diagnostics: tower-diagnostics-20191018-1745.zip.  I'll go shut down the box and move it to where I can access it locally, as after upgrading it will once again lose network access.  I'll post the diagnostics from the newer version in a few minutes.  Not sure if it will be, but perhaps it will be helpful to have the diags from both versions.

  7. On the subject of defragging: while fragmentation does happen, Linux machines tend to avoid it better than Windows boxes did, especially pre-NTFS.  NTFS is better about it than the filesystems that preceded it, and Linux filesystems do a pretty good job of avoiding it to begin with, though it is still possible.  Typically, though, it doesn't happen often enough, or to a large enough extent, to make defragmenting worth worrying about on Linux machines.
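
    If you're curious how fragmented a given file actually is, filefrag (from the e2fsprogs package) will tell you how many extents it occupies, and it's easy to wrap.  A sketch, assuming filefrag is installed; the file path is just an example:

        # Sketch: count the extents a file occupies via filefrag
        # (e2fsprogs).  One extent means the file is fully contiguous;
        # a large count means genuine fragmentation.
        import re
        import subprocess

        def extent_count(path):
            out = subprocess.run(["filefrag", path], capture_output=True,
                                 text=True, check=True).stdout
            match = re.search(r"(\d+) extents? found", out)
            return int(match.group(1)) if match else None

        print(extent_count("/mnt/disk1/some_large_file.bin"))  # example path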

  8. There is a difference between dedicated network storage and local storage that you are actively using.  You need a certain amount of free workspace on your local machine to ensure it continues to operate properly in all conditions, as it might need space for temporary files or extra pagefile space at any given time, but you do not have that limitation on a network machine dedicated to storing files.  Filling such drives until they are full is not a concern; it makes things as efficient as possible.  Empty space is wasted space in this instance.  I can't think of any reason for maintaining a large amount of free space on such drives.

     

    As for performance falling off, well, that's how hard drives have always been.  The outer cylinders hold more sectors than the inner cylinders, and that dictates this behaviour.  The platters (usually) rotate at a constant speed, so more sectors passing the read heads in the outer cylinders translates directly into more speed, and as you work your way inward the sector count drops and so does the transfer speed.  Actively going out of your way to avoid that is a bit much.  Why not simply buy a 10 TB drive and only use the first half, then?  I mean, you're then multiplying the cost of your storage space by two, since you are ignoring half of it, but hey, at least the performance drop-off is gone, right?
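
    To put rough numbers on the geometry (the radii and the 200 MB/s figure below are made-up but plausible values, just to show the shape of the curve):

        # Back-of-the-envelope: at constant rotation speed, throughput
        # scales with track circumference, i.e. with radius.  These radii
        # are invented but plausible for a 3.5" platter.
        outer_radius_mm = 46.0   # outermost data track (assumption)
        inner_radius_mm = 20.0   # innermost data track (assumption)
        outer_speed = 200.0      # MB/s at the outer edge (assumption)

        # Linear bit density along the track is roughly constant, so bytes
        # per revolution, and therefore throughput, scale with radius.
        ratio = outer_radius_mm / inner_radius_mm
        inner_speed = outer_speed / ratio

        print(f"outer: {outer_speed:.0f} MB/s, inner: {inner_speed:.0f} MB/s "
              f"({ratio:.1f}x falloff across the disk)")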

     

    The unRAID system does a pretty nice job of using up the space you give it.  I wouldn't be so concerned with micromanaging it, as your return on that time investment is rather small, both for the time spent thinking about how to manage it and for the time/energy used to constantly move files around.  Consolidating a few things here and there in order to avoid spinning up more drives can certainly be helpful, and can save you waiting a bit longer each time you access that stuff, but beyond that I wouldn't worry about how things are stored.  It just doesn't seem worth it.

    • Like 1
  9. 9 hours ago, nuhll said:

    Speed, full drives are slower.

    Full drives are slower?  Not really.  The graph shared above is typical of any hard drive.  They're fastest at the start of the disk because there are more sectors per cylinder in the outer cylinders, and as you move to the inner cylinders there are fewer sectors per cylinder, so they're a little slower there.  But this has nothing to do with the drive being full.  It just has to do with the location you're reading/writing at the time.  You could be reading/writing to the inner cylinders even when the drive is completely empty, for example, and you'd still see that relatively slower performance.  You will probably waste more time and energy moving files around than you'll ever save; just leave the system alone to do its job.  Consolidating some similar/related files can save you spinning up multiple drives and waiting around for them, but other than that I don't really see any upside to micromanaging the files.
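
    If you want to see the effect directly, time raw sequential reads at the start, middle, and end of a drive.  A rough sketch: /dev/sdX is a placeholder for your actual disk, it needs root and an otherwise idle drive, and repeated runs will be flattered by the page cache.

        # Rough sketch: time 256 MiB of sequential reads at three offsets
        # on a raw disk to see the outer-to-inner falloff.  /dev/sdX is a
        # placeholder; requires root and an otherwise idle drive.
        import os
        import time

        DEV = "/dev/sdX"     # placeholder: whole disk, not a partition
        CHUNK = 1 << 20      # 1 MiB per read
        TOTAL = 256 << 20    # 256 MiB per sample point

        fd = os.open(DEV, os.O_RDONLY)
        size = os.lseek(fd, 0, os.SEEK_END)

        for label, offset in [("start", 0),
                              ("middle", size // 2),
                              ("end", size - TOTAL)]:
            os.lseek(fd, offset, os.SEEK_SET)
            t0 = time.perf_counter()
            remaining = TOTAL
            while remaining > 0:
                data = os.read(fd, CHUNK)
                if not data:
                    break  # hit end of device
                remaining -= len(data)
            mb_per_s = (TOTAL >> 20) / (time.perf_counter() - t0)
            print(f"{label:>6}: {mb_per_s:6.1f} MB/s")

        os.close(fd)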

    • Thanks 1