Everything posted by neilt0

  1. Perhaps a mod can split out the off-topic discussions in the other thread. On with more data integrity checking -- I mentioned audio and video in other posts. Some of my data is RAR'd. Here's a way to check all your RAR files for errors in one fell swoop: http://superuser.com/questions/359245/how-do-i-check-many-rar-files-for-corruption-at-once ETA: That doesn't appear to work with RAR5 archives. Checking for another solution now...
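     Here's a minimal sketch of that "one fell swoop" check, assuming Python 3.7+ and the unrar command-line tool on the PATH (unrar t tests an archive and returns non-zero on failure; the start path is just an example):

         import subprocess
         from pathlib import Path

         # Walk the tree and test every RAR archive, collecting the failures.
         bad = []
         for rar in Path("/mnt/user").rglob("*.rar"):
             result = subprocess.run(["unrar", "t", "-idq", str(rar)],
                                     capture_output=True, text=True)
             if result.returncode != 0:
                 bad.append(rar)
                 print("FAILED:", rar)

         print(len(bad), "corrupt archive(s) found")

     Recent unrar builds understand RAR5 archives as well, which may sidestep the RAR5 problem above.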
  2. On the topic of error checking, given that a lot of the files we have on our servers are audio and video (audio was covered earlier with the foobar plugin), would this be an easy way to check video file integrity: http://superuser.com/questions/100288/how-can-i-check-the-integrity-of-a-video-file-avi-mpeg-mp4 We'd just need a script to traverse all directories, check every file, then parse out any error messages, right? Something like the sketch below.
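     A sketch of the traverse-and-parse part, assuming Python 3.7+ and ffmpeg on the PATH (the -v error ... -f null - incantation decodes the whole file, discards the output, and prints only errors; the path and extension list are examples):

         import subprocess
         from pathlib import Path

         VIDEO_EXTS = {".avi", ".mkv", ".mp4", ".mpg", ".mpeg"}

         for path in Path("/mnt/user/Movies").rglob("*"):
             if path.suffix.lower() not in VIDEO_EXTS:
                 continue
             # Decode everything, send the result to the null muxer,
             # and keep whatever error messages ffmpeg emits.
             result = subprocess.run(
                 ["ffmpeg", "-v", "error", "-i", str(path), "-f", "null", "-"],
                 capture_output=True, text=True)
             if result.stderr.strip():
                 print(path)
                 print(result.stderr)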
  3. Thanks for the tip. I ran it on the archive of original audio recorded for my podcast (905 FLAC files, 97.7GB) and it found only one corrupt file, and that file was already known to be corrupt: it was a failed upload from one of my co-hosts. That's also a good sign, as the verifier correctly identified a known truncated file, which suggests none of my important source files have been truncated by the bug. Whether they have been jumbled up and given the wrong filenames, as reported before, is yet to be determined. At some point, I'll pull down all my backup files from OneDrive (some recent files have not been backed up, but most have) and do a byte compare (sketch below). The files were on a volume (drive) that has had about 1 million writes to it since booting with Beta 6, so that's encouraging, I think. That drive does have a lot of other files I care about, so it'll take a while to check the rest.
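     For the byte compare, something like this sketch should do, assuming Python 3 and that the OneDrive copies have already been pulled down to a local directory (both paths are examples; filecmp.cmp with shallow=False compares actual file contents rather than just stat data):

         import filecmp
         from pathlib import Path

         LIVE = Path("/mnt/disk1/Podcast")      # files on the array (example path)
         BACKUP = Path("/mnt/backup/Podcast")   # local copy of the OneDrive backup

         for live_file in LIVE.rglob("*.flac"):
             backup_file = BACKUP / live_file.relative_to(LIVE)
             if not backup_file.exists():
                 print("NOT BACKED UP:", live_file)   # recent files may be missing
             elif not filecmp.cmp(live_file, backup_file, shallow=False):
                 print("MISMATCH:", live_file)

     A byte-for-byte mismatch would also catch the "right filename, wrong contents" jumbling mentioned above, since a swapped file won't match the backup stored under its name.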
  4. Nope: This states it's a potential problem: http://lime-technology.com/forum/index.php?topic=35161.msg327143#msg327143 I know other people have reported file corruption, but those reports are vague and are conflating overwriting with corrupting other non-written files. It'd be nice to have that tested, once the LT peeps are done with Beta 9.
  5. And, it's still unanswered as to whether writing new files to a drive corrupts existing files* on the drive. That needs to be tested (by LT) and made clear ASAP! *NOT overwriting files. I don't think it's realistic to expect LT to have these answers. They need to work on getting beta9 tested and released so the bug is no longer there. Per my earlier post: I'd be happy with "We don't know yet", if they don't know. It's a bit ambiguous at this point.
  6. And, it's still unanswered as to whether writing new files to a drive corrupts existing files* on the drive. That needs to be tested (by LT) and made clear ASAP! *NOT overwriting files.
  7. Exactly! My guess is that it's files being written that are susceptible to truncation - I have found one such corruption on my system - and existing files on a drive being written to that are at risk of having the data at the beginning of the file overwritten. OK, well, let's distinguish between existing files being overwritten (easy to check, as they will be timestamped; see the sketch below) and existing files not being overwritten. If the latter are being corrupted, that's a much bigger problem, and it needs clarifying. While I think of it, if the latter is happening, could deleting files on a device potentially cause corruption elsewhere on the device (drive)? I'm going through all my drives to check the last time they were written to, and one has not had new files written to it but has had files deleted. Do I need to worry about the other files on that drive?
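     A sketch of that timestamp sweep, assuming Python 3 (the cutoff should be whenever the beta went in; the date and path here are placeholders):

         import datetime
         from pathlib import Path

         # Anything modified since the beta was installed is a candidate for checking.
         CUTOFF = datetime.datetime(2014, 8, 1).timestamp()   # hypothetical install date

         for path in Path("/mnt/disk6").rglob("*"):
             if path.is_file() and path.stat().st_mtime >= CUTOFF:
                 print(path)

     The caveat is the one raised above: mtime only flags files that were written or overwritten. If files can be corrupted without being written, their timestamps presumably won't change, and a sweep like this won't find them.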
  8. Corruption of 'other' files is definitely mentioned in the original bug report. My guess, though it's only a 'gut' feeling, is that truncation occurs on files as they're written, while garbage at the beginning of a file is the corruption of a file not being written. I took 'other' files to refer to other files on the device that are not being written, which is much worse. I can deal with checking all the files written since Beta 8; checking all the other files on the device (drive?) would be impossible.
  9. Uh-oh. That's bad. Checking files written since Beta 8 was installed is fairly straightforward, albeit time-consuming. It'd be nice if you could do some extensive testing (even if it's after Beta 9 is out) to verify whether corruption of other files on the device happens. If it doesn't, that'd save a lot of time. If it does, how can we tell which other files are corrupted? I take it the timestamps on those files won't change?
  10. I don't think the official HP BIOS runs all the SATA ports at full speed: http://n40l.wikia.com/wiki/Bios That's the main reason why people upgrade the BIOS. But, if you never use ports 5 and 6 for hard drives, it won't matter.
  11. I can't recommend that anyone flash a BIOS if they don't know what they're doing... But, having said that, the wikia is a good resource. The Microserver is not a new design and was actually replaced by a newer model a while ago. My (latest model) N54L is running the custom BIOS.
  12. Also, the containers may be in the .img, but the configs and data for most (all?) container apps will be stored outside the container, so if you lost the .img, it's pretty trivial to restore the settings etc. Personally, I'm not going to bother backing up the .img, but I will back up my settings (and info on the volumes used by each container; sketch below).
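     For the "info on the volumes" part, a quick sketch, assuming Python 3 and the docker CLI (docker inspect emits JSON, and HostConfig.Binds lists each container's host-path mappings; the output filename is arbitrary):

         import json
         import subprocess

         # Dump every container's name and host-path bindings to one file.
         ids = subprocess.run(["docker", "ps", "-aq"],
                              capture_output=True, text=True).stdout.split()
         with open("container-volumes.json", "w") as out:
             for cid in ids:
                 info = json.loads(subprocess.run(
                     ["docker", "inspect", cid],
                     capture_output=True, text=True).stdout)[0]
                 json.dump({"Name": info["Name"],
                            "Binds": info["HostConfig"].get("Binds")}, out)
                 out.write("\n")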
  13. The custom BIOS allows you to run SATA ports 5 and 6 at full speed. They are the internal SATA port and the e-SATA port at the back. If you plan on using more than 4 drives, you should upgrade the BIOS: https://www.google.co.uk/search?q=hp+n54l+custom+bios&oq=hp+n54l+custom+bios&aqs=chrome..69i57j0l3.11676j0j4&client=ms-android-motorola&sourceid=chrome-mobile&espv=1&ie=UTF-8
  14. You can do it; I've done it in the past. You can just mount the drive as cache manually, but it was horribly slow (USB 2.0 2.5" drive). Maybe if unRAID supported USB 3.0 (does it support USB 3.0 yet?), it might be worth a try with a fast flash drive.
  15. I've searched the forum, but can't see this mentioned. If I upgrade from unRAID 5 to 6.0B8, do I need to run the new permissions script (before setting up Docker)? I didn't do that on my Server 2, and I'm wondering if I should do it for Server 1, and whether it will fix problems like this before they occur: http://lime-technology.com/forum/index.php?topic=34279.msg325331#msg325331 Cheers, Neil.
  16. The tests I have seen that have been done by forum members suggest there is no discernible difference in speed between the different formats. Thanks. I may try xfs at some point, anyhoo.
  17. Which FS will be faster if I'm writing multiple files to it simultaneously (say 5)? A cursory glance says XFS might be a good choice? I don't care about drive pools at the moment. If I do in future, I can always reformat the cache drive.
  18. Can I use XFS on the cache drive? I have 6.0B8 on my second server, but it has no cache drive, so I can't check there. The release notes suggest no, but a post in this thread suggests yes. Cheers, Neil.
  19. Thanks. That's not my post you quoted, though! Can I also use the method I added in the "ETA 2" of my actual post, using the extended docker page and these variables: "Container volume: Host path:"? I think I'd be happier using several of those, e.g. /mnt/disk6/Movies mapping to d6Movies, rather than having something have access to /mnt, wily and indeed nilly.
  20. OK, I got the latest build running by using Needo's plugin with the EDGE variable set to 1. I may try gfjardim's build again. I didn't run the new permissions script, as I didn't think that was necessary going from 5.0 to 6.0; I didn't see it in the 6.0 readme. If it's there, it needs to be made clearer, I think. Anyhoo, that's all working.

     However, I don't know if I'm doing something wrong, but the docker doesn't seem to be able to access disks directly -- e.g. putting /mnt/disk3 in a category as the unpack directory -- should that work? It's one of the first questions I had when Docker was first mentioned here, and I'm not sure I ever really got an answer: http://lime-technology.com/forum/index.php?topic=33805.msg312197#msg312197 It's not the end of the world if I can't set up nzbget to unpack to any drive I want in a category, rather than the single redirect that's used when you install the plugin, but it's definitely going to be a pain if you can't.

     ETA: It IS the end of the world if you can only set one drive for "download" and unpack to that same drive only, as that has a huge performance hit, especially in unRAID.

     ETA 2: I just realised "Container volume: Host path:" may fix that. Will play around with it!
  21. That's exactly right. I'm using my testing server first, with no cache disk, but you don't need to format anything for BTRFS, as that's now a loopback thingy. It was very straightforward; I just followed the upgrade directions, then installed the extended docker page. I'm going to try Needo's nzbget next, as I can't get gfjardim/nzbget to upgrade.
  22. A quick note to say thank you to gfjardim for your work here and also on the extended Docker config plugin. I upgraded from unRAID 5.x to 6.x and was amazed at how easy it was to set up Docker and then install nzbget from the extended Docker plugin. However, I can't seem to update nzbget. I get this error: I'll go through the rest of the thread to see if I missed anything.