neilt0

Everything posted by neilt0

  1. I had 3 4TB drives arrive over 3 days and I "hotswapped" them in as they arrived, preclearing them all at once! That was fun.
  2. It does look like there is an issue with long filenames/long paths/long passwords in nzbget under Docker that doesn't occur when unrar is used outside the Docker: http://nzbget.net/forum/viewtopic.php?f=3&t=1489&p=9733#p9732 Anyone have any thoughts on how to fix it? How do I get unrar 5.11 in the nzbget Docker? Or is there a binary I can use? The unrar in nzbget is 5.00 BETA 8, which is out of date. Cheers, Neil.
  3. I think there may be an issue with the sources, but Needo needs to confirm that. Edge is 1, I think. Yup:
  4. Edge is 14.x. I installed it a while ago, though.
  5. I think I may have worked it out: http://nzbget.net/forum/viewtopic.php?f=3&t=1489&p=9704#p9704 I think the nzbget Docker https://registry.hub.docker.com/u/needo/nzbget/dockerfile/ installs unrar 5.00 Beta 8, which is not the latest unrar, and I believe it fails with either long paths or long filenames. unrar 5.01 doesn't. How do we get the latest unrar (5.17) into the Docker? ETA: This may be a red herring - see the post on the nzbget forum linked above.
  6. Par2 creation is limited by CPU, so you'll see a big speedup with multicore.
  7. There may be a volume mapping issue with either Docker generally or the nzbget Docker specifically. It can't handle unraring when using "long" paths. nzbget outside Docker has no issue with long paths, but inside Docker it does. Something to do with volume mappings?

     This works:
     SHORT.nzb > /mnt/disk6/Movies/SHORT/files_unpacked

     This fails (it's exactly the same NZB, just renamed):
     abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz_PW IS mnbvcxzlkjhgfdsapoiu6Ghd94wKZRsztFq7JLFb75lfDo5r1Erk4NSNU4xjXbHo.nzb > /mnt/disk6/Movies/abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz_PW IS mnbvcxzlkjhgfdsapoiu6Ghd94wKZRsztFq7JLFb75lfDo5r1Erk4NSNU4xjXbHo/files_unpacked

     Obviously, there's an easy workaround by renaming the download, but if you're using a pre-processing script that extracts the password from the filename, that could be broken by this limitation. Unraring manually does work, which suggests it's not an unrar problem, and unrar from nzbget has worked in the past outside of Docker using long paths. Thoughts?
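One way to narrow this down is to reproduce the unrar step by hand with a deliberately long destination, both on the host and inside the container. This is only a sketch -- the 120-character test length and the archive name are placeholders, not taken from the post:

```shell
# Build a deliberately long destination directory and try extracting into it.
# Run this on the host, then inside the container ("docker exec" into it),
# and compare behaviour.
long=$(printf 'a%.0s' $(seq 1 120))   # a 120-character directory name
mkdir -p "/tmp/$long"
echo "created /tmp/$long (${#long} chars)"
# unrar x some_archive.rar "/tmp/$long/"   # placeholder archive name
```

If the host succeeds and the container fails on the same path, that points at the Docker volume mapping rather than unrar itself.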
  8. I trust you are using the multicore par2?
  9. FYI, I'm running 6.0 on an N54L. Docker is much better and easier than I thought it'd be. nzbget runs about as fast in a Docker as in "raw" form. It's even faster after formatting the cache drive to BTRFS. However, 6.0 is still Beta, so you may want to wait for the RC. I upgraded from 5.x to 6.x in about 11 minutes, so it's pretty easy to do. You may have to replace some/all of your apps/plugins from 5.x, but it's not too tricky, thanks to the excellent Dockers and plugins. I pimped my N54L out with 8 drives. It's a great box:
  10. If you have at least two spare SATA ports (and enough power from the PSU), this is the way I'd do it:
      * Preclear both new 6TB drives.
      * Replace the old parity with one of the new 6TB drives. Add the other drive to the array (fast, once precleared).
      * Copy the data from all 3 of the 2TB drives to the other 6TB drive. I'd use rsync and run a short test to see if mdcmd set md_write_method 1 is faster. On my setup, it's not.
      * Run a parity check.
      * Screenshot the current config and do a "new config".
      * Remove the 3 2TB drives.
      * Assign the remaining drives and build parity.
      This way you will have a backup of your data (the data will still be on the 3x 2TB drives) during the parity build. Before all that, I'd also back up any personal, irreplaceable or important data to an online service like OneDrive or Google Drive.
  11. That's still almost a factor of 10 out if it's 10 hours for 3TB! 10 hours I would do. 90 hours, no chance.
  12. (1 TB) / (190 (MB / second)) = 1.4619883 hours ?!
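For what it's worth, the arithmetic checks out (using decimal units, 1 TB = 1,000,000 MB):

```shell
# 1 TB at a sustained 190 MB/s, expressed in hours
awk 'BEGIN { printf "%.7f\n", (1000 * 1000) / 190 / 3600 }'
# prints 1.4619883
```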
  13. I removed my ReiserFS cache drive from Server 1 (unassigned). Then I formatted a cache drive in Server 2 as BTRFS. I took the BTRFS cache drive and put it in Server 1. Assigned it as the cache drive and it showed up as unformatted. Stopped the array, changed the cache drive to BTRFS, started the array, and it showed up correctly. I don't know if this is related to the issues mentioned above, but shouldn't unRAID be able to recognise the drive, no matter the FS?
  14. I scanned the options quickly, but is it correct that there is no option to write the checksums to a separate file -- it only writes them into the files as it scans? Sorry, but there is not a chance in hell I'd use this! I'd only scan read-only shares and write the checksums to a separate file or files.
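The separate-file workflow described above can be done with standard tools instead -- a sketch, assuming GNU md5sum and example paths of my own choosing:

```shell
# Write a checksum for every file under a share into one separate file,
# leaving the scanned files themselves untouched.
write_checksums() {
    find "$1" -type f -print0 | xargs -0 md5sum > "$2"
}
# example:
#   write_checksums /mnt/user/Movies /boot/checksums-movies.md5
# verify later with:
#   md5sum -c /boot/checksums-movies.md5
```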
  15. I thought we weren't supposed to be posting in this thread! I didn't check many MP3s, because I don't have many in my personal data, but of those I did check, some were "corrupt" -- though in fact, I don't think they were. They were minor issues related to reporting incorrect length, etc.

      I checked WAVs, FLACs and MP3s. Zero FLACs were corrupt; a few WAVs had issues, but were also not corrupt -- they were just unusual WAVs belonging to a set of samples that probably had weird stuff embedded that Foobar couldn't handle. My assessment of my tests is that none of the data I checked was corrupted. I haven't been running Beta 8 very long, though.

      Some examples of the Foobar "error" reports:

      1 item could not be correctly decoded. List of undecodable items: "\\micro\dimeforscale movie podcast\DFSMC 032C Popeye\TB REC\tb_robin_1.flac" -- That one was already truncated before writing it to the server.

      9 items could not be correctly decoded. 103 items decoded with minor problems. List of undecodable items: "\\micro\dimeforscale movie podcast\DFSMC 017 The Room\HH recording\huell-part-1.mp3" . . . "\\micro\dimeforscale movie podcast\DFSMC 003 The Cutting Edge\BB review\80590^RecordScratch.mp3"

      I don't think these MP3s are damaged. I checked some and they are not truncated; they are just odd formats that Foobar doesn't like, e.g. the call recording is 16kHz, 256kbps and recorded by a Skype call recorder.
  16. Thank you for the update, Jon. That's good news about the existing files, should that be confirmed. I can sleep better now! ETA: Update deleted!
  17. But does that mean the metadata relating to filenames? In a way, that's worse, as you can't "see" the corruption (or can you?) Having filenames swapped over everywhere would be a disaster.

      When I said metadata, I meant metadata as in small files typically are for things like application data such as Plex's Media Library, etc. Large files as in media content shouldn't be as affected by this. The worst part of this bug is that it's a silent corruption in that there is no identifying it with reiserfsck.

      OK, thanks. You might want to make that crystal clear in future posts. Some users have reported file pointers/actual ReiserFS metadata being corrupted, although we don't know whether that's coincidental: http://lime-technology.com/forum/index.php?topic=35161.msg327479#msg327479

      Any news on whether the bug affects existing files on the drive -- i.e. files not being written or overwritten? Cheers, Neil.
  18. But does that mean the metadata relating to filenames? In a way, that's worse, as you can't "see" the corruption (or can you?) Having filenames swapped over everywhere would be a disaster.
  19. Does Linux 7z support RAR5? The release version for Windows doesn't. I didn't try the beta, though.
  20. No, because the checking is read only. I'm only checking shares that are made read-only (e.g. /mnt/user/Movies, not /mnt/disk6/Movies).
  21. OK, so all I've been testing so far is my own data -- a small subset of the data on my server, but obviously it's the stuff I care about most. I tested about 100GB of FLACs, plus some MP3s and WAVs. All are uncorrupted. There's no quick way of checking if filenames are corrupted/transposed, though. I've also tested 82 RARs (28GB). No corruption. If someone wants to put up a little shell script that can run ffmpeg recursively on /mnt/user/Movies, I'm up for trying that on the 24TB of Movies on my server! http://lime-technology.com/forum/index.php?topic=35161.msg327534#msg327534 Cheers, Neil.
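In case it helps, here's a rough sketch of such a script. The extension list and log location are my assumptions; ffmpeg decodes each file to the null muxer, and anything that fails to decode gets reported:

```shell
# Recursively decode-test media files under a directory tree;
# failing files are printed to stdout, one per line.
check_media() {
    find "$1" -type f \( -name '*.mkv' -o -name '*.mp4' -o -name '*.avi' \) 2>/dev/null |
    while read -r f; do
        ffmpeg -v error -i "$f" -f null - 2>/dev/null || printf 'DECODE ERROR: %s\n' "$f"
    done
}
# example: check_media /mnt/user/Movies > /boot/ffmpeg-errors.log
```

Expect this to take a long time on 24TB -- it reads and decodes every file in full.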
  22. I found a Windows app that will check a batch of RAR files for integrity: http://www.extractnow.com/usage.php#rclick Slower than doing it natively on the server, but it means I don't have to learn how to write code!
  23. From the command line, this works to test the integrity of any RAR file: unrar t filename.rar Again, a script could traverse directories to check all RAR files on a drive (or even more than one).
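A minimal sketch of that traversal (the example path is mine; `-idq` just silences unrar's banner and progress output):

```shell
# Test every RAR under a directory tree; print only the failures.
check_rars() {
    find "$1" -type f -name '*.rar' 2>/dev/null |
    while read -r f; do
        unrar t -idq "$f" >/dev/null 2>&1 || printf 'FAILED: %s\n' "$f"
    done
}
# example: check_rars /mnt/disk6/Movies
```

Note that for multi-volume sets this tests from each .rar volume it finds, so split archives may be checked more than once.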