docbillnet

Everything posted by docbillnet

  1. It is not. Initially I created all the filesystems with the default settings, which is xfs for disk1, disk2, and disk3, and btrfs for the cache. But then I remembered I wanted to try out zfs, so I changed disk1, disk2, and disk3 to zfs. Since zfs is sort of like the next-generation xfs filesystem, I didn't expect any issues leaving the cache on the default filesystem.
  2. This might be exactly the type of issue I was concerned about if I change the filesystem for my cache drive. Do you mind my asking what filesystem you are using? And what error do you get when you do: rm -rf /mnt/cache/appdata/plex_broken/* ?
  3. How so? It seems that if the cache drive had the same settings as the drives in the array, then I would never be able to upload a file to the NAS and have it get stuck in the cache... My concern is not whether I can upload non-UTF-8 files. My concern is that when I simply copy a folder, it uploads and gets stuck... meaning I have to manually fix the issue on the NAS instead of on the device I'm uploading from. It is certainly far less work to reformat a cache that is routinely close to empty anyway than to reformat mostly full hard drives. But my concern is that there might be a really good reason why the cache defaults to btrfs. Maybe zfs and xfs are unsuitable for the cache drive? Otherwise, I would expect Unraid to have defaulted to the same filesystem for the cache drive, to avoid this type of issue.
  4. If I am reformatting a drive, is there any reason not to convert the cache drive to zfs instead? It is not the UTF-8-only restriction by itself that is the problem; it is the inconsistency in options between the drives that is causing files to get stuck. Out of 20 TB worth of files, I think it is just 6 or so that appear not to be UTF-8, so it makes sense to convert those files. But it would save a lot of hassle if the uploads to the NAS failed outright, rather than having files fail to move once they are already on the NAS. It is an odd default to make the cache drive btrfs without compression.
  5. Looks like this is a variation of the same issue I just reported for: Only in my case it was slightly worse, because I could copy the files to the network share, but then they get stuck on the cache drive. I know some filesystems strictly check the character encoding of filenames, while others don't. The files were originally generated by Windows users and copied to an ext4 filesystem via sftp. Then I copied them onto my hard drive using btrfs. The cache drive also uses btrfs, but I have the HDDs formatted with zfs. So my guess is that all these other filesystems allow the Windows-encoded filenames without error, but zfs does not. That is only a guess, though. In my case I'll resolve the issue by putting the folder with the offending filenames into a tar file (roughly sketched below). But that is hardly a good general solution.
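     Something along these lines is what I have in mind; the share and folder names are just placeholders for my own paths:

         # Pack the folder with the bad filenames into an archive whose own name is plain ASCII,
         # then remove the original so only the archive is left for the mover to handle.
         cd /mnt/cache/backups
         tar cjf offending-folder.tar.bz2 "offending-folder" && rm -rf "offending-folder"

     The raw filename bytes then only exist inside the archive, so the mover can push the .tar.bz2 to the zfs array without tripping over the encoding.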
  6. Here is a short way to reproduce:

         root@Higgs:/mnt/cache# base64 --decode <<+ |tar xfj -
         QlpoOTFBWSZTWUvfjaYAAGp/pMiAAGRAAf+AOARYwW7t3iABAAAAgAgwALm2IiNqGjQAAAADTIRM
         p6g0AAAAAABJEp6mQ0HqaAaA0A0Mll1eKiUYggomSSLzAkUyhhnammM2MvjrAwkDgycsHLbf3yrn
         n1Xea6jJwzCDf2kKAzDbYkIDuCkwESFJRO0E0oWE1aAI0ilgCMjNGKcLSwzEgyEhhOSkPJ7eycJ9
         wMZTzaXC/IUdcAmBgpWp+gQIJOsJVCKAcn9k3i4lsxCQfxdyRThQkEvfjaY=
         +
         root@Higgs:/mnt/cache# cd /mnt/disk1
         root@Higgs:/mnt/disk1# base64 --decode <<+ |tar xfj -
         QlpoOTFBWSZTWUvfjaYAAGp/pMiAAGRAAf+AOARYwW7t3iABAAAAgAgwALm2IiNqGjQAAAADTIRM
         p6g0AAAAAABJEp6mQ0HqaAaA0A0Mll1eKiUYggomSSLzAkUyhhnammM2MvjrAwkDgycsHLbf3yrn
         n1Xea6jJwzCDf2kKAzDbYkIDuCkwESFJRO0E0oWE1aAI0ilgCMjNGKcLSwzEgyEhhOSkPJ7eycJ9
         wMZTzaXC/IUdcAmBgpWp+gQIJOsJVCKAcn9k3i4lsxCQfxdyRThQkEvfjaY=
         +
         tar: ./BAW Weekly Bookings Reconciliation Report \226 APAC Scheduled.XLS.gpg: Cannot open: Invalid or incomplete multibyte or wide character
         tar: Exiting with failure status due to previous errors

     This simply extracts a base64-encoded tar file at each of the respective mount points. Assuming /mnt/cache is a btrfs filesystem and /mnt/disk1 is zfs, you should see similar results. The tar file contains just one of the files that was causing me issues, truncated to 0 bytes, since it is only the filename that matters.
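     If anyone wants to confirm whether this is the zfs utf8only property at work, the dataset setting can be queried like below (disk1 here is just my pool name; I have not checked what Unraid actually sets at creation time):

         # Show whether the zfs dataset rejects filenames that are not valid UTF-8.
         zfs get utf8only disk1
         # normalization is the related property controlling Unicode normalization of names.
         zfs get normalization disk1

     utf8only can only be set when a dataset is created, so if it is on, the only change on the zfs side would be recreating the dataset.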
  7. Sample output going directly to the array, where /mnt/higgs/backups is an nfs mount to my NAS running Unraid:

         # sudo rsync -aPs --links /share/Backups/ /mnt/higgs/backups/
         root/var/lib/cronsrev/usrpftp.servicesource.com:20/From RedHat/NA/BAW Weekly Bookings Reconciliation Report \#226 NALA - Schedule.2015-03-28-09-05-03.xls.gpg": Input/output error (5)
         rsync: [generator] recv_generator: failed to stat "/mnt/higgs/backups/briemers2-root/var/lib/cronsrev/usrpftp.servicesource.com:20/From RedHat/NA/BAW Weekly Bookings Reconciliation Report \#226 NALA - Schedule.2015-04-07-10-45-55.xls.gpg": Input/output error (5)
         rsync: [generator] recv_generator: failed to stat "/mnt/higgs/backups/briemers2-root/var/lib/cronsrev/usrpftp.servicesource.com:20/From RedHat/NA/BAW Weekly Bookings Reconciliation Report \#226 NALA - Schedule.2015-04-11-09-05-28.xls.gpg": Input/output error (5)
         rsync: [generator] recv_generator: failed to stat "/mnt/higgs/backups/briemers2-root/var/lib/cronsrev/usrpftp.servicesource.com:20/From RedHat/NA/Downloads/BAW Weekly Bookings Reconciliation Report \#226 NALA - Schedule.2014-07-26-09-05-53.xls": Input/output error (5)
         rsync: [generator] recv_generator: failed to stat "/mnt/higgs/backups/briemers2-root/var/lib/cronsrev/usrpftp.servicesource.com:20/From RedHat/NA/Downloads/BAW Weekly Bookings Reconciliation Report \#226 NALA - Schedule.2014-07-26-09-05-53.xls.gpg": Input/output error (5)
         rsync: [generator] recv_generator: failed to stat "/mnt/higgs/backups/briemers2-root/var/lib/cronsrev/usrpftp.servicesource.com:20/From RedHat/NA/QA/BAW Weekly Bookings Reconciliation Report \#226 NALA - Schedule.2014-11-15-09-05-30.xls.gpg": Input/output error (5)

     I will see if I can uuencode the names into a small script, since the translation when posting the output changes the character encoding...
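     In the meantime, a rough way to list the offending names without the forum mangling them (assuming a UTF-8 locale and no newlines in the filenames; the path is just the one from my rsync above) is:

         # Print every path whose name is not valid UTF-8 in the current locale:
         # -a treats input as text, -x matches whole lines, -v inverts the match,
         # so only lines containing invalid byte sequences are printed.
         find /share/Backups -print | grep -axv '.*'

         # ls -b shows the raw bytes as backslash escapes (e.g. \226, presumably a cp1252 en dash).
         ls -b '/share/Backups/From RedHat/NA'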
  8. I ran into an interesting problem. I have a set of files I can successfully copy to the cache drive, but they cannot then be moved to the array. The issue seems to be filenames that contain Unicode characters. I don't know if it is all Unicode characters or just some. The difference is that the cache is using btrfs and the array is using zfs. I would normally expect filenames to be stored as UTF-8, since that is the de facto Linux standard, but this seems to hold only for btrfs, not zfs. The impact is that if I have a share set to copy to the cache drive, the folder and all the files upload to the cache but then get stuck. If I log in and try manually moving the files, I then see the problem is the character encoding. If I set the files to copy directly to the array, then I simply receive an I/O error when attempting to copy the files to the NAS. Normally I would avoid this type of issue on Ubuntu, Fedora, etc. by making sure I had a default locale set in LC_ALL, such as en_CA.UTF-8. When I check in the console, I see the environment variable LANG=en_US.UTF-8 is set, which should have the same effect, so I don't know why it is not working.
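     If the offending names really are stored as Windows cp1252 bytes rather than UTF-8, one option would be to rename them in place with convmv. This is just a sketch; convmv may not be installed on Unraid, so it might have to run on the machine the files came from, and the path is only an example:

         # Dry run first: convmv only reports what it would rename by default.
         convmv -f cp1252 -t utf8 -r /mnt/cache/backups
         # Repeat with --notest to actually perform the renames.
         convmv -f cp1252 -t utf8 -r --notest /mnt/cache/backups

     That would make the names valid UTF-8 before the mover ever touches them.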
  9. Overall, I'm finding that using a cache drive for an array seems to cause more problems than it solves. E.g. the default seems to be to let the cache fill most of the way before the mover even starts running. Meaning if I upload 500 GB to my 1 TB cache drive today, and the next day do the same thing, the cache drive will still be sitting with 500 GB in it when I start the next upload, even though there was sufficient time for it to have completely cleared. That means I get warnings I'm pretty much destined to ignore, and it can create issues when transferring large files. For example, today I started a backup of disk image files from old computers. I look and see 62 GB left in the cache and a 400 GB image file transferring. The mover is running, but the disk is filling faster than the mover can move. Do I want to know what happens to my transfer if the cache drive fills up? Yes. But it is not an experiment I wish to try today. Now the silly thing is, I can open a console window and run rsync -aP --remove-source-files /mnt/cache/<my share>/ /mnt/disk1/<my share>/, and it runs much faster than my transfer, and much faster than the mover... So I'm wondering, is there a way to give the mover higher priority, so my manual intervention is not necessary?
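     For reference, this is roughly the manual drain I run today (the share name is a placeholder); the nice/ionice wrappers are only my guess at giving it a leg up over the incoming transfer, run from the root console:

         # Drain the cache to disk1 with elevated CPU and I/O priority.
         nice -n -5 ionice -c 2 -n 0 \
             rsync -aP --remove-source-files /mnt/cache/myshare/ /mnt/disk1/myshare/

     What I would really like is for the mover itself to be scheduled this aggressively once the cache passes some threshold.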
  10. Keep in mind that $16 Windows Server license is probably not a legal license. Try using it on a corporate computer and see how hard the auditing hits you... $249/lifetime per NAS is a HUGE price. Now try scaling that up by 30% for those of us paying in CAD. I would certainly entertain pirating, using other NAS software, etc., to avoid that cost. But be realistic. The main reason you want that lifetime license is that you think the yearly cost is going to be more, and you simply do not want to risk having a known security issue on your NAS. Certainly, you can continue to load new docker containers. But would you really risk doing so if your NAS could have a known security exploit? So what are your options? You can build your own NAS; use RAID 10 and it will be far more reliable. Or you can use TrueNAS, Terramaster OS, or one of the many other free NAS options available. Or you could just put it all into one BTRFS RAID-protected volume. You have lots of options. The question is whether Unraid meets your needs best at a cost you are willing to pay. Terramaster OS is basically a RAID 5 or RAID 6 solution, neither of which any data-security expert will tell you to use. TrueNAS supports all the enterprise-acceptable RAID levels. Unraid uses a unique parity solution. Of these three I would trust my data to TrueNAS or Unraid. But if I want to maximize storage I either need to build my own or use Unraid. With 2x14TB and 2x8TB, TrueNAS will give me a reliable 22TB (mirrored pairs: 14+8), while Unraid gives me 30TB (one 14TB for parity, 14+8+8 for data). So if I want storage I go with Unraid; if I want speed, TrueNAS. A larger NAS plus more disks so I could use TrueNAS would cost more than an Unraid license. So either way it comes down to money. And the cost of my time to maintain a custom system is the most expensive option of all.
  11. Oh. That is certainly a reasonable limitation. I really do not know the scenario where I would plug in a USB device, since I have network access at the same speeds or better... One that comes to mind is if I wanted to migrate a disk from zfs to btrfs or vice versa. I might format an external drive in the desired format, copy the data to that external drive, and then copy the data back to the reformatted disk. But there might be a more efficient way to convert filesystem formats.
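     Something like this is what I was picturing (device names and mount points are made up, and I have not verified this is the best approach on Unraid):

         # Copy everything off the zfs disk to a freshly formatted external drive...
         rsync -aHAX --progress /mnt/disk2/ /mnt/usb_backup/
         # ...reformat disk2 as btrfs through the GUI, then copy the data back.
         rsync -aHAX --progress /mnt/usb_backup/ /mnt/disk2/

     Whether that is really more efficient than juggling the data between array disks, I don't know.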
  12. Oh. This is concerning. So this means that if I am using something like an F4-423 and filling all the drive bays and NVMe slots, I would be unable to use my USB ports to back up data?
  13. I am just a few days into my 30-day trial, and I have already decided to purchase. But I am thinking that if I wait until the trial expires before buying, that essentially grants me additional time during which I can receive free updates. Now, reading this, I'm wondering if I should also use the two 15-day trial extensions before purchasing?
  14. It would be really cool if I could combine drives to create the PARITY drive. For example, suppose I have an array of 2x14TB and 2x8TB. Currently, I would use one 14TB for parity and the data array would consist of 14TB and 2x8TB. But even though it wastes 2TB of space, there would be an advantage to 2x8TB parity and 2x14TB of data space. First, the 14TB drives are the newer drives, and if I have a drive failure it is less painful if it is the parity drive(s). Second, if I am willing to operate without parity for a while, I have two drive slots available for swapping in new drives instead of one. In my case I have two backup arrays of 4TB and 3x8TB (no redundancy), so I would waste the 2TB either way, because I cannot back up that space. Being able to combine drives to create the parity disk would help with other disk-swap scenarios as well.