
trurl

Moderators
  • Posts: 44,098
  • Days Won: 137

Everything posted by trurl

  1. Just take a look at the documentation provided for each and see what the docker run command is. Here is what is really happening with the unRAID Add Docker page. You are filling in a form with stuff that is used by the docker run command. Once you have filled in the form it is saved to an XML template on your flash so you won't have to enter it again. Community Applications gets docker XML templates from the community docker authors so most of the form is filled in for you. Community Applications also has a feature that tries to figure some of this out from dockers on the hub that don't have an XML template. Try them and see what happens.
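     As a rough sketch of that mapping (the container name, port, path, and variable here are invented for illustration, not taken from any real template), the form fields end up as pieces of a single docker run command:

        # Hypothetical example: each field on the Add Docker form becomes one
        # part of the docker run command the XML template stores -- the Name
        # field -> --name, port mappings -> -p, volume mappings -> -v,
        # environment variables -> -e, and the Repository field -> the image.
        docker run -d \
          --name=MyApp \
          -p 8080:8080 \
          -v /mnt/user/appdata/myapp:/config \
          -e TZ=America/New_York \
          someauthor/myapp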
  2. "So what options are available for us to fix bitrot/whatever?" Restore from backup. Or Squid has a checksum plugin that also makes par2 files, but I haven't tried that functionality yet.
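     For reference, a minimal par2 round trip looks something like this (the filename and the 10% redundancy figure are just examples; the plugin's own naming may differ):

        # Create recovery files with ~10% redundancy, then verify/repair with them.
        par2 create -r10 movie.mkv.par2 movie.mkv
        par2 verify movie.mkv.par2     # detects corruption
        par2 repair movie.mkv.par2     # reconstructs damaged blocks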
  3. If you are then try deleting it. It is easy to recreate from docker templates.
  4. 3, 4, and 5 don't necessarily have anything to do with converting, depending on your specific situation. I was able to convert all my data disks to XFS without removing any drives or rebuilding parity, just by moving things off each drive onto others with free space, then formatting the emptied drive, then repeating with the others. There is a lot in that first thread you linked.
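     If it helps, the move-then-format step can be done disk to disk with something like this (the disk numbers are just an example):

        # Copy everything from disk1 onto free space on disk2, preserving
        # permissions and extended attributes, before formatting disk1 as XFS.
        rsync -avPX /mnt/disk1/ /mnt/disk2/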
  5. That bit about uninstalling Community Repositories was just referring to an old version of this plugin. The Docker Repositories page is built in and can't be uninstalled. Nothing needs to be deleted or reinstalled or anything. This plugin just gives you an easier way to find and create dockers. Nothing about your existing dockers needs to change.
  6. "I am currently running 6.0 beta 15. I am planning on upgrading to the newest version, just haven't gotten that far yet." You can probably assume most plugins only work on 6.1+ at this point. You shouldn't be running a beta anyway.
  7. If the original hash no longer matches the file, then the file has changed since that hash, whether corruption or something else. If the file still plays OK I suspect something has updated the tags in the mp3. I'll let Squid answer the question of which hash his verification would use, but I suspect it would use the one that follows his naming convention; i.e., *.file_extension.md5
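     A minimal sketch of that convention with plain md5sum (the filename is invented):

        # Create a hash file named file.extension.md5, then verify against it.
        md5sum song.mp3 > song.mp3.md5
        md5sum -c song.mp3.md5    # prints OK, or FAILED if the file has changed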
  8. Sounds suspiciously like 100Mb ethernet speed, but your ethtool output says otherwise. Have you tried it from a different computer?
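     For anyone else checking, the negotiated link speed is easy to confirm (the interface name is just an example):

        # A Speed of 100Mb/s here would explain transfers capping near 12 MB/s.
        ethtool eth0 | grep -i speed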
  9. I have multiple [global] sections in mine, one of which is managed by recycle bin.

        [global]
        security = USER
        guest account = nobody
        public = yes
        guest ok = yes
        map to guest = bad user

        [global]
        domain master = yes
        preferred master = yes
        os level = 255

        [crypt]
        path = /mnt/crypt
        valid users = Rick
        write list = Rick
        force user = root
        create mask = 0711
        directory mask = 0711
        browsable = no
        guest ok = no

        #vfs_recycle_start
        #Recycle bin configuration
        [global]
        vfs objects = recycle
        recycle:repository = %P/.Recycle.Bin/%S
        recycle:directory_mode = 0777
        recycle:keeptree = Yes
        recycle:touch = Yes
        recycle:touch_mtime = Yes
        recycle:versions = Yes
        recycle:exclude = *.tmp
        recycle:exclude_dir = .Recycle.Bin
        #vfs_recycle_end
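     If you want to see how Samba merges those repeated [global] sections into one effective configuration, testparm will print the parsed result (path shown is the stock Samba location; unRAID's may differ):

        # -s prints the parsed configuration without pausing for a keypress.
        testparm -s /etc/samba/smb.conf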
  10. Unclear from your description. If you are booting from the flash then the boot menu should be the first thing you see. Maybe check your BIOS boot selections.
  11. Didn't affect me for some reason, maybe because of one of my other plugins. I know a change Squid made moved some things in the webUI for us, like moving Users off the top tabs and into Settings (where it belongs, IMO).
  12. Of course not, but it creates/checks hash files which is what I was asking about.
  13. Tell us more about the export files. I'm sure it was discussed in the bunker thread but it would be helpful to have it here as well. Where are they stored? One hash file per file hashed, one hash file per folder, one hash file per share or drive? Are they compatible with the popular corz checksum application for Windows / Linux?
  14. I followed the bunker thread but never tried bunker because it didn't really fit with what I needed. I create NTFS disks containing specific unRAID shares for offsite backups. I assume the extended attributes hash would not transfer to the NTFS files, and I didn't want to have to manually export these every time I want to make a backup. I am currently using Squid's checksum since it creates separate hash files which get copied to NTFS along with everything else, and it works with shares or disks. I do like some aspects of the UI you have created for this though.
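     For context, hashes stored as extended attributes live in filesystem metadata and can be inspected like this (the path is invented; the exact attribute name depends on the tool):

        # Dump user-namespace extended attributes for a file; attributes like
        # these stay behind when the file is copied onto an NTFS backup disk.
        getfattr -d /mnt/user/Movies/somefile.mkv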
  15. dmacias has a plugin to let you choose. See the last few pages of the NerdPack thread.
  16. Well, sounds like a bug then. Thanks for taking one for the team.
  17. A movie file of 50 billion bits is not unusual. Assuming a single flipped bit, you would have to try flipping each bit in turn and recomputing the MD5 until it matched; on average you would hit the correct checksum halfway through, or 25 billion MD5 calculations to fix that one file. And of course, the time to calculate a single MD5 of a file scales with its size. You could be waiting a long time.
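     A back-of-envelope calculation, assuming one flipped bit in a 50-gigabit (~6.25 GB) file and ~500 MB/s MD5 throughput (both numbers are assumptions):

        awk 'BEGIN {
          tries = 50e9 / 2             # expected attempts: half the bit positions
          secs  = 6.25e9 / 500e6       # ~12.5 s to hash the whole file once
          printf "%.1e hashes, ~%.0f years\n", tries, tries * secs / 31557600
        }'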
  18. Since you have linked back to this post from a few other threads, you should provide a link to the source of your information in this post.
  19. Not much help in this particular plugin, but the UI has a global Help toggle in the upper right that will give you help (if it exists) for any page. Obviously whether unofficial addons such as this provide much help is up to the individual developers.
  20. "That does help, and addresses how Par2 works. What I am still a little confused about is the amount of overhead for checksumming (assuming you use both Par2 and something else to checksum)." A checksum hash file will take up ~150 bytes per file hashed. "A latecomer to this thread, but I did read the thread. I just want to understand in my own words: if I have a 7TB array and it's mostly full, would I need 700GB free at all times for the checksums? Thank you." The 10% number is for par2, and could probably be set lower and still work. Par2 will allow you to reconstruct corrupt or missing data. If you just want to detect corruption but not reconstruct, then MD5 or something else will give you that with just a few bytes per file.
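     To put rough numbers on that (the file count is an assumption for illustration):

        # par2 at 10% redundancy on a 7TB array, vs. plain MD5 hash files at
        # ~150 bytes per file for an assumed 50,000 files.
        awk 'BEGIN {
          printf "par2: %.0f GB\n", 0.10 * 7000
          printf "md5:  %.1f MB\n", 50000 * 150 / 1e6
        }'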
  21. and until it does dawn on you, you don't know how to use dockers. This is what trips everyone up.
  22. "I could do it a couple of ways. I could easily just filter the results page with a dropdown to select 3 days, week, 2 weeks, month, or all, or add a setting to remove entries older than a certain time. Or maybe both. I think the first method would work well because even after a year, at 24x7x52 lines of 120 bytes each, you're only looking at a ~1MB XML file." Both sound good.