trurl

Moderators
Posts: 44,093
Days Won: 137

Everything posted by trurl

  1. dmacias has a plugin to let you choose. See the last few pages of the NerdPack thread.
  2. Well, sounds like a bug then. Thanks for taking one for the team.
  3. A movie file of 50 billion bits is not unusual. Assuming a single flipped bit, you would have to flip each candidate bit in turn and recompute the MD5 of the whole file until the checksum matched; on average that takes half the positions, or 25 billion MD5 calculations to fix that one file. And of course, the time to compute a single MD5 grows with the size of the file. Could be waiting a long time. (See the sketch after this list.)
  4. Since you have linked back to this post from a few other threads, you should provide a link to the source of your information in this post.
  5. Not much help in this particular plugin, but the UI has a global Help toggle in the upper right that will give you help (if it exists) for any page. Obviously whether unofficial addons such as this provide much help is up to the individual developers.
  6. "That does help, and addresses how Par2 works. What I am still a little confused about is the amount of overhead for checksumming (assuming you use both Par2 and something else to checksum). A checksum hash file will take up ~150 bytes per file hashed." "A latecomer to this thread, but I did read the thread. I just want to understand in my own words: if I have a 7TB array and it's mostly full, would I need 700GB free at all times for the checksums? Thank you." The 10% number is for par2, and could probably be set lower and still work. Par2 will allow you to reconstruct corrupt or missing data. If you just want to detect corruption but not reconstruct, then MD5 or something else will give you that with just a few bytes per file. (See the overhead sketch after this list.)
  7. and until it does dawn on you, you don't know how to use dockers. This is what trips everyone up.
  8. "I could do it a couple of ways. I could easily just filter the results page with a drop-down to select 3 days, week, 2 weeks, month, all, or a setting to remove those older than a certain time. Or maybe both." I think the first method would work well because even after a year at 24x7x52 lines of 120 bytes each you're only looking at about a 1MB xml file. Both sounds good. (A rough pruning sketch follows this list.)
  9. I can see possibly wanting to schedule this hourly. Would it be possible to automatically delete results that are older than a day or two?
  10. Can you ping raw.githubusercontent.com from your server (ping raw.githubusercontent.com -c 3)? On the Dashboard under System Status, what do you have for flash : log : docker?
  11. Probably because it would show up when flash corruption was detected.
  12. "I was just using the terminal emulator in Krusader to try to run a package installer, but am not having any luck. Yes, I am on UnRAID 6.1.4." I don't use this, but is Krusader part of the pytivo docker, or is it a separate docker? If it is a separate docker, it cannot access anything in the pytivo docker.
  13. How about something along these lines? docker exec PlexMS du -sh --exclude=config --exclude=media
  14. "I made another update and now you can just run speedtest-xml. I added the script to /usr/sbin/ and added logging to speedtest.php." Thanks. Just finished running from the command line and the results showed up in the page, so looks like it works. Now to create .cron. "I went ahead and added a settings page for creating cron jobs under Settings/Scheduler/Speedtest Settings and moved the Test and Results page to Tools/Speedtest." Very nice.
  15. "How did your family get along before docker?" The dockers I have been running 24/7 for many, many months without this problem are in my sig. But I have had these applications working for years before I ever built my unRAID server. They were running on my PC, saving data either locally or to some other NAS. If I needed to, I could go back to that approach while I tried to troubleshoot individual dockers on my unRAID.
  16. See the 1st post in the original preclear script thread.
  17. To some extent I am judging this based on who isn't reporting the problem rather than how many people are viewing the thread. I'm sure many people view threads just to see if they can help. A quick skim back over the whole thread suggests that those posting that they have the problem and those trying to help are roughly equal in number. I'm not committed to the view that this isn't a defect; I just haven't seen any evidence that it is. Simplify your setup by starting with a new docker.img and a single docker, and then add things back a little at a time with a period of observation at each step. Maybe start with the docker you need the most, or the one you suspect the most.
  18. "I made another update and now you can just run speedtest-xml. I added the script to /usr/sbin/ and added logging to speedtest.php." Thanks. Just finished running from the command line and the results showed up in the page, so looks like it works. Now to create .cron.
  19. You haven't really given us enough information to reproduce. Most specifically, you haven't given us any details about how you have any of those dockers configured. I suspect your problem is related to the way you have configured one or more of your dockers, and it is likely the same for most in this thread. Not really a defect report, so I have merged it back into this thread where you can give us more details.
  20. OK, I'm sure you're going to tell us soon, but the latest release notes mention a stand-alone script we can cron.
  21. Skipping pre-read makes sense if you are re-clearing a disk you already have confidence in.
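
A minimal sketch for the single-bit-flip discussion in item 3, assuming exactly one bit was corrupted and that you already know the file's original MD5 (both assumptions are mine, not from the thread). It illustrates why the expected cost is about half the bit positions, with each attempt re-hashing the whole file:

    import hashlib

    def md5_of(data) -> str:
        return hashlib.md5(data).hexdigest()

    def recover_single_bit_flip(data: bytearray, expected_md5: str):
        # Try every bit position; on average about half of them are tested
        # before the checksum matches, and each attempt re-hashes the whole file.
        for byte_index in range(len(data)):
            for bit in range(8):
                data[byte_index] ^= 1 << bit      # flip one candidate bit
                if md5_of(data) == expected_md5:
                    return byte_index, bit        # found the corrupted bit
                data[byte_index] ^= 1 << bit      # flip it back and keep looking
        return None

For a 50-billion-bit movie that works out to roughly 25 billion full-file MD5 computations on average, which is why par2-style recovery data is the practical option if you actually want to repair files.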
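A quick back-of-the-envelope for the overhead question in item 6, using the figures quoted there (10% for par2, ~150 bytes per file for a checksum manifest); the file count is a made-up example, not from the thread:

    # Rough overhead comparison for a mostly full 7TB array (example numbers).
    array_bytes = 7 * 10**12            # 7TB
    par2_redundancy = 0.10              # the 10% figure discussed in the thread
    files = 20_000                      # hypothetical number of files on the array
    bytes_per_hash_entry = 150          # ~150 bytes per file in a hash manifest

    par2_overhead = array_bytes * par2_redundancy    # ~700GB, can reconstruct data
    md5_overhead = files * bytes_per_hash_entry      # ~3MB, can only detect corruption

    print(f"par2 at 10%: {par2_overhead / 10**9:.0f} GB")
    print(f"MD5 manifest: {md5_overhead / 10**6:.0f} MB")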
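And for the result-pruning idea in items 8 and 9, one way to drop entries older than a cutoff from a small results XML file. The <result> element and its epoch "timestamp" attribute are hypothetical, since the actual file format isn't shown in these posts:

    import time
    import xml.etree.ElementTree as ET

    def prune_results(path: str, max_age_days: float = 2.0) -> None:
        # Remove <result> entries whose timestamp attribute is older than the
        # cutoff, then rewrite the file in place.
        cutoff = time.time() - max_age_days * 86400
        tree = ET.parse(path)
        root = tree.getroot()
        for result in list(root.findall("result")):
            if float(result.get("timestamp", 0)) < cutoff:
                root.remove(result)
        tree.write(path)

    # prune_results("results.xml", max_age_days=2)   # hypothetical file name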