
Helmonder

Members
  • Posts: 2,818
Everything posted by Helmonder

  1. I just changed my setup according to the first post; when I click the webui link there is only an empty screen..
  2. What really helps is buying a domain with email forwarding; it costs you tens of euros per year and makes it possible to relay an address at that domain to any physical mailbox you want... So switching providers or preferences no longer makes a difference.. If you choose your last name as the domain (I did) you can also use it for your whole family..
  3. Do I just copy it onto my system? Will it overwrite the "regular" preclear? Currently my 8TB preclear is taking between 1 and 2 weeks to complete in 3 passes; that is just too long a process for me, so I need to choose between running fewer cycles or using the faster preclear...
  4. Which is exactly the reason we preclear.. An error might only occur after several writes.. An unRAID system typically stores data in a write-once-read-many fashion.. Mostly large media folders.. This means that a flaky sector can very well go unnoticed.. Preclearing takes that out of the mix.. Largely.. Not completely.. I have precleared a lot of drives (20+) and only one has shown quickly rising SMART errors; I returned it 5 days later and got a new drive..
  5. An ad blocker is a good thing... That could be it.. I will not ditch Chrome, however..
  6. It gives me something to start, but clicking it does not do anything..
  7. How long does preclearing the 8TB shingled drive take for you? I am now at 97% of my first pre-read and that has already taken almost 70 hours.. A 3-cycle preclear on a 6TB (WD RED) drive takes me a week.. An 8TB should take approx 8 to 9 days in comparison (scaling a one-week 6TB run by 8/6 gives roughly 9.3 days).. but with almost 3 days for just the first pre-read I will never make that..
  8. Loving the preclear plugin.. However.. I do not seem to be able to -start- a preclear with the plugin.. For now I start it in screen through the command line (see the sketch at the end of this list) and the GUI will show progress.. What could be at fault?
  9. I am preclearing one at the moment... The process is REALLY slow... I am also preclearing a 6TB on another system; that one is now doing its second-cycle post-read while the Seagate is still at the first-cycle post-read.. They run on different systems, but the system the Seagate is on is not used at all.. So I do not expect that to be the differentiator...
  10. Found it.. it's /downloads... I changed it in settings.json to what works for me (see the sketch at the end of this list).. I was mixed up because I thought it would be a setting in the docker interface... I can of course still edit settings.json like always..
  11. I know what it is; I just need to know how to make the setting in the docker.
  12. I am successfully testing the docker for Transmission; the one thing I cannot find is how to set the location of the blackhole directory... How would I do that?
  13. Don't use them... even if someone did use them, you could work around it by setting up shares in such a way that media stays together, avoiding the issue..
  14. Try it without the capital... public instead of Public..
  15. In a previous version it would not work if you used caps.. Could that be it?
  16. Working my way through my shares... the tool really works well!
  17. I would not create an auto-delete option.. However, if, with the -v switch off, the list only contains the duplicate and not the original (i.e. only one version), it would be very easy for every USER to put an rm before each line and create a little script (see the sketch at the end of this list).. That would then be the user's own thing.. Very understandable if you do not want to put this in the tool..
  18. The tool absolutely works, but with a big list of files it is quite unworkable (i.e. I am too lazy). So I am taking a different route. I seem to have two disks (both 2TB) that contain the majority of the duplicates. I also have an empty 4TB. I will now move the first 2TB to the 4TB, then move the second 2TB to the 4TB telling it to "replace when file size is different"; that way I should have a de-duplicated combination of the two drives on the 4TB (a command-line sketch of such a merge follows at the end of this list). Whatever remains on the second 2TB are dupes and can be deleted..
  19. Would it be possible to only list the duplicate entry instead of both? With that it would be extremely easy to use an editor to add "rm" in front of each line and simply delete the duplicate entries..
  20. I just started it for my Movies share: ./unRAIDFindDuplicates.sh -i Movies -v It throws one error directly after starting: ./unRAIDFindDuplicates.sh: line 153: [!: command not found The tool appears to run after that, though, and also finds results. The first thing I notice is that it appears to be REALLY fast... It scanned my movies folder (almost 7TB) in a minute. It found only a few dupes, which I have now deleted. I also notice it finds duplicates between the array disks and the cache drive; since files only briefly reside on the cache drive, I figure it might be better to exclude the cache drive? Now scanning series (10.1 TB). That also appears to run quickly (also a lot more dupes ;-) I stopped the scan and restarted it, rerouting the results to a file to analyse later (see the sketch at the end of this list); there are really a lot of dupes here..
  21. Great! My ReiserFS to XFS conversion cycle has almost ended (98% of the last disk); after that I will start your tool to find my dupes..
  22. Does anyone have cache_dirs running on unRAID 6 beta 6? I just moved to v6 and it works great, but I -REALLY- miss cache_dirs... Actually it is only now that I notice how useful it really is...
  23. Anyone have vCenter 5.1 and willing to share?
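
Sketch for post 8 (starting a preclear from the command line inside screen). This is only an illustration: it assumes Joe L.'s preclear_disk.sh is on the flash drive at /boot and that -c sets the cycle count; substitute your own script location and disk device.

    screen -S preclear                        # start a named screen session
    /boot/preclear_disk.sh -c 3 /dev/sdX      # example: a 3-cycle preclear of /dev/sdX
    # detach with Ctrl-A then D; reattach later with: screen -r preclear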
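
Sketch for posts 10-12 (pointing Transmission's blackhole/watch folder somewhere else via settings.json). The watch-dir keys are standard Transmission daemon settings, but the appdata path and the watch folder below are only examples; stop the container first so Transmission does not rewrite the file on shutdown.

    # stop the container, then edit the config on the host, e.g.:
    nano /mnt/cache/appdata/transmission/settings.json
    # and set (container-side paths, examples only):
    #   "watch-dir": "/downloads/watch",
    #   "watch-dir-enabled": true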
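
Sketch for posts 17 and 19 (turning a duplicates-only listing into a delete script by prefixing each line with rm). The file name dupes.txt is hypothetical, holding one duplicate path per line; review the generated script by hand before running it.

    sed 's|.*|rm "&"|' dupes.txt > delete_dupes.sh   # wrap every listed path in: rm "..."
    # inspect delete_dupes.sh carefully, then:
    sh delete_dupes.sh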
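
Sketch for post 18 (merging the two 2TB disks onto the empty 4TB). The post used a copy tool's "replace when file size is different" option; the rsync commands below are only a rough command-line equivalent, the disk numbers are examples, and rsync leaves the sources untouched, so deleting the leftover dupes on the second disk is still a manual step.

    rsync -av /mnt/disk1/ /mnt/disk3/                # first 2TB disk onto the 4TB
    rsync -av --size-only /mnt/disk2/ /mnt/disk3/    # second disk: overwrite only where file sizes differ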
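
Sketch for post 20 (capturing the duplicate report in a file for later review). The -i and -v switches are the ones used in the post; the output file name is just an example.

    ./unRAIDFindDuplicates.sh -i Movies -v > dupes_movies.txt 2>&1
    # the "line 153: [!: command not found" error usually means a bash test is written
    # as [! ... ] instead of [ ! ... ] (a space is missing after the opening bracket)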