itimpi

Moderators · 20,238 posts · 55 days won
Everything posted by itimpi

  1. Moved on, I got 20 drives for free if 2 fail. Meh, once I get done with the others I will re-try it. Thanks for the help. Am I reading that correctly? You signed a deal which got you 20 drives for free if two of your batch failed? I interpreted it as meaning he got the 20 drives for free, so he was not too worried if 2 failed. I wonder which is the correct interpretation?
  2. You may find the unRAIDFindDuplicates.sh script I wrote some time ago to be of use. It can very quickly find duplicates based on filenames and timestamps, and also has the option to do (much slower) binary compares. I regularly use it after moving files around just to check there are no duplicates left behind on different disks.
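     The core idea (this is just a minimal sketch of the same approach, not the actual unRAIDFindDuplicates.sh code) is to list every file's path relative to its disk and report any path that appears on more than one data disk; the real script also checks timestamps and can optionally do binary compares:

         # report relative paths that exist on more than one data disk
         for d in /mnt/disk[0-9]*; do
             ( cd "$d" && find . -type f ) | sed 's|^\./||'
         done | sort | uniq -d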
  3. One thing to note is that I am reasonably sure that if you use the mv command then files and folders remain on the same disks they are currently located on (I think that under the covers mv will just do a rename). This will only be an issue if you thought they might be redistributed by such an action.
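     A minimal sketch of the difference (assuming the rename behaviour described above; the share, folder and disk names are just placeholders):

         # a mv within the user share is just a rename, so each file stays on whatever disk it already occupies
         mv /mnt/user/Media/OldFolder /mnt/user/Media/NewFolder
         # to actually relocate data you have to copy it to the target disk and then remove the original
         rsync -av /mnt/disk1/Media/OldFolder/ /mnt/disk2/Media/OldFolder/ && rm -r /mnt/disk1/Media/OldFolder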
  4. Two step because: 1 - I don't want everyone randomly installing any container off of dockerHub, because I (and the community here at large) do not have the ability to support why doesn't it work, why won't it install, why doesn't it always create a template correctly. Purely an advanced feature. 2 - To force the user to pick and choose from a community-supported app before going to dockerHub at large, and to make it obvious that that's what's happening. If it wasn't a two-step procedure, then the following could happen: search for couch potato; the community apps appear and also the dockerHub results appear; "OK, there's CP from linuxserver, I think I'll install that"; the user chooses the lsio version from dockerHub instead of from the community apps. Now there's going to be a whack of permission issues, etc. because the environment variables in the template are going to be missing. I don't see this operation changing unless I get more requests, because I just don't want to have to deal with the potential issues that could result in the forums. I was not saying it should not be a two-step operation! Currently it is a 3-step operation:
     1. Enable the option to retrieve from dockerHub under Settings->CA->General
     2. Search for an option in CA
     3. Get the additional candidates by pressing the button on the search results to get more from dockerHub
     I was just thinking of eliminating the first step to leave it as a 2-step process. However, if you want to leave it as a 3-step operation then I am personally quite happy, as I have already done the first step in my environment, leaving only the last two when looking for any specific docker. In the case I mentioned it was for ownCloud, and it seemed sensible to use the one that is being maintained by the ownCloud development team rather than relying on an unRAID-specific version. I do agree that going to a 1-step process that did not present the unRAID ones first would be a mistake.
  5. I notice that if you want CA to search dockerHub you have to both enable that option in the Settings and also press a button from the initial search results. Is there a reason that it is a two-stage process, or is it historical? The reason I was asking was that earlier today I could not initially determine why my brother could not see the official ownCloud docker on his unRAID system whereas I had no problem seeing it. Perhaps the Settings option which makes the button to search dockerHub appear is now superfluous?
  6. I would be surprised if that was enough RAM for cache-dirs to be effective with that number of files.
  7. No. It just means that if you say a share is not to be protected by the hashing/verification functionality then it makes no difference whether it is on one disk or multiple disks.
  8. Swapped the ports for 2 drives being pre-cleared, and they both came through with no issues. I have no clue what that invalid integer error is either, but now it is gone simply by swapping a port. I think that you can get errors of that sort if you get read/write errors on the drive you are trying to preclear. The preclear is a bash script, so it has limited error handling capability.
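     This is only a guess at the mechanism, and not the actual preclear code, but it illustrates the kind of failure a bash script runs into when a drive stops responding: a value parsed out of a command's output comes back empty, and a later numeric test then aborts with an integer error rather than a clear message (the exact wording in the preclear script may differ):

         # simplified sketch - parse a SMART raw value for a drive (device name is just an example)
         value=$(smartctl -A /dev/sdX | awk '/Reallocated_Sector_Ct/ {print $10}')
         # if the drive has dropped offline $value is empty, and this test fails with an
         # "integer expression expected" style error instead of a meaningful diagnostic
         if [ "$value" -gt 0 ]; then
             echo "reallocated sectors detected"
         fi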
  9. Another important point is that if unRAID finds a file already exists, I believe that it writes the updated file to that disk regardless of which disk it is on (although I could be wrong about that). That means that if you copy from diskX/sharenameY/fileZ to user/sharenameY/fileZ then you end up destroying the copy of fileZ on diskX. The safe thing to do is to rename the sharenameY folder on diskX so that it no longer corresponds to user/sharenameY. Is that too confusing?
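     A minimal sketch of that safe approach (the disk number, share and folder names are just placeholders):

         # rename the folder on the source disk so it no longer maps to the same user share path
         mv /mnt/disk1/sharenameY /mnt/disk1/sharenameY_old
         # copy from the renamed folder into the user share; the files can no longer overwrite their own source
         rsync -av /mnt/disk1/sharenameY_old/ /mnt/user/sharenameY/
         # only after checking the copy completed successfully, remove the renamed source folder
         rm -r /mnt/disk1/sharenameY_old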
  10. I had a similar problem at one point and it turned out to be due to the fact the disk controller card was not perfectly seated in the motherboard. It would work most of the time, but occasionally a disk would get a write error and then all disks on the controller were missing. I assume that a momentary disconnect due to vibration meant the controller was lost and this caused the issue. This sounds similar to what you are seeing? The seating issue was due to the backplate tending to lift one edge of the connector slightly. Resolving this stopped my problem. As a general rule, any error that takes out all disks on a controller means that you need to look at the controller rather than the individual drives.
  11. I know that in my MB BIOS there is a setting (that defaults to enabled) as to whether or not I see messages from add-on boards during the POST + boot sequence. I wonder if the OP has a similar setting that is not enabled?
  12. If you want to be able to see shares for the individual disks then you need to turn on disk shares under Settings->Global Share settings. By default they are disabled to reduce the chance of users experiencing data loss by mixing disk shares and user shares in the same copy command.
  13. Difficult to tell without more information! The fact the disk gets disabled means that a write to it failed but that could be for a wide variety of reasons. I would suggest that you use the Tools->Diagnostics option and post the resulting zip file.
  14. I would assume that the post-read for a cycle also acts as the pre-read for the next one?
  15. I have noticed that I only seem to be able to click on a drive in the Unassigned Devices tab to get its SMART attributes if I first mount it. Is there any reason why I cannot get them for drives that are not mounted?
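     In the meantime the attributes can normally be read for an unmounted drive from the command line (the device name is just an example, and this assumes the controller passes SMART data through):

         smartctl -a /dev/sdX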
  16. Great idea, but can I suggest that you change the message slightly to say that the unRAID server should be whitelisted rather than the specific page? Also, if feedback suggests this is working well, maybe it should be made a popup so it is more 'in-your-face' and harder to ignore?
  17. Worth pointing out that you can set over-rides for individual disks by going to the settings for that disk (in case you also missed those).
  18. No need to set anything. Since preclear wipes any existing file system anyway it does not matter what is currently on the drive.
  19. Lovely, it's good to know there is such an option in SMB, that would probably resolve the IP camera issue. Still, I think it would be very nice to also be able to do some kind of IMG shares, as a quick and painless implementation of quotas for external users... One problem I would see with the img approach would be that the full space for the maximum quota for each user would need to be allocated up front. Something more dynamic would seem desirable unless you expect each user to use up their full quota.
  20. I was moving files from one disk to another today (using rsync) with the service enabled. I started getting lots of error messages on the directly attached monitor acting as a console (but not in the syslog) about sha256sum being unable to stat each temporary file that rsync creates while doing a transfer. I presume this was because rsync had renamed the file to its final name having completed the transfer, so sha256sum could no longer find the file it was looking for. Is this to be expected? I disabled the service and that stopped the messages, but thought I should report it in case it points to some other deeper underlying issue. It would seem that it would be a good idea to suppress this type of error message to avoid cluttering up the console output (perhaps by redirecting it to /dev/null), but I guess this could have some undesirable side-effects for other types of errors?
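     For reference, this is the behaviour being described, plus one possible way of avoiding it (assuming the messages really are triggered by rsync's temporary files; the paths are just placeholders):

         # rsync normally writes to a hidden temporary file (e.g. .filename.AbC123) and renames it
         # once the transfer completes, which is the name the hashing service then fails to stat
         rsync -av /mnt/disk1/Movies/ /mnt/disk2/Movies/
         # --inplace avoids the temporary files altogether, at the cost of leaving a
         # partially written file behind if the transfer is interrupted
         rsync -av --inplace /mnt/disk1/Movies/ /mnt/disk2/Movies/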
  21. I think that drive has problems. The Pending Sector count needs to be 0 for the drive to be used successfully with unRAID and that SMART report for the drive shows 1453 (it has probably gone up since). That is probably why the count shown is incrementing so slowly - the drive is continually retrying to read sectors, and eventually giving up and marking them as 'pending' to indicate a read failure. I bet if you looked at the syslog it will be filled with read errors for that drive.
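     A quick way to confirm both points from the console (the device name is just an example):

         smartctl -A /dev/sdX | grep -i pending      # the Current_Pending_Sector raw value should be 0
         grep -i 'sdX' /var/log/syslog               # read errors for the drive will show up here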
  22. In theory you can start again with the post-read, but that leaves open the question of why it failed in the first place! Things to do before trying again would be:
     1. Run the Tools->Diagnostics option in case it has any information relating to why the preclear failed. You could post the ZIP file here if you want anyone else to look at it.
     2. See if you can obtain a SMART report for the drive (there should be a copy in the diagnostics file, but doing it for just this drive will show whether it may have dropped offline). Also, the SMART report might indicate if the drive is having problems.
     3. Look in the preclear reports to see if there is anything useful about the earlier stages.
     4. Check the cabling to the drive.
  23. The process seems to be CPU bound, so you probably do not want to run more parallel tasks than you have cores in your machine. Certainly I see that as soon as I increase the number of parallel tasks above the number of cores I have, the ETA for all the tasks starts getting longer.
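     If in doubt, the core count can be checked from the console:

         nproc      # reports the number of processing units available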