Everything posted by JustinChase

  1. I just finished a preclear on a new drive. There are a couple of items showing "near threshold". Should I be concerned?

     ** Changed attributes in files: /tmp/smart_start_sdi  /tmp/smart_finish_sdi
         ATTRIBUTE                NEW_VAL  OLD_VAL  FAILURE_THRESHOLD  STATUS       RAW_VALUE
         Raw_Read_Error_Rate      118      100      6                  ok           179537912
         Spin_Up_Time             98       92       0                  ok           0
         Spin_Retry_Count         100      100      97                 near_thresh  0
         End-to-End_Error         100      100      99                 near_thresh  0
         High_Fly_Writes          99       100      0                  ok           1
         Airflow_Temperature_Cel  75       74       45                 ok           25
         Temperature_Celsius      25       26       0                  near_thresh  25
     No SMART attributes are FAILING_NOW
     0 sectors were pending re-allocation before the start of the preclear.
     0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
     0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
     0 sectors are pending re-allocation at the end of the preclear, the number of sectors pending re-allocation did not change.
     0 sectors had been re-allocated before the start of the preclear.
     0 sectors are re-allocated at the end of the preclear, the number of sectors re-allocated did not change.
  2. I noticed lots of empty folders while going through this process myself. I used this to remove the empty directories, and it seemed to work fine:

         find . -depth -type d -exec rmdir {} + 2>/dev/null

     I also tried this, which also seemed to work fine:

         find . -type d -empty -delete
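     If you want to check first, this should list the empty directories without deleting anything (just a dry-run habit of mine, using the same standard find options):

         find . -type d -empty -print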
  3. Thank you for this! It was very helpful to me in finding and merging/deleting my duplicates after a disk problem. It took me quite a while to open the different disks to move the files around, but after 3 or 4 runs of this utility, it says I've got them all now. Nice! Now I'm running Joe's script to see what I have that may have different names. Fun stuff.
  4. "I would start a Preclear but skip the Pre-read (use preclear_disk.sh -W)"

     It took 48 hours to finish...

     ========================================================================1.15
     == invoked as: ./preclear_disk.sh -A -W /dev/sdc
     ==  WDCWD20EARS-00MVWB0   WD-WCAZA2130725
     == Disk /dev/sdc has been successfully precleared
     == with a starting sector of 64
     == Ran 1 cycle
     ==
     == Last Cycle's Zeroing time   : 28:50:44 (19 MB/s)
     == Last Cycle's Total Time     : 47:41:53
     ==
     == Total Elapsed Time 47:41:53
     ==
     == Disk Start Temperature: 35C
     ==
     == Current Disk Temperature: 39C,
     ==
     ============================================================================
     ** Changed attributes in files: /tmp/smart_start_sdc  /tmp/smart_finish_sdc
         ATTRIBUTE                NEW_VAL  OLD_VAL  FAILURE_THRESHOLD  STATUS       RAW_VALUE
         Raw_Read_Error_Rate      189      200      51                 ok           21640
         Reallocated_Sector_Ct    142      184      140                near_thresh  1108
         Seek_Error_Rate          100      200      0                  ok           0
         Temperature_Celsius      111      115      0                  ok           39
         Reallocated_Event_Count  1        1        0                  near_thresh  1108
         Current_Pending_Sector   3        1        0                  near_thresh  64234
     No SMART attributes are FAILING_NOW
     65021 sectors were pending re-allocation before the start of the preclear.
     64230 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
     64234 sectors are pending re-allocation at the end of the preclear, a change of -787 in the number of sectors pending re-allocation.
     317 sectors had been re-allocated before the start of the preclear.
     1108 sectors are re-allocated at the end of the preclear, a change of 791 in the number of sectors re-allocated.
     ============================================================================

     Looks like I've got myself a paperweight now, no?
  5. I didn't see that until you mentioned it. It doesn't look like he edited his post, so I must be blind. I only found the -n option, which, it seems, skips both the pre- and post-read. Once this run finishes, if the drive still looks like it might work, I'll run it again with the -W option as one more test. Thanks for pointing it out for me.
  6. Thanks, I'll re-run the preclear (I'll have to find the skip-pre-read option again), and will post back if I have any more questions. No problem about your tone. Sometimes I'm the straw that broke the camel's back, and I get the brunt of pent-up frustration. I didn't take it personally. I do appreciate the help. Thanks again.
  7. "Edit: I apologize, I've just noticed elsewhere that you have included a syslog. [respectful chide mode on] It was hard to know how to respond. You have a drive that had been red-balled and was now running extremely slowly, and yet it does not appear that you thought to check or include a SMART report and the syslog... [chide mode off]"

     I understand your frustration, but since this is a fresh install, only being used to preclear, and the preclear was/is still running, I don't know of any way to get a syslog at this time. I suppose I could stop the preclear and get a syslog, and considering how slowly it's moving, I don't suppose I really lose much by doing this. Currently, the pre-read is only 2% complete after 4 hours. If that's as fast as this drive can now function, it's useless to me, so I could just trash it. I posted in the hope that the thorough information/history I included might provide enough clues to be useful. I also mentioned in my follow-up post that I may need to learn how to do a SMART report, because I don't know how to do that right now.

     I cancelled the preclear. The syslog and short SMART test log are attached; to my uneducated eye, it doesn't look good... Thanks for your help.

     syslog.txt smartshort.txt
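     For anyone else who needs to grab these on a stock unRAID box, something along these lines should work from the console (a rough sketch; /dev/sdX is a placeholder for your drive, and /boot is the flash drive on unRAID):

         cp /var/log/syslog /boot/syslog.txt           # copy the live syslog onto the flash drive
         smartctl -t short /dev/sdX                    # start a short SMART self-test (takes a couple of minutes)
         smartctl -a /dev/sdX > /boot/smartshort.txt   # full SMART report, including the self-test log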
  8. Initial signs are that this was not the problem. It started out at 27.7 MB/s, quickly dropped to 8.6, and now I'm waiting to see the next update. This seems to be the same as the last time I ran it. I wonder if this might do any better if I formatted it in Windows, then tried preclearing it. I may need to see how to run a SMART report on it, to see if it's 'okay' per SMART.
  9. I'm trying to preclear a drive that had red-balled on me, but I suspect it was just due to a cabling issue. I've since replaced that drive with a larger one, and am now trying to preclear the drive that 'failed' to confirm I can add it back to the array. I've put a stock version of unRAID 6beta9 on a newly formatted USB stick, and put it and the drive into a different computer for this process. I've booted and started the process, and it seems to have started fine, but the onscreen info shows it going VERY slowly. Current stats are:

         Disk Pre-Read in progress: 0% complete
         ( 4,201,881,600 bytes of 2,000,398,934,016 read ) 3.7 MB/s
         Disk Temperature: 36C, Elapsed Time 1:02:50

     By my calculations, it's only 0.2% done, and will take about 500 hours to finish just the pre-read (rough arithmetic below). Obviously something is wrong (3.7 MB/s is not good). So, is this an indication that the drive is just bad after all, or is there possibly something else at play here? Ideas on how to proceed?
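     The arithmetic, for anyone checking (the 3.7 MB/s on the status line is instantaneous; the average rate since the start is far worse):

         # bytes read so far / elapsed seconds (1:02:50 = 3770 s) = average rate
         echo $(( 4201881600 / 3770 ))        # 1114557 B/s, about 1.1 MB/s
         # total bytes / average rate = estimated seconds for the whole pre-read
         echo $(( 2000398934016 / 1114557 ))  # about 1794792 s, roughly 498 hours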
  10. It seems setting the locale is another important thing that needs to be done in Docker containers; see the discussion here... http://lime-technology.com/forum/index.php?topic=34168.msg317791#msg317791

      Also, do we need/want versioning for Docker containers? If needo needs/wants to add/update the locale in his containers, the versions I have installed/running now would benefit from updating, but how would one know that? Perhaps versioning can help the Docker plugin know we need updates, and prompt us to do so.
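      A minimal sketch of the usual locale fix, assuming a Debian/Ubuntu-based image (the exact lines in the linked thread may differ, and en_US.UTF-8 is just an example locale):

          RUN locale-gen en_US.UTF-8
          ENV LANG en_US.UTF-8
          ENV LC_ALL en_US.UTF-8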
  11. As a general principle, I would suggest never permanently transcoding files to a lower quality based on the hardware/software you currently own. Both improve constantly, and someday you won't need a lower-quality file to do what you want. However, once transcoded, you can't get the higher-quality file back without re-ripping or re-downloading it. I would suggest instead that you find a way to transcode on the fly to a device today, and keep the higher-quality file for the future. Just my opinion.
  12. Sounds great, I'm looking forward to seeing a working solution.
  13. Okay, has anyone tried making a Docker container for MakeMKV? I know there is a Linux version, but it looks like it needs to be compiled, and they usually release a new version about once a month, so it will probably need 'constant' maintenance. I'm not sure whether building one will be easy or worthwhile, but I'd really like to move the work of creating my MKV files onto my unRAID box, and not have to keep doing it on my laptop over wifi, which takes forever. Maybe this helps someone get started... http://www.makemkv.com/forum2/viewtopic.php?f=3&t=2047
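      A very rough, untested sketch of the kind of Dockerfile that might work (the version is a placeholder you'd bump each release, and the dependency list is a guess for an Ubuntu base; MakeMKV ships as two source tarballs, oss and bin):

          FROM ubuntu:14.04
          ENV MAKEMKV_VERSION 1.8.13
          # build dependencies (guesswork; configure will complain about anything missing)
          RUN apt-get update && apt-get install -y build-essential pkg-config wget \
              libssl-dev libexpat1-dev libavcodec-dev libgl1-mesa-dev
          RUN wget http://www.makemkv.com/download/makemkv-oss-${MAKEMKV_VERSION}.tar.gz \
                   http://www.makemkv.com/download/makemkv-bin-${MAKEMKV_VERSION}.tar.gz && \
              tar xzf makemkv-oss-${MAKEMKV_VERSION}.tar.gz && \
              tar xzf makemkv-bin-${MAKEMKV_VERSION}.tar.gz
          # oss builds with configure/make; bin is a plain make
          RUN cd makemkv-oss-${MAKEMKV_VERSION} && ./configure && make && make install
          RUN cd makemkv-bin-${MAKEMKV_VERSION} && make && make install
          # makemkvcon is the command-line ripper; point it at a disc or ISO mounted into the container
          CMD ["makemkvcon", "info", "disc:0"]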
  14. "I totally disagree. Docker is HUGE and most unRAID users are going to use it over anything I have done with Xen / KVM. Virtualization certainly has its place, but for most unRAID users, Docker is the way to go. Slackware and a root RAM file system are what unRAID is going to stick with for the foreseeable future, and thankfully, with unRAID choosing to install / use Docker... it is a great solution and solves some of the issues in the current unRAID environment. Linux dorks like me have come and gone and unRAID didn't miss a beat. However, it's the WebGUI guys (you included) that are most valuable / needed. It's one thing that I can make unRAID do X, Y and Z, but without people like you... the average unRAID user doesn't have the experience / knowledge / comfort level to drop down to a Linux command line to use the things I share / create effectively / easily.

      My parting advice to you (and needo): it's VERY IMPORTANT that both of you go learn about Linux control groups and work with unRAID to implement them into what you both are doing. They provide a lot of very useful metrics, but they also help to ensure that each container gets its fair share of memory, CPU and disk I/O, and, more importantly, that a single container cannot bring the system down by exhausting one of those resources. Also, since Docker 1.0 just came out and a lot more people are now using it, there will be A LOT of releases over the next several months. They have already released Docker 1.1.0, and 1.1.1 is about to be released. There are many bug fixes and new features that you and needo will want to take advantage of as this moves forward. In your case, they are constantly working on the Docker APIs, and there will be many updates / changes / additions in the next few months. Assuming unRAID rolls out betas and updates Docker to the latest stable release, that will allow you to share / show even more, and give us the ability to make Docker easier to support / manage in your WebGUI. You will see they added some things for start / stop, a pause and a sockets parameter. Thanks again for all that you and needo have done. I personally have used what both of you have developed and learned a great deal about Docker in the process."

      And I disagree with your disagreeing. You have been very helpful, and have moved unRAID forward a lot with your ideas, suggestions, help and 'insistence'. Your tone is not one of 'friendliness' and certainly puts people on guard or on edge, which is not conducive to your point being taken and implemented, but the content is so good that it has still made an impact. Your knowledge is of the 'inner workings' and not so much of the 'usability' bent, I agree, but it's your input that allows people like needo and gfjardim to do what they do, because the inner workings get put into place in the first place, allowing them to 'pretty it up' for the rest of us. I wish I had a magic button I could press to 'un-grumpify' your posts, so the information was presented with friendliness and others would not take offense where (I think) you don't intend any. Either way, we're all adults here, and I hope unRAID continues to improve for everyone's sake. Your input can certainly help with that, but I can also understand not wanting to contribute after being told to leave. Now, something we can all feel good about...
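      For reference, Docker already exposes those cgroup controls through docker run flags; a quick example of the limits being described (the image name is just a placeholder):

          # cap the container at 512 MB of RAM and give it a reduced share of CPU time
          docker run -m 512m --cpu-shares=512 some/image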
  15. Yeah, but she's already your girlfriend, so what difference does it make now?
  16. Thanks again, grumpy. I've seen you mention a couple of smaller Linux distros for use with Docker, but I didn't realize they were specifically designed for Docker. If you were going to create a Docker container for yourself, to run SABnzbd, what would you, personally, use as a base?
  17. Which is why I was asking why Lonix didn't like it as a base. It sounds like Phusion has 'resolved' most or all of the problems with Ubuntu as a base, other than being 'too big'. Whether we can or need to shave a few hundred MB off the size of the base image can be debated, but for me, a few hundred MB on a 500 GB drive isn't something I'm worried about. His comments indicated that he didn't like things specific to what Phusion did, and I was just curious which specific things he disagreed with, or doesn't like, about the Phusion image.
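      For what it's worth, actually using it as a base is about as small as Dockerfiles get (a minimal sketch; the tag is a placeholder for whatever release is current, and the my_init entrypoint is what Phusion's baseimage-docker docs call for):

          FROM phusion/baseimage:0.9.11
          # baseimage-docker expects its own init system to run as PID 1
          CMD ["/sbin/my_init"]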
  18. I didn't read it all, but it sure seems like a long article on why SSH isn't needed. It doesn't argue that SSH is bad, nor does it raise any other 'problems' with what Phusion is doing. Even if SSH isn't needed, it doesn't seem to be 'wrong' to enable it, so I still don't see the 'problem' with using Phusion.
  19. Not to question your opinion, but for clarification only: why do you not trust/agree with his choices, specifically? Since I'm still pretty incompetent at all this, his explanations sounded reasonable/solid to me. What do you disagree with, and what would you recommend instead? Just curious. Thanks.
  20. Any updates from either of the main contributors on this GUI?
  21. This is the report for the preclear that just finished... It looks like this drive is okay and can be put into production. Is that an accurate assessment? You suggested earlier that a power supply issue might have caused the problems with the preclear of the 1TB drive, which I suppose might also have been the cause of the re-allocated sectors on this drive. I'm betting you're going to suggest I do one more preclear to confirm there are no issues, but I'm out of space and don't want to wait another 35 hours to put this disk to use, unless you really think it's necessary. Is it?
  22. "That is what I'd do."

      Well, I don't generally like the idea of using adaptors or splitters to power the drives, and the SATA cables on my current power supply only have 3 SATA connectors each, but I have 4 drives in each rack, so it's a bit convoluted to connect power to all the drives as it sits. So... I bought a new 650W modular supply with 2 cables of 4 SATA connectors each, and am ordering a third 4-connector cable, which will let me connect all 12 drives directly to the PSU without any adaptors. It will also provide more power, which I doubt I need, but it was the only PSU with the cabling I needed at the best price, and it's GOLD certified, so it should be more efficient. Once the new PSU arrives, I'll swap out the old one, get all the drives connected, and see if that helps my issues. I doubt this will resolve everything, but it will eliminate the PSU as a potential source of the problem, and will make future usage easier for me too. I'm running a preclear on the 3TB drive now, and hope that this time it finishes without problems, since I REALLY need to move some files to make space on the existing drives soon.
  23. Sadly, sectors still need to be re-allocated. I guess I'm just going to have to order a new drive.