
gfjardim

Community Developer
  • Posts: 2,213
  • Joined
  • Last visited
  • Days Won: 1

Everything posted by gfjardim

  1. You can issue a PR with a new xml to me, but I'd rather you fork it so I don't have to worry about it. And you're not alone in missing the PMs; a few other authors missed them too. Can you send me the message again explaining how this works?
  2. There are currently two versions of preclear: the current stable version, numbered 2015.11.18, and a testing beta version released today. The first one uses Joe L.'s script to preclear; the new one uses a new script I wrote, but it's still in beta. I hope you realize that for CA to display the beta version, you're going to have to fork the plugin repo (sent you a PM a while back) https://github.com/Squidly271/gfjardims-plugin-repository/ (you should probably do it anyway, as you can then take advantage of what CA offers) I honestly missed that. Do I need to fork everything, or just add a new xml file to it?
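     For reference, a minimal sketch of the "PR with a new xml" route discussed above, assuming the repo has already been forked on GitHub (the placeholder user and xml file name are hypothetical):

       git clone https://github.com/<your-user>/gfjardims-plugin-repository.git
       cd gfjardims-plugin-repository
       cp /path/to/my-template.xml .        # hypothetical new template xml
       git add my-template.xml
       git commit -m "Add my-template.xml"
       git push origin master
       # then open a pull request against Squidly271/gfjardims-plugin-repository on GitHub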
  3. There are currently two versions of preclear: the current stable version, numbered 2015.11.18, and a testing beta version released today. The first one uses Joe L.'s script to preclear; the new one uses a new script I wrote, but it's still in beta.
  4. That's expected, since the beta plugin is not compatible with Unassigned Devices. Please go to Settings > Preclear Disk Beta
  5. For me only while writing, no issues when it's reading. Pretty sure it's the sync command hanging the webui. When it hangs stopping the array, the status line on the WebGUI displays "syncing..."
     New update 2016.03.20a pauses the preclear while starting/stopping the array. Please let me know if it helps.
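     For illustration only (not necessarily how the plugin implements the pause), a running dd can be suspended and resumed without killing it by signalling it; the device name below is hypothetical:

       # pause the dd that is writing to the disk while the array starts/stops
       pkill -STOP -f "dd .*of=/dev/sdX"
       # ... array start/stop completes ...
       # resume the preclear afterwards
       pkill -CONT -f "dd .*of=/dev/sdX"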
  6. For me only while writing, no issues when it's reading. Pretty sure it's the sync command hanging the webui. When it hangs stopping the array, the status line on the WebGUI displays "syncing..."
     Yep, I had to kill the preclear session to be able to start the array:
       Mar 20 11:13:05 Servidor emhttp: shcmd (1466): sync
       Mar 20 11:15:32 Servidor sshd[31777]: Accepted password for root from 192.168.0.60 port 64225 ssh2
       Mar 20 11:18:05 Servidor emhttp: shcmd (1467): mkdir /mnt/user0
     I tried to use nice to schedule the dd command at a lower priority, but that wasn't enough... I will try to add a pause function to the script, if possible.
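     As noted above, nice mainly lowers CPU priority; a sketch of also lowering the I/O priority of the zeroing dd (the device name and block size are examples, and this is not the plugin's actual invocation):

       # start the write with low CPU and best-effort lowest I/O priority
       nice -n 19 ionice -c 2 -n 7 dd if=/dev/zero of=/dev/sdX bs=2M
       # or demote an already running dd to the idle I/O class
       ionice -c 3 -p $(pgrep -f 'dd .*of=/dev/sdX')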
  7. For me only while writing, no issues when it's reading. Pretty sure it's the sync command hanging the webui.
  8. That sounds like a good idea, because stopping the array during a preclear sometimes also takes a long time. Does this happen only on writes, or on reads too?
  9. Evaluating the log you posted, this is an unRAID problem; nothing in the plugin code has blocked the webui. Maybe it's a sync command that takes ages because the drive is being written to. What I could do is add a "pause" function to the read/write operations of the preclear disk script that could be triggered while the array is starting/stopping. What do you think?
  10. Just updated with more verbose logging. Please stop the current preclear operations using this command:
        tmux kill-session -t "preclear_disk_sdX"
      Then start a new preclear session, wait until it hangs the webui, and send me a copy of the /var/log/preclear.disk.beta.log file.
  11. Don't have UD installed. Hum, that's new. Every time I saw this behavior, it was due to UD probing disks being precleared. Preclear Disk was designed to probe display info only once and temperature every 5 minutes. I'll update the beta plugin to be more verbose about execution timing; let's see what is hanging the webui. I'll let you know when and how to test it, ok?
  12. I assume you all have Unassigned Devices installed. UD caches very little of the information it presents. When you start a clear operation, the hard drive starts to be hammered very hard, and some operations, like SMART probing, take a long time to be processed. The workaround would be for UD to start caching most of its info, but the last time I tried that, some things got borked. It's not a trivial job.
  13. Hi guys. Since unRAID 6.2beta19, we have a workaround for that odd behavior that prevents Docker containers from accessing disks mounted after the Docker service starts. It's called slave mounts, and it permits the host to share its mounts with Docker containers. To enable the workaround, you must set the "/mnt/disks" path's mode to RW/Slave, like this: After saving the container, you should be able to see every disk mounted using Unassigned Devices. A special thanks to Lime Technology for being open-minded about this.
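     At the docker command line, that RW/Slave setting corresponds to something like the bind-propagation option below (the image name is hypothetical, and the exact flags the unRAID webgui generates may differ):

       docker run -d --name my-container \
         -v /mnt/disks:/mnt/disks:rw,slave \
         my-container-image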
  14. Not every disk gets added, only those for which you change any of the configuration options, like Auto Mount/Share/Script.
  15. #############################################################################################
      # unRAID Server Pre-Clear of disk /dev/sdi
      #
      # Model Family:   Seagate Desktop HDD.15
      # Device Model:   ST4000DM000-1F2168
      # Serial Number:  S301HZ0B
      # User Capacity:  4,000,787,030,016 bytes [4.00 TB]
      # Firmware:       CC54
      # Disk Device:    /dev/sdi
      #
      # Type Yes to proceed:
      #############################################################################################

      #############################################################################################
      # unRAID Server Pre-Clear of disk /dev/sdi
      # Cycle 1 of 1, partition start on sector 64.
      #
      # Step 1 of 4 - Zeroing the disk:                       [@ 140 MB/s] SUCCESS
      # Step 2 of 4 - Writing unRAID's Preclear signature:                 SUCCESS
      # Step 3 of 4 - Verifying unRAID's Preclear signature:               SUCCESS
      # Step 4 of 4 - Post-Read verification:                 [@ 135 MB/s] SUCCESS
      #############################################################################################
      # Cycle elapsed time: 16:06:17 | Total elapsed time: 16:06:17
      #############################################################################################

      The good news is that apparently I found the bug preventing >2TB disks from being precleared. It took 16 hours for a 4TB disk, not bad. With a pre-read step, it would take 24 hours per cycle. Plugin updated.
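     For context, the zeroing step in the report above boils down to streaming zeros over the whole device, conceptually something like the line below (a sketch only, with the device from the report; the actual script also writes and verifies the preclear signature and tracks progress and speed):

       dd if=/dev/zero of=/dev/sdi bs=2M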
  16. That's not a show stopper, we could always add that to Docker Manager if necessary.
  17. Yes, I know it's a lot of work, but the current format plus CA's own elements is hell to support, to begin with. Plus, they don't survive a Docker Manager edit, so user templates lack a lot of info. A possible future step is to include something like CA in unRAID itself, so we need to make the changes right now to make it future-proof. And if you don't want to change your templates, you don't have to; Docker Manager will remain backward-compatible with them, so no muss, no fuss there. In a collective of developers like LS, some tasks can always be delegated, if I'm not mistaken.
  18. Because I want to make things easier for both users and developers. Why force the manual edit of a template if we can make that available from within the webgui? The "Dry Run" will expose the XML, which can be copied/pasted into any text editor directly, so there will be no need to hand-edit lots of code... I'll add a dropdown menu for all those categories you have in place in the Categorizer plugin, and add a Status:Beta|Stable category there. Another thing I'm considering implementing is the ability to update the current user template with the default template on GitHub. It will work like this: I'll add a <Template> element with the template URL. Once you hit Edit, it will download the default template, add any "Config" element that is new, and update all attributes like Name, Mode, etc... This way, we can keep things in sync between developers and users. What do you think?
  19. Squid, it's time to add things to Docker Manager. My two cents is that we could ditch <Beta> (add that to the <Categories> element), move <Date> into <Version> (YYYY.MM.DD format, like it's done in plugins) and adopt <Changes> as a history log. Not sure about <License> or <Project>, tho. Please enlighten me about <MinVer> and <MaxVer>. How are they important?
  20. Thank you both for your testing! I only tested on <2TB disks, so it's likely something is wrong with larger disks. Well, the good part is that we know the verification part is working fine. I'll keep you posted when I get this fixed, ok? Cheers.
  21. I do understand Joe L.'s position, and I don't blame him. At the end of last year I got really busy, and when I got back three months later a parallel version of Unassigned Devices was being developed by dlandon. I was really mad at him at first, but then I realized he was doing a better job supporting it than I was, so I GPL'ed the code and let him take care of it. In fact, now I thank him for doing that work for me. I do agree. In a community like ours, a lot of users want to help, sometimes with feedback, sometimes by helping other less experienced users. You don't need to do it all by yourself; delegation and cooperation are the key to maintaining an open source project. Thank you a lot for your kind words! It began with ljm42 asking me to include the script within the plugin, because many users were having problems with two versions of it, the disks.cfg issue, etc... Then I started to patch the original code, but when I fixed one thing, another one got borked, so after some time I realized that a major rewrite was needed. I haven't started that part yet, so I'll need to study this a little more. I will definitely make things more readable. It could be done, but I think it would make things messier. The good thing is that if we want to use badblocks instead of the current method, we could just write a new function and then add an option. It's something we should evaluate, tho. The next crucial step is to add the SMART monitoring function, and then we can evaluate the use of badblocks etc... In my code, post-reads are just one operation: read a bunch of sectors, sum them, and then verify that the result is equal to 0. All reads default to hammering the read/write heads with Joe L.'s method of reading three random sectors, then the first and the last one, but this can be disabled from the command line. Again, thanks a lot for this message; I really appreciate your ideas and your feedback.
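     As a rough illustration of the "read the sectors and verify they are all zero" idea described above (not the script's actual code; it uses cmp against /dev/zero instead of summing, which performs the same check, the device and size are hypothetical, and the real script also accounts for the preclear signature written to the start of the disk):

       DISK=/dev/sdX                 # hypothetical device being verified
       BYTES=$((1024*1024*1024))     # check 1 GiB, for the sake of the example

       # cmp exits 0 only if every compared byte of $DISK matches /dev/zero
       if cmp -s -n "$BYTES" "$DISK" /dev/zero; then
           echo "first $BYTES bytes of $DISK read back as zeros"
       else
           echo "non-zero data found on $DISK"
       fi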
  22. This is the greatest test. If unRAID accepts it as precleared and a parity check runs without sync errors, then the preclear was successful. Thanks a lot for your input, johnnie.black!
  23. Here you go. Apparently it didn't update correctly. Try to update to 2016.03.12a and then test it.
  24. Updated to 2016.03.12 and still get the same error, let me know if you need any more info. Please do the test again and send me the file.