warpspeed

Members
  • Posts: 29
  • Joined
Everything posted by warpspeed

  1. These sorts of issues really shouldn't be in 'stable' releases; this hardly seems like 'stable'-quality behaviour. Either that, or Unraid needs another release train such as long-term, or the release terms need to be re-jigged, e.g.:
     Development (Alpha/Beta), i.e. 7.xxxx or 6.13.xxxxx
     Release Candidate - installs any newer version release candidates (aka bleeding edge), i.e. 6.12.9-RC1
     Latest (newest official release) - installs only the latest official releases (basically today's stable), i.e. 6.12.8
     Stable (long-term, known stable releases only) - only installs known very stable, safe releases, i.e. 6.11.5
  2. Take a look at the Tailscale plugin; it's the easiest way to solve your challenges.
  3. Are you able to clarify exactly what this means? Is it in the Docker containers' advanced settings (the paths, variables, ports, etc.) when there are fields defined and present but with no value in them? If that's the case, is it resolvable by defining or deleting those instead of installing the plugin? If we want to keep those around, is it possible to install the plugin before upgrading so things don't break?
  4. I'd like to see some significant effort go into improving the UX for the features that Unraid currently has, along with working on nothing but bug fixes for a while. In particular, I find that whenever I have a problem with my Unraid server, e.g. errors or a drive dying, I need to refer to the forums or Reddit for good info and understanding of what to do so as not to stuff anything up. Really, this should be embedded into Unraid's web interface: if something happens, it should do its best to tell me the details of what's gone on and whether there's an issue, along with suggested corrective actions, including the order in which to do them. It'd also be good to see a concerted effort go into making Unraid really stable and sorting out the issues folks have. I'd like to see the update mechanism have channels for development, latest and stable, where stable is only extremely mature versions with no known bugs for some time. I have always found myself staying behind due to the issues I see on the forums here. I never upgrade to .0, .1, .2, or even .3 releases; it's only after that that I start becoming interested. With the latest 6.12.6 I'm still skeptical, as there still seems to be a large number of outstanding issues that some folks are having. I'm currently on 6.11.5 and have been since it was released. Prior to that I was on the 6.9 series.
  5. unbalanced should probably get a fresh thread, with a closing post in this one linking to the new thread, and the new plugin updated to point at the new thread.
  6. Can anyone confirm this is working on 6.11.5 yet?
  7. I'd really like to see some focus on improving self-guidance and help in the web UI, especially for problems and errors. Generally, when Unraid works there's not much need for this, but in almost all instances when I want to make major changes and/or I get errors (SMART, read, write, etc.) or a disk goes offline, the first thing I need to do is visit the forums or Google to discover and understand where I actually stand and what my options are. It'd be really good if some effort could go into making the web UI a little more informative and user friendly, perhaps providing steps or guidance on some common things. A good example: a little while ago I had some read errors from my parity drive during a parity check. No write errors, just read. I didn't know what that actually meant for me, so I spent time looking in the forums and also Googling. Google led me to a post on Reddit or in the forums where someone described what Unraid actually does when it gets a read error, and how that applied to the situation. I was thinking it'd be really good if the web UI actually had guidance like this. E.g. rather than just popping up a red alert saying "read error", it could explain the read error, what it most likely is, the fact that there was no write error, that it could be indicative of a possible issue, what to keep an eye on, and any other sensible recommendations. That's just one example; there are plenty more things like that where some steps, guidance, or even wizard-like flows could be useful. Some things like this may also help Unraid become a bit more mainstream, and would help improve Unraid's core and primary feature. It seems these days that all the effort is going into adding bells and whistles to support ever-growing and more complex environments (Docker, VMs, etc.), while there are likely a whole swag of usability improvements that could be made to the UI for the core features.
  8. Is there a way to migrate from the Docker container to the plugin without the Tailscale IP changing?
  9. That's good to know, BLKMGK. I'm still sitting on 6.9.2 but am now considering upgrading to 6.11.4.
  10. I have a suggestion for Fix Common Problems' Upgrade Assistant. I read in another thread somewhere that there's an XFS tooling update (or similar) between 6.9.x and 6.10/6.11.x which caused some folks' arrays not to mount due to some minor forms of corruption. I wonder if an XFS check could be run across array drives by the Upgrade Assistant (or even just by Fix Common Problems) to make sure this isn't an issue.
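The pre-upgrade check suggested above could be sketched roughly like this (a hypothetical helper, not part of Fix Common Problems; device names are illustrative). `xfs_repair -n` is the read-only "no modify" mode: it reports problems but never writes, so it is safe to run against an unmounted array disk.

```python
import subprocess

def xfs_check_cmd(device):
    """Build a read-only XFS check command; -n reports problems only."""
    return ["xfs_repair", "-n", device]

def find_suspect_disks(devices, run=subprocess.run):
    """Return the devices whose dry-run check exits non-zero.

    `run` is injectable so the scan logic can be exercised without
    real hardware.
    """
    return [dev for dev in devices if run(xfs_check_cmd(dev)).returncode != 0]

# e.g. find_suspect_disks(["/dev/md1", "/dev/md2"]) with the array stopped
```

Note that the filesystems must not be mounted when the check runs, which is why such a scan would fit naturally into a pre-upgrade step while the array is stopped.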
  11. Just curious how, or whether, this affects Unraid, given the use of XFS these days: https://lwn.net/Articles/774114/
  12. Facepalm, totally missed that. That'll teach me to read fast before coffee.
  13. Is the NFS issue mentioned in the previous releases fixed? It might be good to have a standard "Known Issues" section in the release notes, which says "none" if there are none.
  14. I upgraded from 5.0.5 straight to 6.1.9 on Sunday. I did it by shutting my server down, backing up the config off the stick, formatting and putting a fresh 6.1.9 on, then copying back my key and config files. It's worked fine so far, with one exception: I needed to add sec=sys to my NFS RW exports.
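For reference, an exports(5)-style rule with that change looks something like the fragment below. The share path and any additional options are illustrative; the point is the sec=sys flavour added alongside rw on the export:

```
"/mnt/user/media" *(sec=sys,rw)
```

sec=sys selects the traditional AUTH_SYS security flavour, which older clients expect when a server's defaults change across an NFS version bump.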
  15. It would be good if there were a log of the state of SMART values at various points, starting from when a drive is first used. Then, over time, you could compare any changes to SMART values and see when they changed.
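A minimal sketch of the idea above: snapshot SMART attributes periodically and report which raw values changed between any two snapshots. The parser below is a rough cut against `smartctl -A`-style attribute tables (numeric ID first, attribute name second, raw value last); real output has more variation, so treat this as illustrative only.

```python
def parse_smart_table(text):
    """Pull {attribute_name: raw_value} out of a smartctl -A style table."""
    attrs = {}
    for line in text.splitlines():
        parts = line.split()
        # Attribute rows start with a numeric ID and end with the raw value.
        if len(parts) >= 10 and parts[0].isdigit():
            attrs[parts[1]] = parts[-1]
    return attrs

def smart_changes(old, new):
    """Attributes whose raw value differs between two snapshots."""
    return {name: (old[name], new[name])
            for name in new if name in old and old[name] != new[name]}
```

Storing one parsed snapshot per day (keyed by drive serial) would be enough to answer "which values changed, and when" after the fact.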
  16. That will work, but that has never been a requirement. If a drive reports "unrecoverable write error" then unRAID will 'disable' that device; it has no choice, because the data on that device is now wrong. If higher-level code immediately reads back the block that didn't write correctly, then it will return the wrong data. We take great pains in the driver to make sure disabling a device is 'synchronous' with the I/O stream so as not to permit this. Once a device is 'disabled', the normal course of action is:
     1. Stop array
     2. Yank bad drive
     3. Install new drive
     4. Start array - this will trigger a rebuild, because unRAID sees that a new device has been installed in a disabled disk slot
     One can also do this:
     1. Stop array
     2. Yank bad drive
     3. Start array
     Now you are running with on-the-fly reconstruct active for all I/O to the missing device. The assumption is that sometime soon you will:
     1. Stop array
     2. Install new drive
     3. Start array - this will trigger a rebuild for the same reason as above
     If you wanted to try and re-use the "bad" drive, you can do this little dance:
     1. Stop array
     2. Yank bad drive (or just unassign it)
     3. Start array
     4. Stop array
     5. Reinstall bad drive (or re-assign it) - unRAID will "think" this is a new drive because step 3 erased the disk-id of that bad slot
     6. Start array - this will trigger a rebuild
     Make sense?
     That's actually a really nice and brief summary of the options you have when a drive fails. Worth putting that somewhere prominent in the doco.
  17. And some more...
     New webGUI
     New dashboard page
     Enhanced navigation for user and disk shares
     Enhanced plugins manager with support for release notes
     Enhanced interface for Docker installation
     Enhanced interface for VM installation
     Download individual SMART reports
     Download syslog information as zip file
     All valid points that I hadn't considered. These are great lists. I guess it's because a lot of the conversation has been about VMs and Docker.
  18. Along with P+Q dual parity? <big grin> I'd certainly like to see more of a focus on things like this, given it is unRAID's core functionality. VMs seem to have been a big focus of 6.0. It'd be nice to see a focus on the core features and functionality; I fear now that the drive and focus is going to be more about VM features and functionality.
  19. Just to clarify, if I'm running rc16c, to upgrade, all I need to do is copy over the three files via SMB, whilst the server is running, and then reboot it? Or do I need to shut down my server, pull the USB stick, copy the files, reconnect the stick and then boot?
  20. Completely stock blank install of rc16c. Set timezone to my local timezone, NTP server to a local NTP server.
  21. Anyone notice a possible timezone display bug in 5.0rc16c? I set my timezone, and it's showing as:
     Current date & time: Sat Aug 10 14:26:31 2013 EIT
     EIT? I don't recognise that; it's not my local timezone. The actual time is showing correctly, it's just the timezone. This is not a critical bug and looks mostly cosmetic. Via the CLI, it's correct:
     root@Tower:~# date
     Sat Aug 10 15:03:22 CST 2013
  22. Regarding the kernel and netatalk: whilst I'm like everyone else and always like the latest and shiniest with all the bug fixes, the reality is that this close to a 5.0 release, I think you should only be updating packages to fix a known issue, or if indeed there's a critical fix to something important (e.g. an official reiserfs fix). Things are looking pretty stable and nice right now for a 5.0 release; don't want to mess that up and end up with another half a dozen RCs. If, after a 5.0 release, there are found to be further minor issues, then look at fixing them in a 5.0.1 release or similar.
  23. Hi folks, I'm currently running v4.5.6 with several drives and a Slackware install. I want to upgrade to 5.0 when it comes out so I can add some larger disks (3TB). Rather than trying to undo all the bits and pieces and then progressively upgrade the drives in my system, I've decided that it'll be easier (and likely less risky) to spool up a second, free Unraid install at 5.0 on a different machine, then rsync everything from my live 4.5.6 system to the new 5.0 system. Then, once I've done that, wipe out the Slackware install, put a complete fresh 5.0 on my old 4.5.6 USB stick, and move the drives back to the old system. So my old system goes from 4.5.6 with Slackware on a HDD to just 5.0 on a USB stick. My questions are: does this sound sensible? Will I have issues when I swap the drives (1x parity and 2x data) back to the old system with a fresh 5.0 install (i.e. configuring the drives back into the proper array settings that I had temporarily on the other system)? Will Unraid automatically recognise that the drives already have data on them, or should I copy a config file with them? The reason I'm doing it this way is that I only have one 'Plus' license key, keyed to my 4.5.6 USB stick. I can get away with using a free license on the temporary machine, as all my data will fit on 2x 3TB disks.
  24. I pre-cleared three 3TB disks all at the same time, using screen. Worked a treat. Here are the results, which I believe are good based on what I've read. Intending to use the Seagate as my parity drive and the WD Reds for data. Going to get some more WD Reds as well; I just want a different batch. Interesting to see the difference in SMART values between the Seagate and the WD Red drives, and also the difference in timing. I started all three of these drives within about 3-5 minutes of each other.

     == ST3000DM001-9YN166 ==
     == Disk /dev/sdb has been successfully precleared
     == with a starting sector of 1
     == Ran 1 cycle
     == Using :Read block size = 65536 Bytes
     == Last Cycle's Pre Read Time  : 7:00:56 (118 MB/s)
     == Last Cycle's Zeroing time   : 5:16:14 (158 MB/s)
     == Last Cycle's Post Read Time : 13:35:56 (61 MB/s)
     == Last Cycle's Total Time     : 25:54:07
     == Total Elapsed Time 25:54:07
     == Disk Start Temperature: 31C
     == Current Disk Temperature: 32C
     ============================================================================
     ** Changed attributes in files: /tmp/smart_start_sdb /tmp/smart_finish_sdb
     ATTRIBUTE                 NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
     Raw_Read_Error_Rate     = 116     102     6                 ok          104832976
     Spin_Retry_Count        = 100     100     97                near_thresh 0
     End-to-End_Error        = 100     100     99                near_thresh 0
     High_Fly_Writes         = 99      100     0                 ok          1
     Airflow_Temperature_Cel = 68      69      45                near_thresh 32
     Temperature_Celsius     = 32      31      0                 ok          32
     No SMART attributes are FAILING_NOW
     0 sectors were pending re-allocation before the start of the preclear.
     0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
     0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
     0 sectors are pending re-allocation at the end of the preclear, the number of sectors pending re-allocation did not change.
     0 sectors had been re-allocated before the start of the preclear.
     0 sectors are re-allocated at the end of the preclear, the number of sectors re-allocated did not change.

     == WDC WD30EFRX-68AX9N0 ==
     == Disk /dev/sdc has been successfully precleared
     == with a starting sector of 1
     == Ran 1 cycle
     == Using :Read block size = 65536 Bytes
     == Last Cycle's Pre Read Time  : 8:48:12 (94 MB/s)
     == Last Cycle's Zeroing time   : 7:39:59 (108 MB/s)
     == Last Cycle's Post Read Time : 16:22:01 (50 MB/s)
     == Last Cycle's Total Time     : 32:51:13
     == Total Elapsed Time 32:51:14
     == Disk Start Temperature: 29C
     == Current Disk Temperature: 29C
     ============================================================================
     No SMART attributes are FAILING_NOW
     0 sectors were pending re-allocation before the start of the preclear.
     0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
     0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
     0 sectors are pending re-allocation at the end of the preclear, the number of sectors pending re-allocation did not change.
     0 sectors had been re-allocated before the start of the preclear.
     0 sectors are re-allocated at the end of the preclear, the number of sectors re-allocated did not change.

     == WDC WD30EFRX-68AX9N0 ==
     == Disk /dev/sdd has been successfully precleared
     == with a starting sector of 1
     == Ran 1 cycle
     == Using :Read block size = 65536 Bytes
     == Last Cycle's Pre Read Time  : 9:03:15 (92 MB/s)
     == Last Cycle's Zeroing time   : 7:53:56 (105 MB/s)
     == Last Cycle's Post Read Time : 16:48:33 (49 MB/s)
     == Last Cycle's Total Time     : 33:46:46
     == Total Elapsed Time 33:46:46
     == Disk Start Temperature: 28C
     == Current Disk Temperature: 27C
     ============================================================================
     ** Changed attributes in files: /tmp/smart_start_sdd /tmp/smart_finish_sdd
     ATTRIBUTE                 NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS RAW_VALUE
     Temperature_Celsius     = 123     122     0                 ok     27
     No SMART attributes are FAILING_NOW
     0 sectors were pending re-allocation before the start of the preclear.
     0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
     0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
     0 sectors are pending re-allocation at the end of the preclear, the number of sectors pending re-allocation did not change.
     0 sectors had been re-allocated before the start of the preclear.
     0 sectors are re-allocated at the end of the preclear, the number of sectors re-allocated did not change.
  25. Joe, in some of the doco it mentions that for drives over 2.2TB, GPT/4K alignment is forced and the -a and -A options are ignored. I just started a pre-clear on some 3TB drives and found that I still needed to use "-A", and that it wasn't ignored. When I first ran pre-clear, it said that it would not 4K-align the drive; it was only after I added "-A" that it said it would. Using pre-clear 1.13 with a command like so: ./preclear_disk.sh -r 65536 -w 65536 -b 2000 -A -M 4 /dev/sdc on a clean, brand-new unRAID 5.0-rc15a (spun it up initially just to clear these new drives).
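As a side note, the 4K-alignment question above boils down to simple arithmetic: a partition is 4K-aligned when its starting byte offset is a multiple of 4096. The sketch below illustrates this; with 512-byte sectors, the classic legacy start sector 63 is not aligned, while a start sector such as 64 is.

```python
def is_4k_aligned(start_sector, sector_size=512):
    """True when the partition's starting byte offset is a multiple of 4096."""
    return (start_sector * sector_size) % 4096 == 0

# 64 * 512 = 32768 bytes, an exact multiple of 4096 -> aligned
# 63 * 512 = 32256 bytes, not a multiple of 4096    -> misaligned
```

This is why tools that switch between the two partition layouts matter for performance on 4K-sector ("Advanced Format") drives: a misaligned partition turns each filesystem write into two physical-sector read-modify-write operations.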