warpspeed

Members
  • Content Count

    19
Community Reputation

0 Neutral

About warpspeed

  • Rank
    Member

  1. Just curious how or whether this affects unraid, given the use of XFS these days: https://lwn.net/Articles/774114/
  2. Facepalm, totally missed that. That'll teach me to read fast before coffee.
  3. Is the NFS issue mentioned in the previous releases fixed? Maybe it would be good to have a standard "Known Issues" section in the release notes, which says "none" if there are none?
  4. I upgraded from 5.0.5 straight to 6.1.9 on Sunday. I did it by shutting my server down, backing up the config off the stick, formatting and putting a fresh 6.1.9 on, then copying back my key and config files. It's worked fine so far, with one exception: I needed to add sec=sys to my NFS RW exports (example below).
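     For anyone hitting the same thing, a hypothetical example of what the resulting export entry looks like with the option added (the share name and the other flags are placeholders; the actual line is generated from the NFS settings, where I added sec=sys to the rule):

         # /etc/exports - hypothetical entry for a share named "media"; the other
         # flags are assumptions and will vary with your export settings
         "/mnt/user/media" -async,no_subtree_check *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)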
  5. SMART - Pass 2

     It would be good if there could be a log of the state of SMART values at various points, starting from when a drive is first used. Then, over time, you could compare any changes to SMART values and see when they changed. A rough sketch of the idea is below.
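     Something like this is all I mean - not an existing unRAID feature, just a sketch; the device glob and the log directory are assumptions:

         #!/bin/bash
         # Append a timestamped snapshot of each disk's SMART attribute table to a
         # per-disk log on the flash drive, so later values can be diffed against
         # earlier ones. Adjust LOGDIR and the device glob to suit your array.
         LOGDIR=/boot/smart-history
         mkdir -p "$LOGDIR"
         for DEV in /dev/sd?; do
             [ -b "$DEV" ] || continue               # skip if the glob matched nothing
             NAME=$(basename "$DEV")
             {
                 echo "==== $(date '+%Y-%m-%d %H:%M:%S') ===="
                 smartctl -A "$DEV"                  # vendor-specific attribute table
             } >> "$LOGDIR/$NAME.log"
         done

     Run it from cron (or by hand after events like a preclear), and successive snapshots can be diffed to see which attributes changed and when.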
  6. That will work, but that has never been a requirement. If a drive reports "unrecoverable write error" then unRAID will 'disable' that device - it has no choice, because the data on that device is now wrong. If higher-level code immediately reads back the block that didn't write correctly, it will return the wrong data. We take great pains in the driver to make sure disabling a device is 'synchronous' with the I/O stream, so as not to permit this.

     Once a device is 'disabled', the normal course of action is:
     1. Stop array
     2. Yank bad drive
     3. Install new drive
     4. Start array - this will trigger a rebuild, because unRAID sees that a new device has been installed in a disabled disk slot

     One can also do this:
     1. Stop array
     2. Yank bad drive
     3. Start array

     Now you are running with on-the-fly reconstruct active for all I/O to the missing device. The assumption is that sometime soon you will:
     1. Stop array
     2. Install new drive
     3. Start array - this will trigger a rebuild, for the same reason as above

     If you wanted to try and re-use the "bad" drive, you can thus do this little dance:
     1. Stop array
     2. Yank bad drive (or just unassign it)
     3. Start array
     4. Stop array
     5. Reinstall bad drive (or re-assign it) - unRAID will "think" this is a new drive, because step 3 erased the disk-id of that bad slot
     6. Start array - this will trigger a rebuild

     Make sense?

     That's actually a really nice and brief summary of the options you have when a drive fails. Worth putting that somewhere prominent in the doco.
  7. And some more ...
     • New webGUI
     • New dashboard page
     • Enhanced navigation for user and disk shares
     • Enhanced plugins manager with support for release notes
     • Enhanced interface for Docker installation
     • Enhanced interface for VM installation
     • Download individual SMART reports
     • Download syslog information as a zip file

     All valid points that I hadn't considered. These are great lists. I guess it's because a lot of the conversation has been about VMs and Docker.
  8. Along with P+Q dual parity? <big grin> I'd certainly like to see more of a focus on things like this, given that it is unRAID's core functionality. VMs seem to have been a big focus of 6.0. It'd be nice to see a focus on the core features and functionality; I fear now that the drive and focus is going to be more about VM features and functionality.
  9. Just to clarify: if I'm running rc16c, to upgrade all I need to do is copy over the three files via SMB whilst the server is running, and then reboot it? Or do I need to shut down my server, pull the USB stick, copy the files, reconnect the stick and then boot? (A rough sketch of the first route is below.)
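     If it is the copy-while-running route, it would look something like this from another machine. The filenames here are placeholders - the actual list is whatever the release notes say - and "tower" is the server's name:

         # Copy the new boot files onto the running server's flash share over SMB,
         # then reboot the server so it boots from the new images.
         smbclient //tower/flash -U root -c 'put bzimage; put bzroot'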
  10. Completely stock blank install of rc16c. Set timezone to my local timezone, NTP server to a local NTP server.
  11. Anyone notice a possible timezone display bug in 5.0rc16c? I set my timezone, and it's showing as:

      Current date & time: Sat Aug 10 14:26:31 2013 EIT

      EIT, I don't recognise that; it's not my local timezone. The actual time is showing correctly, it's just the timezone. This is not a critical bug, looks mostly cosmetic. Via the CLI, it's correct:

      root@Tower:~# date
      Sat Aug 10 15:03:22 CST 2013
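      For comparison, a quick way to see what the OS itself has configured (stock Slackware paths assumed):

          date +'%Z %z'                        # abbreviation and UTC offset the shell sees
          ls -l /etc/localtime                 # which zoneinfo file is active
          strings /etc/localtime | tail -n 2   # abbreviation strings embedded in the tzfile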
  12. Regarding the kernel and netatalk: whilst I'm like everyone else and always like the latest and shiniest with all the bug fixes, the reality is that this close to a 5.0 release, you should only be updating packages to fix a known issue, or if there's a critical fix to something important (e.g. the official reiserfs fix). Things are looking pretty stable and nice right now for a 5.0 release; we don't want to mess that up and end up with another half a dozen RCs. If, after the 5.0 release, further minor issues are found, then look at fixing them in a 5.0.1 release or similar.
  13. Hi folks, I'm currently running v4.5.6 with several drives and a Slackware install. I want to upgrade to 5.0 when it comes out so I can add some larger (3TB) disks. Rather than trying to undo all the bits and pieces and then progressively upgrade the drives in my system, I've decided it'll be easier (and likely less risky) to spool up a second, free unRAID install at 5.0 on a different machine, then rsync everything from my live 4.5.6 system to the new 5.0 system (a sketch of the copy step is below). Then, once I've done that, wipe out the Slackware install, put a completely fresh 5.0 on my old 4.5.6 USB stick, and move the drives back to the old system. So my old system goes from being 4.5.6 with Slackware on a HDD to just 5.0 on a USB stick. My questions are:
      • Does this sound sensible?
      • Will I have issues when I swap the drives (1x parity and 2x data) back to the old system with a fresh 5.0 install (i.e. configuring the drives back into the proper array settings that I had temporarily on the other system)?
      • Will unRAID automatically recognise that the drives already have data on them, or should I copy a config file with them?

      The reason I'm doing it this way is that I only have one 'Plus' license key, keyed to my 4.5.6 USB stick. I can get away with using a free license on the temporary machine, as all my data will fit on 2x 3TB disks.
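      A minimal sketch of that copy step, assuming the new 5.0 box can reach the old one over SSH ("old-tower" and the paths are placeholders):

          # Pull everything from the live 4.5.6 system onto the new 5.0 array.
          # -a preserves permissions and timestamps, -H preserves hard links,
          # -P shows progress and lets an interrupted transfer resume.
          rsync -aHP root@old-tower:/mnt/user/ /mnt/user/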
  14. I pre-cleared three 3TB disks all at the same time, using screen (a sketch of how is after the results). Worked a treat. Here are the results, which I believe are good based on what I've read. Intending to use the Seagate as my parity drive and the WD REDs for data. Going to get some more WD REDs as well; I just want to get a different batch. Interesting to see the difference in SMART values between the Seagate and the WD RED drives, and also the difference in timing. I started all three of these drives within about 3-5 minutes of each other.

      == ST3000DM001-9YN166
      == Disk /dev/sdb has been successfully precleared
      == with a starting sector of 1
      == Ran 1 cycle
      == Using :Read block size = 65536 Bytes
      == Last Cycle's Pre Read Time  : 7:00:56 (118 MB/s)
      == Last Cycle's Zeroing time   : 5:16:14 (158 MB/s)
      == Last Cycle's Post Read Time : 13:35:56 (61 MB/s)
      == Last Cycle's Total Time     : 25:54:07
      == Total Elapsed Time 25:54:07
      == Disk Start Temperature: 31C
      == Current Disk Temperature: 32C
      ============================================================================
      ** Changed attributes in files: /tmp/smart_start_sdb /tmp/smart_finish_sdb
      ATTRIBUTE               NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate       116     102          6          ok          104832976
      Spin_Retry_Count          100     100         97          near_thresh 0
      End-to-End_Error          100     100         99          near_thresh 0
      High_Fly_Writes            99     100          0          ok          1
      Airflow_Temperature_Cel    68      69         45          near_thresh 32
      Temperature_Celsius        32      31          0          ok          32
      No SMART attributes are FAILING_NOW
      0 sectors were pending re-allocation before the start of the preclear.
      0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
      0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
      0 sectors are pending re-allocation at the end of the preclear;
        the number of sectors pending re-allocation did not change.
      0 sectors had been re-allocated before the start of the preclear.
      0 sectors are re-allocated at the end of the preclear;
        the number of sectors re-allocated did not change.

      == WDC WD30EFRX-68AX9N0
      == Disk /dev/sdc has been successfully precleared
      == with a starting sector of 1
      == Ran 1 cycle
      == Using :Read block size = 65536 Bytes
      == Last Cycle's Pre Read Time  : 8:48:12 (94 MB/s)
      == Last Cycle's Zeroing time   : 7:39:59 (108 MB/s)
      == Last Cycle's Post Read Time : 16:22:01 (50 MB/s)
      == Last Cycle's Total Time     : 32:51:13
      == Total Elapsed Time 32:51:14
      == Disk Start Temperature: 29C
      == Current Disk Temperature: 29C
      ============================================================================
      No SMART attributes are FAILING_NOW
      0 sectors were pending re-allocation before the start of the preclear.
      0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
      0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
      0 sectors are pending re-allocation at the end of the preclear;
        the number of sectors pending re-allocation did not change.
      0 sectors had been re-allocated before the start of the preclear.
      0 sectors are re-allocated at the end of the preclear;
        the number of sectors re-allocated did not change.

      == WDC WD30EFRX-68AX9N0
      == Disk /dev/sdd has been successfully precleared
      == with a starting sector of 1
      == Ran 1 cycle
      == Using :Read block size = 65536 Bytes
      == Last Cycle's Pre Read Time  : 9:03:15 (92 MB/s)
      == Last Cycle's Zeroing time   : 7:53:56 (105 MB/s)
      == Last Cycle's Post Read Time : 16:48:33 (49 MB/s)
      == Last Cycle's Total Time     : 33:46:46
      == Total Elapsed Time 33:46:46
      == Disk Start Temperature: 28C
      == Current Disk Temperature: 27C
      ============================================================================
      ** Changed attributes in files: /tmp/smart_start_sdd /tmp/smart_finish_sdd
      ATTRIBUTE               NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Temperature_Celsius       123     122          0          ok          27
      No SMART attributes are FAILING_NOW
      0 sectors were pending re-allocation before the start of the preclear.
      0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
      0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
      0 sectors are pending re-allocation at the end of the preclear;
        the number of sectors pending re-allocation did not change.
      0 sectors had been re-allocated before the start of the preclear.
      0 sectors are re-allocated at the end of the preclear;
        the number of sectors re-allocated did not change.
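      For reference, "using screen" above just means one detached session per disk, along these lines (session names are arbitrary labels; the preclear options are the same ones shown in the next post):

          # Start one detached screen session per disk so all three preclears run in parallel
          screen -dmS clear_sdb ./preclear_disk.sh -r 65536 -w 65536 -b 2000 -A -M 4 /dev/sdb
          screen -dmS clear_sdc ./preclear_disk.sh -r 65536 -w 65536 -b 2000 -A -M 4 /dev/sdc
          screen -dmS clear_sdd ./preclear_disk.sh -r 65536 -w 65536 -b 2000 -A -M 4 /dev/sdd
          screen -ls               # list the running sessions
          screen -r clear_sdb      # reattach to watch one; Ctrl-A d detaches again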
  15. Joe, in some of the doco it mentions that for drives over 2.2TB, GPT/4K alignment is forced and the -a and -A options are ignored. I just started a pre-clear on some 3TB drives and found that I still needed to use "-A", and that it wasn't ignored. When I first ran pre-clear, it said that it would not 4k-align the drive; it was only after I added "-A" that it said it would. Using pre-clear 1.13 with a command like so:

      ./preclear_disk.sh -r 65536 -w 65536 -b 2000 -A -M 4 /dev/sdc

      on a clean, brand-new unRAID 5.0-rc15a (spun it up initially just to clear these new drives).