bonienl Posted February 22, 2015 Share Posted February 22, 2015 @limetech Minor thing in the Archived Notifications tab. I've gone to delete the notifications I had received but am unable to delete the last one. Not sure if it is because this last notification is a warning or not...(in my case, my ssd cache drive has 2 reallocated sectors.) Is it intentional for this type of warning notification to stick around for tracking purposes and will be removable once the status changes again (ie. fixed or got worse.)? cheers, gwl No, it should be removable. Thanks for reporting, look into it. Quote Link to comment
bonienl Posted February 22, 2015 Share Posted February 22, 2015 Another nit-picking item. On the GUI, if you go the the 'Settings', 'Global Share Setting', 'Cache Settings' and (I do not have a cache drive installed), the 'Done' button is inactive. It does not return to the 'Settings' page as all of the other 'Done' buttons do. Seen that and corrected, also made the display of 'Cache Settings' conditional, as it will be shown only if there is actually a cache drive present. Nice! While you're at it... If array started, and no cache disk/pool, the 'Cache' section/tab of the Main page should not display. Hmm, that should have been the case already I have a look at the condition definition. Thx. Quote Link to comment
bonienl Posted February 22, 2015 Share Posted February 22, 2015 Another nit-picking item. On the GUI, if you go the the 'Settings', 'Global Share Setting', 'Cache Settings' and (I do not have a cache drive installed), the 'Done' button is inactive. It does not return to the 'Settings' page as all of the other 'Done' buttons do. Seen that and corrected, also made the display of 'Cache Settings' conditional, as it will be shown only if there is actually a cache drive present. Nice! While you're at it... If array started, and no cache disk/pool, the 'Cache' section/tab of the Main page should not display. Hmm, that should have been the case already I have a look at the condition definition. Thx. Ok, made a fix for that as well. Quote Link to comment
subwars Posted February 22, 2015

Thought it was about time I took the plunge and upgraded from v5. Everything's working, except that while I'm playing back a video file from my HTPC it keeps pausing, as if it's buffering or something? Any ideas?

edit: just pulled a few files across from the server to the HTPC HDD, and they copied across at good speeds (~48 MB/sec).
olympia Posted February 22, 2015

Quote: I'd imagine they want to find and fix the disk spin-up issue before 6.0 RC.

Quote: I agree - but that is not a new feature! Since it only seems to affect a proportion of users, if fixing it proves a bit intractable then I can see this being something that could still be outstanding as a "Known Issue" even when v6 goes final. After all, it is not something that can lead to data loss. The downside is the cost of the power (and possibly extra noise/heat) due to disks not spinning down. Some have worried about disk lifetime due to this issue, but much of the evidence seems to point to it increasing disk lifetime, as spinning disks up/down is harder on them than continuous spinning.

This would be a very weird approach, to say the least. Spinning down drives is one of the core and greatest features of unRAID. Releasing v6 with a known issue like that would be just like I said - VERY strange. Just because you are not affected, or just don't care about this, you shouldn't think this is OK. ...and there were smaller bugs than this holding up a release in the past.
HellDiverUK Posted February 22, 2015 Share Posted February 22, 2015 Just a to to say that everything so far seems almost perfect for me on b14 (then again b13 was fine too). Good effort guys, looking forward to RC1 (I think it's ready once you fix the Haswell throttling thing). Quote Link to comment
HellDiverUK Posted February 22, 2015 Share Posted February 22, 2015 OK, scratch my last post. Whiskey Tango Foxtrot? What is with the force preclear of a new drive? I know the drive I am adding is OK. I'm now looking at hours to preclear a drive that doesn't need precleared. No array. No server. So, back to the Whiskey Tango Foxtrot thing. NOT impressed. Why isn't there a "I know this drive is fine, just get on with it and add it to the array" option? Quote Link to comment
dikkiedirk Posted February 22, 2015

Is there a way around the need to re-create the docker image? I don't like doing it. If I back up the cache drive, format it to XFS, then copy everything back and then upgrade to beta 14, would that work?
itimpi Posted February 22, 2015 Share Posted February 22, 2015 OK, scratch my last post. Whiskey Tango Foxtrot? What is with the force preclear of a new drive? I know the drive I am adding is OK. I'm now looking at hours to preclear a drive that doesn't need precleared. No array. No server. So, back to the Whiskey Tango Foxtrot thing. NOT impressed. Why isn't there a "I know this drive is fine, just get on with it and add it to the array" option? How do you know the disk has been zeroed (not just fine) if you have not pre-cleared it first! If the disk is not correctly zeroed then you cannot just add it without invalidating parity. Avoiding the downtime of unRAID array zeroing I is one of the advantages of using the pre-clear script. Quote Link to comment
PeterB Posted February 22, 2015

Quote: I know the drive I am adding is OK.

OK? In what respect?

Quote: I'm now looking at hours to preclear a drive that doesn't need to be precleared.

Are you saying that you'd already precleared it, using JoeL's preclear script?

Quote: No array. No server. So, back to the Whiskey Tango Foxtrot thing. NOT impressed. Why isn't there a "I know this drive is fine, just get on with it and add it to the array" option?

I ask again - fine in what respect? You do realise that a new drive has to be guaranteed to have zeros written to every byte of the data area, otherwise it invalidates your parity?
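The zeroing requirement follows directly from how XOR parity works. A minimal sketch (illustrative Python, not unRAID code; the tiny byte-string "disks" are made up for the demo):

```python
from functools import reduce

def parity(disks):
    """Bytewise XOR parity across equal-sized data disks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*disks))

# Two tiny "disks" plus their computed parity
array = [bytes([1, 2, 3]), bytes([4, 5, 6])]
p = parity(array)

# An all-zero disk XORs to no change: parity stays valid, no rebuild needed
assert parity(array + [bytes(3)]) == p

# A disk with leftover data changes the XOR: existing parity is now invalid
assert parity(array + [bytes([9, 9, 9])]) != p
```

That is why the array either needs a disk that is already zeroed (and carries a valid preclear signature saying so) or has to zero it itself before the disk can join without a parity rebuild.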
NAS Posted February 22, 2015

Quote: Is there a way around the need to re-create the docker image? I don't like doing it. If I back up the cache drive, format it to XFS, then copy everything back and then upgrade to beta 14, would that work?

Not that I know of. The format of the virtual disk within the new image file is fundamentally different and fixes a serious bug. However, recreating this image should be explained, as currently there is no documented procedure.
ChaOConnor Posted February 22, 2015

Question: since updating to the new beta, after each reboot my Docker containers are gone and I have to recreate them. The docker image itself is there, but in the GUI there are no containers. I've tried it twice now. I install something (SAB, let's say), it runs great, no problems. I reboot, it's all gone. Any ideas? FWIW: the docker image is on a SNAP disk. Previously it was under a user share on the SNAP disk (Applications), but after the upgrade I put it directly on the SNAP disk (/mnt/disk). Thanks!
BRiT Posted February 22, 2015

Quote: Question: since updating to the new beta, after each reboot my Docker containers are gone and I have to recreate them. The docker image itself is there, but in the GUI there are no containers. FWIW: the docker image is on a SNAP disk.

I think the SNAP disk isn't mounted until well after Docker starts. At best you're in a race condition that previously you might have been winning but now you're losing.
dlandon Posted February 22, 2015

It would be best for your Dockers to be on a cache drive and not a SNAP disk. Why did you choose to do that?
HellDiverUK Posted February 22, 2015 Share Posted February 22, 2015 Question: since updating to the new beta, after each reboot my docker containers are gone and I have to recreate them. The docker image itself is there, but in the GUI, there are no containers. I've tried it twice now. I install something (SAB let's say), runs great, no problems. I reboot, it's all gone. Any ideas? FWIW: the docker is on a SNAP disk. Previously it was under a user share on the SNAP disk (Applications), but after the upgrade I put it directly on the SNAP disk (/mnt/disk) Thanks! I think the snap disk isnt mounted to the system until well after Docker starts. At best you're in a race condition that previously you might have been winning but now you're losing. Just mount/share the SNAP drive, give it a few seconds then just start Docker. Everything springs in to life. Quote Link to comment
HellDiverUK Posted February 22, 2015 Share Posted February 22, 2015 OK, scratch my last post. Whiskey Tango Foxtrot? What is with the force preclear of a new drive? I know the drive I am adding is OK. I'm now looking at hours to preclear a drive that doesn't need precleared. No array. No server. So, back to the Whiskey Tango Foxtrot thing. NOT impressed. Why isn't there a "I know this drive is fine, just get on with it and add it to the array" option? How do you know the disk has been zeroed (not just fine) if you have not pre-cleared it first! If the disk is not correctly zeroed then you cannot just add it without invalidating parity. Avoiding the downtime of unRAID array zeroing I is one of the advantages of using the pre-clear script. EXACTLY. I've already precleared the disk on another unRAID instance. I KNOW the drive is fine. Besides, the disk has been running fine in my old server for over a year, and has been scanned every week by Stablebit Scanner. But the 'production' unRAID doesn't give me the option to just add the drive. Which sucks. Or am I missing something? Quote Link to comment
HellDiverUK Posted February 22, 2015 Share Posted February 22, 2015 OK? In what respect? I ask again - fine in what respect? You do realise that a new drive has to be guaranteed to have zeros written to every byte of the data area, otherwise it invalidates your parity? Thanks for your input. And the slightly condescending tone. It's very much appreciated. Quote Link to comment
ChaOConnor Posted February 22, 2015

Quote: It would be best for your Dockers to be on a cache drive and not a SNAP disk. Why did you choose to do that?

I was having that corruption issue with docker.img. The corruption was causing the unRAID system to not start the array when the docker image was on the cache drive. By moving it to SNAP, that issue didn't occur, but now I've found myself needing another workaround. Hopefully with the new Docker changes that image corruption won't happen, and putting it back on the cache drive won't be a problem. Thanks!
itimpi Posted February 22, 2015 Share Posted February 22, 2015 EXACTLY. I've already precleared the disk on another unRAID instance. I KNOW the drive is fine. Besides, the disk has been running fine in my old server for over a year, and has been scanned every week by Stablebit Scanner. Sounds as if something might have gone wrong with the pre-clear on the other server then as it appears that unRAID has not recognized a valid pre-clear signature on the drive. Are you using the latest pre-clear script (earlier versions did not work correctly on unRAID 6). Having said that I guess it is possible there is a bug in recognising the pre-clear signature in b14? Quote Link to comment
sparklyballs Posted February 22, 2015

Quote: EXACTLY. I've already precleared the disk on another unRAID instance. I KNOW the drive is fine. Besides, the disk has been running fine in my old server for over a year. But the 'production' unRAID doesn't give me the option to just add the drive. Which sucks. Or am I missing something?

Are you saying the drive was precleared and has no data on it, or was precleared and has data on it from being in the old server for a year? Because if it has data from the old server then it's not zeroed and will screw up parity, regardless of it having been precleared at some past stage.
dlandon Posted February 22, 2015

Quote: I was having that corruption issue with docker.img. The corruption was causing the unRAID system to not start the array when the docker image was on the cache drive. By moving it to SNAP, that issue didn't occur.

That makes sense. Yes, the cache drive is best.
jonp Posted February 22, 2015

Quote: Is there a way around the need to re-create the docker image? I don't like doing it.

Sure, I'll work up a guide with screenshots. Probably won't get posted till much later today. The short version is to delete the existing image using the UI and then recreate it by starting the service again. Then click "Add Container" and, for each app you previously had installed, select it from the My Templates section of the Template drop-down and click Create. App images should automatically redownload as a result.
dikkiedirk Posted February 22, 2015

Quote: Not that I know of. The format of the virtual disk within the new image file is fundamentally different and fixes a serious bug. However, recreating this image should be explained, as currently there is no documented procedure.

Would be nice to hear what the easiest approach is.
BRiT Posted February 22, 2015

Quote: I was having that corruption issue with docker.img. The corruption was causing the unRAID system to not start the array when the docker image was on the cache drive.

I haven't had corruption issues since converting my cache drive to XFS.
ChaOConnor Posted February 22, 2015 Share Posted February 22, 2015 Mine is still BTFS. Need to convert. Quote Link to comment