Posts posted by gundamguy

1. The fact a share is cache-only does not necessarily mean it's not on 'protected' storage.  If you have a 2-or-more-device cache pool, then your cache-only share is protected and gets a green ball.

    Also, it is NOT in any way an "error" condition ... it's simply an informative status indicating some data isn't fault-tolerant.    For cache-only shares without a pool, that's always going to be the case;  for cached shares, that will automatically change when they're moved to the array.

     

    I generally agree with what you've said but here are my comments.

     

If the goal is to be informative, perhaps don't use the same symbol that, on a different tab, means there are problems you need to fix. That requires people to understand that in one context this symbol is bad news, but in another context it's nothing serious at all.

     

I think cache pools are great, but there are legit reasons (limited drive bays, limited ports, needing all of your array slots) why people sometimes won't want, or won't be able, to run a cache pool.

     

    An ideal build would have a cache pool IMO.

  2. It means "WARNING: if the wrong device Fails you may lose data."  Thought of in that context, the symbol is appropriate.

     

    I agree with you in that context.

     

I'm not sure this needs to be a persistent warning when it comes to cache-only shares, because ostensibly you are using your cache-only share for plugins/Docker apps and it's only housing data you are OK with losing. I agree with Gary that this shouldn't exactly be a green circle either, as it's not the same condition.

  3. ...  If things are working as intended it shouldn't really be showing an "error" condition IMO.

     

    Agree ==> HOWEVER, the yellow triangle is NOT an "error" condition.  It simply means "Some or all files are on unprotected storage."    In the case of a cache-only share being stored on an unprotected cache drive that is the normal condition ... NOT an error.

     

    The user should certainly KNOW if he's storing data on a cache-only share ... and this indicator simply reminds him that the share is unprotected.    Don't like looking at it?  ... update the cache to a pool so it's protected  :)

     

I hear you, and I agree that files on the cache are not inherently an error condition.

     

However, most of us have been conditioned to see yellow triangles as signifiers that there is a non-critical issue that should be looked at.

     

If this is normal behavior and everything is performing as intended, I don't understand why it's shown with a symbol we are conditioned to believe signifies that something non-critical needs investigating.

     

    Right now it basically says "Caution! Everything is working properly!"

     

What I would like to see is a better system that only shows a warning symbol if something actually went wrong, like the mover failing or files not being moved according to the mover schedule. Perhaps even something that shows the time of the last move and the time until the next one? I don't know; it seems like something that could be improved to give users better information, so they know whether they need to act on something or not.

     

4. I hate to do this, but.... I'm still not convinced this is actually fixed (for me anyway)...

    ...

     

Me too. Not that big an issue, but... I'm on RC5 and using an AMD A6 dual-core CPU.

     

Even when the system is totally idle (no disks spun up...), one of the two cores is always at max speed (3900 MHz in my case), according to what's reported by the WebGUI.

     

The most interesting part is that the two cores continuously alternate, each taking its turn at max speed.

     

This behavior has been there since RC3 (the first RC for me), and I have never seen anything similar in Windows.

     

Are there some settings I could try too (not being on Intel...)?

     

I wouldn't trust the WebGUI report to be accurate. The best way to determine whether your CPU is stepping down is to go to the command line and run:

     

    cat /proc/cpuinfo | grep MHz

     

Run it a couple of times, at different times, and see what it reports, both with the WebGUI open and with it closed... It's possible you have this issue, but it might not be exactly this problem.
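If you'd rather not retype it, a quick shell loop like this (nothing unRAID-specific, just plain bash as an example) will sample the reported clock speeds every 10 seconds:

# sample the per-core MHz six times, 10 seconds apart
for i in $(seq 1 6); do grep MHz /proc/cpuinfo; echo ---; sleep 10; done

If the cores are stepping down properly you should see the MHz values drop well below the max for at least some of the samples.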

  5. Any plugins available for unraid 5 that allows us to backup a specific set of folders daily?

     

I think there is a plugin for CrashPlan floating around out there somewhere (try searching the plugin design forum), but I and some other people here used the following tutorial to create custom scripts that can run a daily (or hourly, weekly, monthly, etc.) backup.
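Just to give a feel for what those custom scripts usually boil down to (the paths and share names below are made up; adapt them to your own shares), it's typically an rsync command in a small script plus a cron entry:

#!/bin/bash
# /boot/custom/daily_backup.sh - example only
# mirror the Documents share into a Backups share, removing files that no longer exist at the source
rsync -av --delete /mnt/user/Documents/ /mnt/user/Backups/Documents/

# example cron entry: run the script every day at 3 AM
0 3 * * * /boot/custom/daily_backup.sh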

6. You've got two different issues: replacing a disk with a bigger disk, and converting your file systems.

     

The thing to keep in mind is that if you rebuild from parity, the array will rebuild that disk with its current filesystem; you can't switch file systems and rebuild at the same time. So what you will need to do is either convert the disk to XFS first and then swap in the bigger disk, or swap in the bigger disk and then convert to XFS.

     

    If you've got space issues I think the second path makes more sense...

     

Hmm...  Maybe I'm missing something.  I was not planning on rebuilding a drive from parity.  My thought was to empty out one drive by moving its contents to another existing drive, remove the empty drive and replace it with a larger drive, pre-clear the new larger drive, format it with XFS, and then move data to it from another existing drive.  Once that is done, I would continue a similar process until the data has been migrated to drives formatted with XFS.  In particular, here is the layout:

     

Current state                    Action             Future state
Parity: 2 TB                     (keep drive)       Parity: 2 TB
Disk1:  1 TB RFS (60% full)      (replace drive)    Disk1:  2 TB XFS
Disk2:  1 TB RFS (40% full)      (replace drive)    Disk2:  2 TB XFS
Disk3:  2 TB RFS (20% full)      (keep drive)       Disk3:  2 TB XFS

     

    My thought process is as follows:

    Move all data from Disk1 to Disk3, remove current Disk1. 

    Install replacement for Disk 1

    Move all data from Disk2 to new Disk 1, remove Disk2.

    Install replacement for Disk 2

    Move all data from Disk3 to new Disk 2, re-format Disk3.

    Move some data from Disks 1 and 2 to Disk 3 to even out the disks somewhat.

     

    Does that make sense?

     

    Thanks,

     

    John

     

Your process does make sense, and will work, but as Trurl explained, when you remove a data disk from the array you are going to invalidate parity and be prompted to rebuild it. So you might have more downtime as you do parity rebuilds during this process. Also, since you plan to replace two disks, that means you'll be prompted to rebuild parity twice. Not sure if there is a better way to sequence this, but you can view this as two separate tasks: 1) converting from RFS to XFS, and 2) replacing two hard drives.

     

If I were trying to do what you want to do, I would first replace the drives and then convert from RFS to XFS, but you can also sequence this so that you replace drives and convert in alternating steps.

     

    Hello all, I am currently in the process of moving my old Unraid 5 server to a new Unraid 6 build.  Key facts are I don't have a single spare drive that is equal to the largest drive in the old array (I simply have a couple of external 1TB drives) and I want to make use (of course) of XFS on the Unraid 6 server.

     

    Is it possible for me to take my 3TB parity out of the old server and transition that to the new server, whilst still being able to access the old server and copy data to the external drives (array will be unprotected)?

    Once I have cleared off a 2TB data drive from the old server, and place it in to the new one, do I need to pre-clear this again, or just re-format from the GUI as XFS?

     

    Many thanks for the advice. :)

     

You can run without a parity disk for a while and still have access to the data on your array data drives, if you want to.

     

I'm not sure that pre-clearing has much value for you here since the disk has already been stress-tested before. I'm not an expert on pre-clearing, but I would think that you should be fine just re-formatting it. Then again, I am not the best on pre-clearing, so there might be some benefit I am missing here.

     

     

7. 10 - Now is a good time to move the files in the "t" directory to the root on [dest]. I do this with cut and paste from Windows Explorer.

     

    11 - Stop the array (no need to delete anything from the [source])

     

12 - Go back to step 2.  Note that this isn't a race - you can do it at your leisure over the course of days, weeks, or months. I do one or two a week or so.

     

    Greetings,

     

I've been reading and re-reading this thread over the last couple of weeks.  I have a few questions:

     

1) Before I can add a new, larger drive, I need to move data around so that I can remove a drive.  I figure it's about 600 GB of data.  Would it be best to shut down the array to move that data?  I assume in that case I'm safe to just telnet to the unRAID server and use the following rsync command to move the data into the existing hierarchy, without having to create a temporary subdirectory that isn't part of the user share, correct?

     

    rsync -av --progress --remove-source-files /mnt/diskX/ /mnt/diskY/

     

    At that point, I should be able to start following the guide, I think. 

     

2) Where does the pre-clear of the new drive get done?  Is it before the steps in bjp999's guide?  It is mentioned, but it is above the steps, so I wasn't sure.

     

    3) Also - in the steps above, as the array is running, are you at risk of duplicate files, and confusing things between steps 10 and 11?

     

    Thanks,

     

    John

     

You've got two different issues: replacing a disk with a bigger disk, and converting your file systems.

     

The thing to keep in mind is that if you rebuild from parity, the array will rebuild that disk with its current filesystem; you can't switch file systems and rebuild at the same time. So what you will need to do is either convert the disk to XFS first and then swap in the bigger disk, or swap in the bigger disk and then convert to XFS.

     

    If you've got space issues I think the second path makes more sense...

     

    Answers to questions:

     

1) If you've got space on the destination disk for the files on the source disk, that should work just fine (see the quick check below).

2) This depends on how you want to / are able to do this process. You might be better off pre-clearing your disk, replacing the smaller disk, letting it rebuild, and then starting the conversion process to XFS. In that case you'll want to pre-clear before moving any data.

3) You should be fine at step 10 / 11 since you will have already formatted the disk you moved the data off of.
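A quick way to sanity-check question 1 from the command line (the disk numbers are just placeholders; use your own):

du -sh /mnt/diskX      # how much data is on the source disk
df -h /mnt/diskY       # how much free space the destination disk has

If the free space on diskY comfortably exceeds the used space on diskX, the rsync move should fit.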

  8. So does anyone know why the command: "rsync -nrcv /mnt/disk3 /mnt/disk14/t >/boot/verify_disk14.txt" doesn't work for me?

    :o

     

I don't know for sure, but I do know that trailing slashes are important to rsync. Perhaps it's a mismatch issue, where you did /mnt/disk3/ the first time and /mnt/disk3 the second time.

     

I'd run it again like this:

    rsync -nrcv /mnt/disk3/ /mnt/disk14/t >/boot/verify_disk14.txt

and see if that comes back with an empty file list (since the files are already in disk14/t).

    Thanks, I will try that.  :)

     

EDIT: It worked; the file ended up containing:

    sending incremental file list
    
    sent 382,411 bytes  received 813 bytes  6.83 bytes/sec
    total size is 3,928,060,848,066  speedup is 10,250,038.75 (DRY RUN)

     

    That is great news. Glad to hear it!

  9. So does anyone know why the command: "rsync -nrcv /mnt/disk3 /mnt/disk14/t >/boot/verify_disk14.txt" doesn't work for me?

    :o

     

I don't know for sure, but I do know that trailing slashes are important to rsync. Perhaps it's a mismatch issue, where you did /mnt/disk3/ the first time and /mnt/disk3 the second time.

     

I'd run it again like this:

    rsync -nrcv /mnt/disk3/ /mnt/disk14/t >/boot/verify_disk14.txt

and see if that comes back with an empty file list (since the files are already in disk14/t).

     

     

  10. Any ideas why windows thinks the files are still there.

     

    I want to format the drive and continue.

     

    Thanks

     

I think (I haven't tested the -R option of ls myself) you can check whether the files are actually still there using the following command.

     

    ls -R /mnt/disk4

     

ls lists the contents of a directory, the -R option does it recursively, and /mnt/disk4 is the disk I assume you want to check; if it's not disk 4, substitute whichever disk number it is...

     

    I suspect this will only return the directories you had but no files.

     

    As to why Windows would think your files are still on disk4... well it doesn't.

     

    When trurl asked you to

    v /mnt/user

the results that you posted show that you have a user share named "disk4" (under /mnt/user/disk4) that looks like a disk share but is actually a user share. Because disk4 is really a user share, and user shares aggregate across all disks (unless the share is set not to do that), it shows the data on the other disks when you look at the disk4 folder in Windows.

     

Unless creating a user share named disk4 was intentional, I suggest you change that.

     

  11. The files are still there and I can play them.

    I realize that delete doesn't do anything except flag the file as deleted.

However, the file is marked by --remove-source-files; unRAID sees it as deleted, thus the 44MB usage, but Windows sees 2TB of media files.

     

    Additional thought, are you looking at a disk share or a user share?

     

    If you are looking at a disk share the share should be called diskX.

     

User shares by default aggregate files and directories from across all the disks in the array (unless set differently), so even though you moved files from DiskX to DiskY, they still show up in the user share because they exist on DiskY.

     

If you look at the DiskX share (by sharing that disk under the disk settings), it only shows what's on DiskX, and it should show empty folders.
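A quick way to see the difference from the command line (the share name here is just an example):

ls /mnt/diskX/Movies    # only what is physically on DiskX
ls /mnt/user/Movies     # the aggregated user share across all disks

If the first command shows empty folders while the second still lists your media, the move worked and you're just looking at the user share in Windows.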

     

If this isn't it, and those files really are still there, I am honestly confused about what's going on, though I don't know that it's a huge problem.

  12. Quick question.

     

    Have managed to copy a drive using rsync.

    I wanted to have a quick look to make sure everything is gone.

I shared the drive and looked at the share in Windows (sorry, just a novice Linux guy), and everything is still there.

    The main tab in unraid says I only have 44MB used so unraid believes everything is gone.

    How do I verify before I format the drive?

     

    Thanks

     

Did you dig down below the top level into the folders where the data you wanted to copy was stored? I think --remove-source-files does not delete folders, just files, so you should have a bunch of empty folders left over. I suspect it only appeared that the data was still there because the (now empty) folders were still there.
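One way to double-check from the command line (assuming the source disk is disk4; adjust the path to whichever disk you copied from):

find /mnt/disk4 -type f

That lists only regular files and ignores the leftover empty directories, so if it prints nothing, the data really is gone from that disk.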

     

    If that's not it we can try something else to verify.

  13. I am moving from Unraid 5 to 6 as well, and have very limited hard drive space to use as temporary storage.  Unfortunately I don't have a drive equal to the largest one in the array (2TB).  I only have 2 x 1TB external drives.  Is it possible to do an rsync command that will only do 1TB worth of data, or will I have to manually split and copy the directories over?  Is this the best way forward given my situation?

     

    I want to start from fresh in 6.  Once data is copied off a drive and put in to the new array, I will reformat as XFS.  What's the best way to build the new server, should I add drives one-by-one or add them all and then calculate parity?

     

    Thanks guys.

     

I think in that case you have to manually split and copy the directories over. I do not think there is any way to tell rsync to only copy a set total amount of data.
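Splitting manually usually just means running the same rsync once per top-level folder until an external drive fills up. Something like this, where the folder names and destination mount points are just examples (use wherever your external drives are actually mounted), and checking sizes first with du so each batch fits on a 1TB drive:

du -sh /mnt/disk1/*
rsync -av --progress /mnt/disk1/Movies/ /mnt/external1/Movies/
rsync -av --progress /mnt/disk1/Photos/ /mnt/external2/Photos/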

  14. It is worth pointing out why this topic is named "Anti theft" and why potentially some user will want a second feature.

     

The idea is that if someone steals your unRAID server, they can't plug it in and access your data.

     

How exactly is that accomplished, and would it remove one of the best features of unRAID, namely that you can take a drive out of your unRAID array, plug it into a Linux distro, and get the data off should your hardware fail?

     

I'm just having nightmares again about the day my PS3 died. You might not know this, but the PS3 has a built-in hard drive which is replaceable by individuals. So clearly, when the PS3 hardware fails, you just pull out the PS3 hard drive, plug it into a new PS3, and get on with your life, right.... WRONG. Each hard drive was individually formatted to only be readable by that particular PS3... so when the hardware fails, even though the hard drive is good and the data is there, you can't recover it... (Good job, Sony!)

     

Anyway, I'm only mentioning this because it is a concern of mine, not because I believe that this concern will be realized.

     

     

     

  15. Just doing some reading here in preparation for my ultimate migration to v6.  I must say, I'm surprised there's not a simpler method.  Not that this is overly complicated, but there appears to be different schools of thought/little consensus/lots to keep in mind.

     

    For me, I'm  wondering how to handle user shares that span multiple disks.  For example, if I'm copying from disk1 (RFS) to disk16 (XFS), do I later reassign what was disk16 to disk1, preserving my user share config, or do I keep records of all the changes, and ultimately make that change to the user share(s)?  If a particular user share spans 6 disks, do I just delete that user share until I'm done migrating all those disks to XFS, then create it again?  Or am I changing the config of the included disks in that user share 6 times?

     

    Is any of this even worth the hassle?

     

    Thanks!

     

I believe, the way that user shares work, this isn't truly an issue. If you copy disk1 to disk16, data that was stored under /mnt/disk1/Movies and is now at /mnt/disk16/Movies will still show up under /mnt/user/Movies. I think this is true even if, say, you exclude disk16 in the Movies share settings. The exclude/include options control which disks new data is written to, and don't affect whether existing data shows up under the user share; or at least that is my understanding.

     

That said, if you are using the exclude/include options for your shares, you will need to go back and change which disks you are excluding/including to ensure that any new data added to the array goes to the right disks.

16. I followed this thread quite closely. Before even starting the RFS to XFS copy, I set the mover to a monthly schedule, to be invoked next in 30 days. I also shut down Docker. I have no backups going into unRAID and no plug-ins installed. I am using Beta 13 (did not run into the cache format bug).

     

    I was able to get about 2TB of a 3TB RFS drive copied to a 3TB drive using screen and this command: "rsync -av --progress --remove-source-files /mnt/disk1/ /mnt/disk3/"

     

    But then the process stopped, and all disks stopped spinning for more than an hour. So I shut down the screen session and started again using the same command. It never restarted and the screen is now stuck at

     

    sending incremental file list

    ./

     

    That's what the screen has looked like for a couple of hours.

     

    How to finish?

     

I'm not 100% sure why rsync is having trouble, but here are a few things you can try. You can add -n to do a dry run; this will tell you what still needs to be transferred. You can also make rsync more verbose, which should help you troubleshoot it. -vvv will give you more information (too much IMO, but it might help you figure out what is causing rsync to hang).
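Reusing the paths from your command, the two variations would look something like this:

rsync -n -av --progress /mnt/disk1/ /mnt/disk3/     # dry run: list what still needs to move
rsync -avvv --progress /mnt/disk1/ /mnt/disk3/      # very verbose, to see where it hangs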

     

     

     

  17. Peter - I suspect that one core is held at (or near) full speed when you are observing the UI because there are polling tasks from the UI requesting status info and that is enough to hold one thread active. This is speculation but seems plausible.

     

    Yes, I did wonder whether the simple act of displaying the info was sufficient to run one core at full speed - the observer effect.

     

Maybe, but that's an awful lot of processing power for what ought to be a rather simple task. I think this might signify some sort of issue with Ivy Bridge as well, but perhaps not exactly the same issue.

  18. The behavior is different. It scales down but for short periods and quickly jumps to the max. frequency on all cores.

     

    I have "Intel® Core™ i3-4130T CPU @ 2.90GHz"

     

I am seeing the same behavior; it seems to be cycling. It's better than it was in b12 (prior to editing syslinux), but it's not as good as the syslinux-edit fix was.

  19. So how do I change the filesystem for a drive that is already in the array?  Do I have to remove it from the array and put it back in?  Just a little hesitant as I've not done this before.

     

    thanks

    david

     

This is a drive that you've copied the data from and are ready to format, correct? If not, do not format.

     

    Here is what I did on 6b12.

     

1) Verify that your disk is empty (see the quick check after these steps) and make sure you know which disk you want to format.

    2) Stop the array

3) On the Main tab's list of disks, click on the disk name (Disk 1, Disk 2, etc.). (Make sure you pick the right one.)

4) On that disk's settings page there should be a drop-down for the file system format; select xfs.

    5) Start the array

6) Hit Format (which will appear below where you start the array).

7) Wait a bit, and it should be formatted and part of the array.
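For step 1, a quick sanity check from the command line (disk2 is just an example; use your own disk number):

find /mnt/disk2 -type f | wc -l     # 0 means no files left, only (at most) empty directories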

     

  20. Will this prevent corruption if a drive red balls during the copy?

     

    I used rsync -av --progress --remove-source-files /mnt/diskX/ /mnt/diskY/ to convert my disks to XFS and one of them red balled during rsync, in my case it corrupted one file.

     

That is interesting behavior that I would not have expected. I am surprised it kept a transferred file that it didn't verify, which the documentation suggested would not happen.

I think the best course of action, then, is to run two passes.

     

    rsync -av --progress /mnt/diskX/ /mnt/diskY/

    This should transfer all the data.

rsync -avc --progress --remove-source-files /mnt/diskX/ /mnt/diskY/

This second pass should not transfer any data, but it'll check the checksums of files at both locations, and then delete the source file if they match. This will take longer and is more I/O-intensive since it has to generate checksums.

     

Of course, now you have the file duplication issue for a little while.

     

     

  21. I just finished my process of converting from rfs to xfs last night using your described process and rsync to move the data. I had no problems.

     

I do want to say that after a bit more reading about rsync, I edited my prior post to add a note about not using --remove-source-files if there are active writes to the directories you're moving.

    I never thought that corruption was possible.  I would have thought rsync would be smart enough to know not to copy / remove the source file if that file was currently being written to.

     

    When I did this,  I disabled mover and my backup program while I was converting.  I just didn't want any files to be possibly fragmented to death by multiple writes to the drive at the same time.

     

I would have thought so too, but the documentation says to be wary of this. This might be a disqualifier, and a reason not to use rsync the way I originally suggested.

     

Perhaps a two-step process is in order. First:

    rsync -avP /mnt/diskX/ /mnt/diskY/

    then

    rsync -avP --remove-source-files /mnt/diskX/ /mnt/diskY/

You could also throw in a checksum comparison on the second pass, if you're concerned, by adding the -c flag. Also, -P is shorthand for --partial --progress, so it covers --progress.
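So the second pass with checksum verification would look something like:

rsync -avP -c --remove-source-files /mnt/diskX/ /mnt/diskY/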

     

     
