JonathanM

Moderators
  • Posts: 16148
  • Days Won: 65

Posts posted by JonathanM

  1. Long story short, I need to swap a data disk.  What's the easiest/safest way in unRAID6?

     

    The existing drive is working fine, and parity is running and synced.

    itimpi gave you the easiest and quickest way. The safest would be to preclear the new drive(s) and add them to the array, if you have the slots available. Parity is maintained the entire time during an add operation, so recovery is easier if one of your existing data drives fails. After the drives are added, rsync the data with verification from disk to disk; once all the data is duplicated, pull the old drives and rebuild parity with a new config on only the new drives. Parity is maintained throughout the entire operation until the new config, at which point you have verified copies of all the data on the old disks anyway. Slightly longer and more complicated, but safer. It also requires extra drive slots to be available, which may be an issue.
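
    A minimal sketch of the rsync-with-verification step, assuming the data currently lives on disk1 and the new drive was added as disk5 (both mount points are examples, not from the original post):

    rsync -avh /mnt/disk1/ /mnt/disk5/              # copy disk to disk, preserving attributes
    rsync -avhc --dry-run /mnt/disk1/ /mnt/disk5/   # verification pass: checksums both sides, should report nothing left to send

    Only pull the old drives once that second pass comes back clean.
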
  2. Posted from another thread; Tom definitely has this feature in mind, at least the part about bringing snap into the limetech-supported feature list.

     

    2. The cache pool - this is one or more devices organized as a btrfs "raid1" pool.  There's lots of information out there on btrfs vs. zfs.  No doubt zfs is a more mature file system, but the linux community appears highly motivated (especially lately) to make this file system absolutely robust, and most would say it's destined to be the file system of choice for linux moving forward.

     

    Like data disks, the cache disk (single device pool) or cache pool can be exported on the network.  At this time we export "all or nothing" but there are plans to let you create subvolumes and export those individually as well.

     

    The cache disk/pool also supports a unique feature: we are able to "cache" creation of new objects there, and then later move them off cache storage and onto the array.  The main purpose for doing this is to speed up write performance when you need it: at the time new files are being written to the server.

     

    3. Ad hoc devices - these are devices not in the array or pool.  Sometimes they are referred to as "snap" devices (shared non-array partition).  Officially we don't support the use of snap devices but people do make use of them.  Eventually we will formalize this storage type though, especially for use by virtual machines.
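
    For anyone unfamiliar with the subvolumes mentioned above, this is the kind of generic btrfs command the per-subvolume export would build on (the /mnt/cache path and the appdata name are just examples, nothing unRAID-specific):

    btrfs subvolume create /mnt/cache/appdata   # carve out a subvolume inside the existing cache pool
    btrfs subvolume list /mnt/cache             # show the subvolumes that exist in the pool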

     

    OP, can you expand more on the use case and how you see this part of your request working?

    If you can have one cache pool, why not two, or three, synced to different folders, obviously.

    Why do you want to subdivide the cache pool?

  3. Just to clarify, multiple cache pools as the traditional "cache" doesn't really compute; the original intent of the cache drive was to provide a temporary fast write location for data intended to end up on the array.

     

    What you are asking for already exists as a third-party plugin, snap. Alternatively, it can be done at the command line with scripting. I think this feature request would be better stated as a need for a limetech-supported and maintained app drive(s) or pool(s) that wouldn't directly participate in array writes. Now that we have baked-in virtualization and more app support, I think it's a good idea to separate the ideas of cache and apps, possibly even with a backup routine. It would be nice to have all of that supported in the main GUI.
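
    A rough sketch of the command-line route, assuming a spare partition at /dev/sdf1 and an /mnt/apps mount point (both hypothetical), and going from memory that unRAID's Samba config pulls in /boot/config/smb-extra.conf:

    mkdir -p /mnt/apps
    mount /dev/sdf1 /mnt/apps      # bring the non-array device up outside the parity-protected array
    # To share it, add a stanza like the following to /boot/config/smb-extra.conf and restart Samba:
    #   [apps]
    #       path = /mnt/apps
    #       read only = no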

  4. No, right now the upgrade is a manual one, but I likely will link the v1 updater to the v2 plugins soon so it's a one button update for everyone.

     

    The v2 plugins also exist in a different github folder; it's more consolidated that way.

    Before you do that, would it be possible to code the update button so it excludes those of us still on earlier 6.0 beta versions? I think a version check with exclusions for 6.0 beta 1-10 would be sufficient (rough sketch at the end of this post).

     

    I'm perfectly OK with you stopping support for 1.x plugins; I can update software versions myself if necessary, but please don't force the update.

     

    Unless the landscape changes drastically, I'm planning on sticking with 6b6 until 6.0 goes final.
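
    Something along these lines is what I mean by a version check; it assumes the updater can read /etc/unraid-version, which normally contains a line like version="6.0-beta6" (exact file name and beta strings are from memory, so treat this as illustrative only):

    source /etc/unraid-version            # sets $version, e.g. "6.0-beta6"
    case "$version" in
        6.0-beta[1-9]|6.0-beta10)
            echo "Old beta detected ($version), skipping the forced v2 plugin update."
            exit 0
            ;;
    esac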

  5. it's extremely annoying that the recommendation I'm given is to update my password in every single location that I use google services b/c unraid can't handle a '#' sign... 
    I agree with your premise; however, I think you may be tilting at windmills. I would recommend setting up a disposable gmail account with a password like unraidssmtppasswordhandlingsucks or something like that, and using its forwarding and filtering rules to accomplish what you need.
  6. Parity is the keystone to unRAID, and with a potentially failing drive you're really asking for trouble.

    Logically, since I can lose any single drive and recover, but losing 2 or more drives means losing the data on all of the dead drives, I would prefer the parity drive to be the one that fails, because no data is actually lost with it. If you lose 2 data drives and the parity is still OK, you gain nothing, because you still have nothing from the 2 data drives. Unraid requires ALL remaining drives to be healthy to recover from 1 failure.

     

    I would argue that parity is the least important of the drives for the sake of data redundancy, and the most important for performance, as each write must touch the parity drive but reads only involve the individual data drive being read.
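
    A minimal sketch of the XOR relationship behind single parity, with made-up byte values just to illustrate why any one missing drive can be recomputed from all the others:

    A=$(( 16#A5 ))                                   # a byte from data disk 1
    B=$(( 16#3C ))                                   # the corresponding byte from data disk 2
    P=$(( A ^ B ))                                   # the matching parity byte
    printf 'parity byte      : %02X\n' "$P"
    printf 'recovered disk 1 : %02X\n' $(( P ^ B ))  # XOR parity with the surviving data byte gives A back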

  7. This indeed sounds promising, but I am silently wondering how you can be sure it was 14a that solved the issue on your test system if you were actually unsure previously how you reproduced the issue with 14? :)

    Not so silently wondering. :)

    What changed from 14 to 14a that fixed the issue? If you don't know, how will you be able to keep the problem from coming back in future releases?

    When you have a few hours and the server will be unused, you can click on Run Disk Selftest and select the long test (which will take a long time, i.e. hours).

    When it's complete, click on the Disk self-test log and the Disk Error log (if any) and post the results.

    As an aside, has the coding been changed to keep a long test from being interrupted by a spindown? Since the test is now part of the main interface and not an addon, it would probably be fairly trivial to change.
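
    In the meantime, the manual equivalent via smartctl looks roughly like this (the drive name is just an example); temporarily setting that disk's spin-down delay to "never" in the disk settings is the simplest way I know of to keep the test from being cut short:

    smartctl -t long /dev/sdb       # kick off the extended (long) self-test; it runs inside the drive
    smartctl -l selftest /dev/sdb   # check progress, and the result log once it finishes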
  9. Just switched from Windows Server to Unraid. The dashboard is showing my PARITY drive as Faulty? But it has been working all these years with no problem. What does this mean?

    Short answer, parity has not been fully created or checked yet, so you aren't protected against a drive failure yet.

     

    The red SMART indicators may show that both the parity drive and disk 1 have issues, but until you examine the SMART reports, and possibly post them here for evaluation, nothing can be said definitively about their condition.

     

    Lastly, just because a drive has been working for years doesn't mean it isn't about to fail. All hard drives eventually fail; the important thing is to accurately predict when.
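
    One way to capture the full SMART reports for posting, saved to the flash drive so they are easy to grab afterwards (the device names are examples; check which letters your parity drive and disk 1 actually have):

    smartctl -a /dev/sdb > /boot/smart_parity.txt   # full SMART report for the parity drive
    smartctl -a /dev/sdc > /boot/smart_disk1.txt    # and the same for disk 1
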

  10. So now, my question is: why is this disk readable in a new unraid server, but not on any linux computer? I want to make sure that my other disks would be readable outside of the array; this is supposed to be a feature of unraid, and in my opinion, an important one!

    Did you mount it internally in the other computer, or use a USB connection? Some USB adapters munge existing data by doing internal translation in the SATA-USB controller and require a fresh format to work, or they just plain don't work correctly with drives over a certain size. Also, not all linux distributions include reiserfs support out of the box; it has to be installed and configured with the applicable package manager.
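
    A quick sketch of what to check on a typical desktop distribution (the package name below is the Debian/Ubuntu one, and the device path is only an example):

    grep reiserfs /proc/filesystems || modprobe reiserfs   # is kernel reiserfs support loaded?
    apt-get install reiserfsprogs                          # user-space reiserfs tools on Debian/Ubuntu
    mkdir -p /mnt/test
    mount -t reiserfs -o ro /dev/sdb1 /mnt/test            # read-only mount of the unRAID data partition
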
  11. I was able to do this by setting up my modem as a dumb switch and having my Netgear router do the PPPoE dialing, and then port forwarding gets easy as well, since you only have to forward the port on your router.

     

    Now I only need to figure out how I can add some security to the server to prevent intrusion.

    You have the cart before the horse. Opening up unraid and then trying to secure it after the fact pretty much guarantees a hacked network.
  12. What does it mean in unmenu when it says installed but not downloaded? Does that mean unraid has it and unmenu is just seeing it?

    Yes. There is a line in the package.conf file that describes what file and possibly version to match, and if it matches, it shows installed.

    In your specific example, these are the relevant lines. The package file doesn't exist in the package folder on your flash drive, so it shows as not downloaded. The installed test matches what is already in the /usr/lib folder, so it shows as installed.

    PACKAGE_URL http://slackware.cs.utah.edu/pub/slackware/slackware-12.1/slackware/a/cxxlibs-6.0.9-i486-1.tgz
    PACKAGE_FILE cxxlibs-6.0.9-i486-4.tgz
    PACKAGE_INSTALLED /usr/lib/libstdc++.so.6
    PACKAGE_MD5 ad8c0c5789581a947fd0a387c2f5be8a
    PACKAGE_DEPENDENCIES none
    PACKAGE_INSTALLATION test -f /usr/lib/libstdc++.so.6 && echo "/usr/lib/libstdc++.so.6 already exists. Package not installed."
    PACKAGE_INSTALLATION test ! -f /usr/lib/libstdc++.so.6 && installpkg cxxlibs-6.0.9-i486-4.tgz
    PACKAGE_VERSION_TEST ls --time-style=long-iso -l /usr/lib/libstdc++.so.6 | awk '{print $10}'
    PACKAGE_VERSION_STRING libstdc++.so.6.0.9
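
    In shell terms, the two states boil down to checks roughly like these (paraphrasing how unmenu reads the lines above, not its actual code; /boot/packages is, as far as I recall, where unmenu keeps downloaded packages):

    test -f /boot/packages/cxxlibs-6.0.9-i486-4.tgz || echo "not downloaded"
    test -f /usr/lib/libstdc++.so.6 && echo "installed"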
    

  13. Hello, so I set up the tower from my desktop PC, which is wired to the server. Now I'm on my laptop, which is on wifi, and I cannot log in from it. Does that mean I can log into the tower from the website only if I log in from my PC first?

    What are the IP addresses of the unraid box, your desktop PC, and your laptop? Are you connecting to a guest wifi on your router?
  14. well crap.

     

    It was reiserfs and was redballed.

    I removed the drive, installed a new precleared disk, formatted it (xfs), and started the rebuild.

    reiserfs is extremely resilient; you may still be able to recover the data from the new disk after the rebuild is complete. DO NOT WRITE ANYTHING TO DISK1.

    Rough outline of what needs to happen to start trying to recover disk 1's data:

    1. Finish rebuild

    2. Restart array in maintenance mode

    3. Run reiserfsck --check /dev/md1

    4. Post output of that command here on the forum and solicit further advice
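
    For step 3, something like this captures the output to the flash drive so it's easy to paste into the forum (assuming disk1 maps to /dev/md1, as it normally does, and the array is started in maintenance mode):

    reiserfsck --check /dev/md1 2>&1 | tee /boot/reiserfsck_disk1.txt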

  15. So, I haven't had a failed drive in a year or two... rebooted my unraid and disk1 won't come back to life.

     

    Swapped in a brand new disk that I had precleared, set it as XFS, and it started to rebuild.

     

    Should I see data on /mnt/disk1 as it's rebuilding? When I telnet into unraid and look at /mnt/disk1, there is nothing there.

     

    thanks

    Please list exactly what you did. What format was disk1 before it failed? Rebuilding puts back the original file system exactly as it was; you can't switch file systems with the data intact. If you formatted the new drive, you erased it.

    http://lime-technology.com/forum/index.php?topic=37490.0

  16. Thanks, will do; just waiting for my key before I can.

    Actually, you could use the free version and just assign a couple of drives each time. Just be very careful to NEVER populate the parity slot. You could take the opportunity to browse the disks and label them appropriately; then, when your key arrives, you will be ready to assign everything at once.