raqisasim

  1. Upgraded Unraid to latest, not anything else. No drives have been moved in any way, yet.
  2. Hi. I have to apologize on multiple counts. I upgraded, shut down the server, and left home to spend time with family, so I won't be able to add the diag file until I get back Monday. If you want to ignore the rest of this reply until I can add the file, I understand. I say this because I am clearly causing frustrations I do not mean to, by not explaining fully and by using incorrect terms instead of fully documenting my idea. Again, my apologies for that, and I will upload the requested diagnostics ASAP upon my return home. That said, I ask for your patience as I attempt to clarify my situation and goals:
1) I have an Array with one 8TB parity drive, currently, and a mix of 4TB and 8TB data drives, themselves a mix of SAS and SATA.
2) I have three 8TB SAS drives, not installed in anything currently, that I would like to use in this Array, in the following ways:
2a) Use two of the 8TB drives to replace two of my existing 4TB drives, and
2b) Use the last 8TB drive to become a 2nd parity drive; it will not replace any existing drive.
3) I have one more SAS connector for a drive, so I cannot connect two more SAS drives to the server/Array. It will, obviously, need to become the connector for the 2nd parity.
To clarify why I referred to shrinking the array for the above process: as I said in my original post and above, I used that "shrink array" process, from the "Shrink array" wiki page, to do this a few years ago, successfully. That time, I "shrank" the Array by three 2TB SAS data drives, physically removing said drives, then adding three 4TB SAS drives into the same Array. I did it in this fashion because that wiki page says explicitly that "This method is best if you are removing more than one drive," which is my plan, per above, and also fits my connection limitations. That is why I linked to that page and talked about shrinking the array.
The idea I have this time is to do 2a) above by "shrinking" out the two 4TB SAS drives currently attached and in the Array, physically removing those drives, and then adding the two new 8TB drives, along with, of course, a third 8TB drive for the 2nd parity, as I mention above. I'm trying to ask whether there's a better approach than using that wiki page, especially given the age of the page and the goal of adding the 2nd parity. Given that your guidance leverages "New Config" once the new drives are added, which aligns with Step 4 in the wiki documentation, I suspect, at the end of the day, it is still the right procedure, and this confusion was due to my clumsy words more than anything technical. I hope this clarifies my goals without adding more frustration to the discussion. Thank you, all, for your time and attention, and again I shall add the diagnostics as soon as possible.
  3. New parity drive is same size as current, so it sounds like I'm just going to add everything and then rebuild, and that's fair enough. Thanks for the information!
  4. Yep -- sorry, I should have mentioned I clocked that in my research and it's part of why I will upgrade before going further.
  5. Hi all! I've had three 8TB drives sitting around for a while, and I would like to finally integrate them into my UnRaid Array as safely as possible, while not taking more steps than needed. Since I'm about out of SAS ports (I have onboard+card), I'd rather pull two of my 4TB drives, replace them with the 8TB drives, and add the third 8TB as a 2nd parity. I've done the process at https://wiki.unraid.net/Shrink_array#For_unRAID_v6.2_and_later before, years ago. I've already used unBalance to move data off the drives designated for the swap. So, my questions are:
Is that "shrink array" process above still the best for swapping out data drives?
Is there a good process for doing the above and adding the 2nd parity drive at the same time? Or do I need to do them one at a time?
If I do need to add the 2nd parity separately from swapping the data drives, should I do the 2nd parity first?
I am OK with taking my time to do it "right," and just wanted to know if there's a better way that I'm just not aware of. Thanks for the time! NOTE: Yes, I'm a few revisions behind; unBalance has, of course, taken time to move the files, and before that I got busy with work. I plan to upgrade to latest before doing anything further.
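One hedged sketch of a sanity check before pulling drives: count the files left on each disk share after unBalance finishes. The function name is mine, and `/mnt/disk3` and `/mnt/disk5` are placeholder disk names, not from the post:

```shell
# count_files DIR: print the number of regular files under DIR.
count_files() {
    find "$1" -type f 2>/dev/null | wc -l | tr -d ' '
}

# Hypothetical usage on the Unraid console; each disk slated for
# removal should report 0 once unBalance has moved everything off:
#   count_files /mnt/disk3
#   count_files /mnt/disk5
```

A zero count is only a quick reassurance, not a substitute for the wiki's shrink procedure itself.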
  6. Hi! As I mentioned, my ability to support debugging is limited, especially if you're 100% certain you followed all the steps exactly. The only thing I can recommend is to check your workers GUI and see if there are even workers to spin up, as well as confirm that you asked for said workers to be spun up via the docker parameters.
  7. First -- I'm no expert; I'm just a newbie around this, too, and I cannot promise I can support deep debugging. But always try looking at logs first: each Docker container has a log you can inspect for troubleshooting. Second, I noted in Step 7 a point where I did fail to see the GUI, so I'd triple-check that Elasticsearch actually is up and functional, and carefully re-check the other steps. If that doesn't work, restart from ground zero (none of the relevant Dockers installed, appdata for those Dockers completely deleted as per Step 2) to ensure you've got the right config from the jump. In short, this is all a bit complex, and in fact, since I also haven't gotten re-scanning to work per Surgikill's comment above, I've set it all aside for now, myself. But hopefully this'll help you!
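On the "look at logs first" point, a small filter can speed up triage of a noisy container log. A sketch under the assumption that your container is named `Elasticsearch` (substitute your own container name); the function name is mine:

```shell
# log_errors: keep only lines that look like errors from a container
# log piped in on stdin.
log_errors() {
    grep -iE 'error|exception|fatal'
}

# Hypothetical usage from the Unraid terminal:
#   docker logs Elasticsearch 2>&1 | log_errors
```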
  8. Offering to back up Flash prior to upgrade is a really good idea! In fact, I'd forgotten that's another way to get a flash backup; I just did it.
  9. Pretty easy, it turns out. I had already made a tarball of my config folder. So, when the new USB stick got created and was confirmed to boot, I renamed its config folder to config.old and copied over my original one from that tarball. After that, I put in the request for a new key and started the Array. So far, it looks good; I just tried Dockers like Plex and my backup app successfully. One note: I did download 6.7.2 onto this new USB stick. I'll re-try the upgrade to 6.8 once I have a chance to back up the new stick via the backup plugin. If I were to summarize things to check around these issues:
Run a thorough disk check against your current USB stick
See if the current USB stick has "FSCK" files; if so, it's likely failing
If trying a new stick, check your anti-virus/anti-malware apps to ensure they won't block the MBR changes required for the new stick to boot
And in general, back up your USB drive with the CA Backup / Restore Appdata plugin, just in case the USB has truly failed.
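The tarball round-trip described above can be sketched as two small helpers. This is a hedged sketch: the function names are mine, paths are parameters because `/boot` (where Unraid mounts the flash drive) only exists on the server:

```shell
# backup_config PARENT OUT: archive PARENT/config into the tarball OUT.
backup_config() {
    tar -czf "$2" -C "$1" config
}

# restore_config OUT PARENT: set aside PARENT/config as config.old,
# then unpack the saved config from the tarball OUT.
restore_config() {
    mv "$2/config" "$2/config.old"
    tar -xzf "$1" -C "$2"
}

# Hypothetical usage:
#   backup_config /boot /mnt/backups/config.tar.gz
#   ...later, on the newly created stick:
#   restore_config /mnt/backups/config.tar.gz /boot
```

After restoring, you still need to request a new key tied to the new stick's GUID, as described above.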
  10. Quick note: the above steps from johnnie.black have worked to create a new, bootable USB stick, so thank you! For anyone else running into the "USB stick won't boot" issue: the culprit, for me, appears to be my desktop's Acronis Active Protection, which stops modifications of MBRs -- including on my USB stick. So check what anti-virus/malware software you're running. In more detail: for whatever reason, Acronis' pop-up never appeared when I ran make_bootable via double-click (which I did via "Run as Admin"). However, since johnnie.black's steps already had me in an Admin PowerShell to run the diskpart commands, I used that same window to run the unzip, and then make_bootable -- and on that run, I got the pop-up. To fix: I temporarily turned off Protection, reran make_bootable, and now I have a stick that boots! Next I will move over my backed-up config from my other USB stick and see how it goes.
  11. Hi all -- I just ran the 6.7.2 --> 6.8 upgrade, and need some help. After the upgrade, I rebooted and got the following error: Kernel panic: not syncing: VFS: Unable to mount root fs on unknown-block(0,0) It's the same as in this post. I have the USB drive in my desktop, where I've already backed up a recent diagnostic log (available if anyone thinks it's useful), and am currently backing up the config folder just in case (I have also been running backups in my Unraid). So, first problem: how do I get past the above issue -- any ideas? EDIT: Also -- I do see a "FSCK0000.REC" file at the root of the USB drive, so corruption of some sort seems likely. I'll add running a chkdsk to my TODOs on this. Second issue: noticing in forum posts that this error seems related to USB stick failure, I downloaded the Unraid Flash Creator and tried creating a 6.8 stick with two different drives (one unused metal USB 2.0, the other a new-ish 3.0) to boot my server. Both report back "this not a bootable disk". I've:
made no changes to the BIOS, just confirmed it does see the sticks and has them at the top of the boot chain,
tried rebuilding with "Allow UEFI Boot" on, and
also tried the make_bootable.bat utility.
None of these allow me to boot my server. It should be noted that I used a USB 2.0 port to create the sticks, and I plug them into USB 2.0 ports on my server. So, second problem: how do I boot any USB stick other than the one that may have failed? After this post and copying off config, I'll try making the stick on my Linux laptop with the Creator, and manually, and with older versions of Unraid. Help?
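Since `FSCK*.REC` files came up as a corruption signal, here is a small check you can run against the flash mount. A sketch; the function name is mine, and `/boot` is where Unraid mounts the stick:

```shell
# has_fsck_files DIR: succeed if FAT repair artifacts (FSCK*.REC)
# exist at the top level of DIR -- a hint the filesystem was damaged
# and the stick may be failing.
has_fsck_files() {
    ls "$1"/FSCK*.REC >/dev/null 2>&1
}

# Hypothetical usage:
#   has_fsck_files /boot && echo "flash drive may be failing"
```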
  12. I followed the above and it worked a charm. Thanks for setting me straight!
  13. Hi folx -- looking for recommendations on my approach for swapping out four 2TB drives with four 4TB drives. Details: I have four 2TB drives in my unRaid 6.7.2 system. They are fine; it's just that I now have four 4TB drives I can replace them with, and someone to give these older drives to. Since I'm almost out of SAS ports and power connectors on my server, I'd like to replace them, if possible, by the following steps. In anticipation, I have already moved all data off the 2TB drives via unBalance; the next steps would be to:
remove the four 2TB drives from the array,
re-start the array to get parity to completely forget those drives,
shut down the server, pull the four 2TB drives, and install the 4TB ones, and
start up the OS to put those 4TB drives into the array, possibly with a pre-clear (the 4TB drives are used).
Does this sound reasonable? I based it off the "Replacing Multiple Data Drives with a Single Larger Drive" page in the wiki -- which is not what I'm doing; it just gave me the key idea that the above should work. Thoughts?
  14. So, after reviewing this thread, I followed OFark's steps, and finally got it working! (i.e., the Diskover GUI loads and workers are running against my disks.) For those who are deeply confused, I wanted to expand on that worthy's notes, as there are a couple of points where I got stuck, and I suspect others have been/are in similar situations:
Step 1: Check if you have the Redis, ElasticSearch, and/or Diskover Docker containers already installed.
Step 2: If you do, note where your appdata folder is hosted for each, then remove the containers, then remove the appdata (removing the appdata is crucial! An older config file really tripped me up at one point). You'll likely need to SSH in, or use the UnRAID Terminal in the GUI, to cd to the appdata folder location and rm -rf them.
Step 3: This may not be required, however -- to ensure the OS setting for ElasticSearch was set before I installed, I followed the steps at this comment that start at "You must follow these steps if using Elasticsearch 5 or above". IMPORTANT NOTE: I did NOT install that ElasticSearch container, just used the script instructions. For the container I did install, see Step 5, below.
Step 4: Install the Redis container via Community Apps. I used the copy of the official Redis Docker in jj9987's Repository (Author and DockerHub are both "Redis"). It should not need any special config, unless you already have a container running on port 6379.
Step 5: To install ElasticSearch version 5, go to https://github.com/OFark/docker-templates/ and follow the steps in the README on that page; it'll have you set up an additional Docker repository first. Note that, so far as I can tell, I did not need to do anything in OFark's Step 6 on that page, around adding values in Advanced View. If that turns out to be a mistake, I'll update this.
Step 6: At this point, I recommend checking your Redis and ElasticSearch 5 container logs, just to ensure there are no weird errors or the like.
Step 7: If it all looks good, install the Diskover container via Community Apps.
To avoid the issues I ran into, ensure you have all the ports open to use (edit the template if not), and that you provide the right IP address and ports for both your Redis and ElasticSearch 5 containers. Also make sure you provide RUN_ON_START=true, and set the Elasticsearch user and password -- if you don't give the latter, you'll get no Diskover GUI and be confused, like me. Once Diskover starts, give it a minute or so, then go to its Workers GUI (usually at port 4040). You should see a set of workers starting to run thru your disks and pull info. From there, you should be able to go to the Diskover main GUI and see some data, eventually! As I wrap this up, so did my scan; now to pass thru some parameters to get duplication detection and tagging working (I hope!). I hope this helps -- good luck, everyone! And thanks to Linuxserver for the container, and OFark for the templates and guidance that helped immensely in setting it up for UnRAID!
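For reference, the settings called out above might look like this as a single docker run. This is an illustrative sketch, not the exact linuxserver template: the image name and RUN_ON_START come from the post, but the environment variable names, IPs, ports, and credentials are placeholder assumptions you must match to your own containers.

```shell
# Placeholder values throughout -- substitute your server's actual
# Redis/Elasticsearch addresses and real credentials before running.
docker run -d --name=diskover \
  -e REDIS_HOST=192.168.1.100 -e REDIS_PORT=6379 \
  -e ES_HOST=192.168.1.100 -e ES_PORT=9200 \
  -e ES_USER=elastic -e ES_PASS=changeme \
  -e RUN_ON_START=true \
  -p 8080:80 -p 4040:4040 \
  linuxserver/diskover
```

On Unraid you would normally set these same values through the container template's fields rather than a raw docker run.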