
Acps

Members
  • Posts

    187
Everything posted by Acps

  1. I wish, but my array was 25TB; I thought that by running double parity I was protecting my data. What I wasn't expecting was to mess up the array by getting all the disks unassigned from their slots. I wish I had the internet to back it up to the cloud, but uploading 25TB at a speed of 0.75MB/s would take roughly 300+ days, and no one would be able to use the internet during that time lol. I haven't tried to start the array since the disks got unassigned; I haven't even tried to reboot anything. I thought it could be put back together if done correctly, but I don't want to try anything out of fear that the one mistake I could make would cost me all the data. The cache pool data is gone and I'm fine with that; I backed up my appdata weekly, so I'm OK as long as I can recover the array. I did put my HDDs back in the same slots they were in before. Now Disk2, which is sdg, is definitely bad; it's been dead dead for several weeks. I tried 3-4 times to rebuild the array with it to see if the failure was a fluke, but it always ended up failing and being emulated; I had dual parity, though, so I wasn't worried. Parity2 (sde), the drive that looks unformatted, was the other drive I had an issue with, but that was a one-time fluke, because it hasn't had a single issue since it reported a few errors. Disk2 is sdg; Parity2 is sde. Looking at the SMART health, though, it's showing they both have errors, but I think that's because they got unassigned and reassigned, and because I acknowledged the errors after each rebuild attempt to see if they'd occur again. Parity2 (sde) showing as unformatted was confusing me too, but once I assigned it back into its slot in the array, it said it was formatted just fine. As far as I know, my parity was always valid; I did cancel a few parity syncs by hand, but I back up all those logs as well. My syslog always backs up on a clean shutdown; I've got 2 years of them so far lol.
Here's the rest of the syslogs from Nov 26, which is when this all occurred: syslog-20191126-100643.txt syslog-20191126-105428.txt So after all the pics and walls of text, I hope that clarifies what I did, and hopefully I just need to start the array and let the parity sync run. But I was worried because in the first pic I posted, of the drives assigned but not mounted, the two RAID arrows contradict each other. The red text saying I'd lose all my data didn't pop up until after I assigned both parity drives and then assigned Disk1. So I'm afraid to try it until I hear back from someone who has a much better understanding of this than me. Thanks for the help so far, and hopefully someone can confirm I can rebuild just fine. If that's not the case, I'd want to recover as much data as possible from the disks without trying to rebuild the array and possibly writing over the good data that's still there. ~Acps
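For what it's worth, the back-of-the-envelope math on that cloud-backup estimate holds up. A quick sketch, assuming decimal units (1 TB = 10^12 bytes) and a perfectly steady 0.75 MB/s upload with no protocol overhead or retries:

```python
# Rough upload-time estimate for backing up the whole array to the cloud.
array_bytes = 25e12      # 25 TB of array data
upload_rate = 0.75e6     # 0.75 MB/s upload, in bytes per second

seconds = array_bytes / upload_rate
days = seconds / 86400   # 86,400 seconds per day

print(f"{days:.0f} days")  # ~386 days, so "roughly 300+ days" checks out
```

Real-world overhead would only push that number higher, so a full cloud backup really isn't practical on that link.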
  2. So I'll go ahead and say it: I was clicking buttons I didn't understand lol. This is what I did: unraid-syslog-20191126-1931.zip unraid-diagnostics-20191126-1919.zip So one of my 5TB drives is definitely dead dead, but I'm comfortable holding out till next week to maximize Cyber Week. What I was dicking around with was my cache pool. I had 4 SSDs in RAID 0: 2x 250GB and 2x 128GB. I yanked one last night for another PC that I needed access to, and 4 was overkill for this setup as it is. I was having issues, though, rebuilding my btrfs cache pool with the 3 SSDs I have left. Luckily I had my appdata folders backing up every week, so the only data loss I'm going to miss from the cache drives is whatever was still waiting for the mover schedule to kick in. But now I'm afraid to touch it, because I'm fairly certain I haven't totally bricked all my data yet; I thought that, if done correctly, this is still salvageable, but one more screwup and I could be dead in the water, losing roughly 20TB of data, a lot of it irreplaceable. I spent some time trying to figure it out on my own, but I'm worried I could misinterpret something and still ruin all the data. So I wanted to check in with the pros here who can point me in the right direction to salvage my data. Thanks in advance for the input! ~Acps
  3. Just another observation: I reinstalled Windows 10 and Firefox, and I still cannot get past the password login. Any suggestions?
  4. My array is only sitting at about 25TB, plus almost 1TB for the SSD cache pool. 90% of the data on my array is media, i.e. movies/TV shows for my Plex server, plus just about any piece of popular Windows software, as well as hundreds of ISO images of different Windows 10 builds. I use P2P to download all my data on my terribly slow DSL connection at 12Mbps down / 750Kbps up. I don't delete anything, and I probably have several hundred GBs of duplicate data lol. I only have 3 dockers installed: Plex, Deluge, and Krusader. My server is severely underutilized, and I need to look into expanding my dockers/plugins to take advantage of what's out there. My current struggle is that I'm using an unsupported SAS controller that drops disks often, causes read and write errors, and often needs a reboot to keep performance from suffering due to inadequate drivers that fail or disable themselves for no reason. Also, my i5 CPU is getting taxed pretty hard by Plex transcoding. I installed just a basic heatsink at the start, but that no longer seems to be enough, as I'm getting a ton of CPU overheating errors and it's being throttled down to prevent damage. My case is very well ventilated with 8x 120mm and 4x 140mm fans, but when all the drives are spun up doing work it gets fairly warm. I want to know what you guys need arrays well over 100TB for; is it for enterprise-type setups? The cost has to be enormous lol.
  5. So I've had this issue before where I can no longer log in to the Deluge docker with the default password "deluge". I'm using Firefox as my web browser; it works with no problem after a fresh install of Windows 10, but some weeks/months later it no longer lets me log in. I've tried reinstalling the docker, deleting my docker image, and using every other Deluge docker in the app store, but they all end up with the same problem. A workaround I found was to use the Microsoft Edge browser, which lets me log in with the default "deluge" password with no issues, so it's related to Firefox somehow. I just tried uninstalling and reinstalling Firefox, but that didn't work either. Going to post my logs; I'd appreciate any help trying to solve this. Thanks in advance! ~Acps unraid-diagnostics-20190913-1912.zip
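In case it helps anyone who lands here while locked out: as far as I understand it, Deluge 1.x stores the WebUI password in web.conf as a pwd_salt plus a pwd_sha1, where the stored hash is SHA-1 over the salt concatenated with the password. A minimal sketch of that scheme (the salt value below is made up for illustration; check your own web.conf for the real one):

```python
import hashlib

def deluge_web_hash(password: str, salt: str) -> str:
    """Reproduce Deluge 1.x's WebUI password hash: sha1(salt + password)."""
    h = hashlib.sha1()
    h.update(salt.encode("utf-8"))
    h.update(password.encode("utf-8"))
    return h.hexdigest()

# Hypothetical example: compare the result against the pwd_sha1 entry
# in web.conf to see whether the stored password is still "deluge".
example_salt = "0123456789abcdef"  # placeholder salt, not a real one
print(deluge_web_hash("deluge", example_salt))
```

If the stored hash no longer matches the default password, my understanding is that stopping the container and deleting web.conf regenerates it with the default "deluge" password, but back the file up first in case I have that wrong.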
  6. How does this card look for working straight out of the box without any issues with Unraid? https://www.amazon.com/LSI-Logic-9207-8i-Controller-LSI00301/dp/B0085FT2JC
  7. I'm running into another huge problem: getting the error "Unable to write to disk5 - Drive mounted read-only or completely full." Is this due to the controller card or a bad disk? unraid-diagnostics-20190913-1509.zip
  8. So are you saying there are easier cards to set up for Unraid?
  9. Will this work for my x16 slot and 8-HDD array? SAS9211-8I 8-port internal 6Gb SATA+SAS PCIe 2.0 https://www.amazon.com/dp/B002RL8I7M/ref=cm_sw_r_cp_tai_C6SEDbMWCJB0N Leaving the SSDs on the motherboard SATA controller? Sent from my iPad using Tapatalk
  10. Ugh, just when I thought I was heading in the right direction to try to resolve some of my issues, I had both parity drives disabled with many errors. Here are my diagnostics. What should I try to do now? Is this still related to the controller card causing issues, or am I having drive issues now? unraid-diagnostics-20190912-0336.zip
  11. What about using my x16 slot for a controller card, then, and trying to pass through the onboard GPU instead? I don't think I absolutely need a dedicated GPU for VMs anyway; it's more something to tinker around with.
  12. Well, that doesn't surprise me; I've had a ton of issues with my controller since I started back in 2015. I thought I had maybe dodged a bullet when I installed a replacement card and new SATA cables for it. A couple of questions for you: First, could you point me to a new card or a current list of supported controllers? I checked https://wiki.unraid.net/Hardware_Compatibility#PCI_SATA_Controllers but I wasn't sure how current it is, as my SASLP card is listed as a compatible card. I'd like a card that will work in my x4 PCIe slot, provided I don't lose any performance over the x16, so that I can have a dedicated GPU installed later for VMs. Second, I currently have my 4 SSDs installed on my motherboard's onboard SATA controller, which I think is an Intel. Is it safe to leave them like that? Or do I need another SATA controller card, or a card big enough to hold 8 HDDs and 4 SSDs? I think I will end up ditching the 4 cache drives for a 2x 500GB RAID 0 pool in the near future. Thanks in advance for the help again! ~Acps
  13. I noticed this a few weeks ago; however, the dashboard in Unraid only reports the CPU temp in the 80-90 F range, which hardly seems right. I did log in to my motherboard's BIOS and tweak my CPU fan settings to run at max 2000+ RPM 24/7. I have 6x 120mm and 2x 140mm fans set up in my case for cooling; it's a smaller, compact case with almost 8 HDDs and 4 SSDs, yet I never get any high-temp alarms for any of the HDDs. That being said, I did purchase a water cooling unit for my i5, as well as an 850W PSU; I was getting concerned that my original 550W PSU was maybe getting maxed out under a heavy load. I've been hesitant to make any hardware changes, because the last time I went tinkering around inside my case I accidentally set a HDD on fire trying to replace a bad drive. Luckily I had dual parity set up and didn't lose any data.
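One speculative thought on that 80-90 F dashboard reading: if the sensor is actually reporting Celsius and the label is wrong (purely my guess, not something confirmed anywhere in this thread), the same numbers would line up with the throttling. A quick unit-conversion sketch:

```python
def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

def f_to_c(fahrenheit: float) -> float:
    """Convert degrees Fahrenheit to Celsius."""
    return (fahrenheit - 32) * 5 / 9

# Taken at face value, 80-90 F is only ~27-32 C, far too cool to
# explain CPU overheating errors and throttling:
print(f"{f_to_c(85):.1f} C")   # ~29.4 C, perfectly healthy

# But if those numbers were really Celsius, 85 C is 185 F, right in
# the range where an i5 would throttle itself to prevent damage:
print(f"{c_to_f(85):.1f} F")
```

So a mislabeled sensor would reconcile the "cool" dashboard with the overheating alarms; checking the BIOS reading against the dashboard would confirm or rule this out.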
  14. My current parity check has an estimated finish time of almost 10 days, and it's been a day since it started. Overall I've been having a ton of docker issues: crashing, becoming unresponsive. I've been thinking some piece of hardware is to blame, possibly too small a PSU, insufficient CPU cooling, or possibly a faulty RAID controller. While my array has grown to nearly 25TB+, with a cache pool of about 800GB, I've uninstalled everything but Plex, Deluge, and Krusader trying to troubleshoot, and it's frustrating as I struggle to pin down what the underlying issue is. I attached my current diagnostics, which may not be much, as the system was just rebooted yesterday. At some point I set up my syslogs to be backed up after each reboot, but I'm not sure where they are; they probably have much more context for the overall issues I've been having. If anyone could take a look at my logs and maybe point me in a direction to get started, I'd really appreciate it. Thanks in advance! ~Acps unraid-diagnostics-20190903-2033.zip
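As a sanity check on that 10-day estimate: a parity check reads every drive end to end, so its duration is roughly the parity drive's size divided by the sustained read speed. A sketch assuming a 5 TB parity drive (the drive size mentioned elsewhere in this thread; the "healthy" speed figure is a rough rule of thumb for modern HDDs, not a measured value):

```python
parity_bytes = 5e12           # assume a 5 TB parity drive
est_seconds = 10 * 86400      # the ~10-day estimate, in seconds

# What average read speed does a 10-day check imply?
implied_rate = parity_bytes / est_seconds / 1e6   # MB/s
print(f"{implied_rate:.1f} MB/s")                 # ~5.8 MB/s

# A healthy modern HDD sustains on the order of 100+ MB/s sequential,
# which would finish a 5 TB pass in well under a day:
healthy_rate = 100e6                              # bytes per second
print(f"{parity_bytes / healthy_rate / 3600:.1f} hours")
```

An implied ~6 MB/s versus an expected 100+ MB/s is a huge gap, which is consistent with the suspicion that the controller (rather than the drives themselves) is dragging everything down.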
  15. Well, just today I ran into some problems with my Parity 2 drive; it's disabled now, and it had over 4801 errors after my most recent parity check. So I might have some other serious issues going on. Does anyone have any direction for me to troubleshoot what exactly is going on? Here are my latest diagnostic reports. Ty in advance! ~Acps P.S. I attached my Enhanced Log Viewer results too, if anyone has a few minutes to look them over for me please. Ty guys! unraid-diagnostics-20190603-2247.zip unraidenhancedlogviewer-02JUN19.txt
  16. Is anyone able to take a few minutes to peek at my logs for me? It's extremely inconvenient, and I don't know where to start to try and fix it. Thanks in advance! ~Acps
  17. Here is my run command, I think: The issue didn't start until I dicked up my cache pool when I installed 2 more SSDs a few weeks ago. I lost all my appdata and had to reinstall my dockers. I made a new docker image file too. krusaderdockerlog.zip unraid-diagnostics-20190531-1425.zip
  18. I'm still having this issue where Krusader acts like it's a fresh install, with several prompts to set it up for the first time. It happens every time I start the docker. Can anyone take a look at my log to see if there are any glaring issues? Ty in advance ~Acps
  19. I had to reinstall my cache pool of SSDs this week, and I didn't have a backup saved since it was set up as RAID 0. I reinstalled Krusader; however, every time I launch it, it prompts me with a bunch of windows thinking it's being set up for the first time. I'm not sure what to try to fix it, so I attached my logs if you could take a peek. Thanks in advance! ~Acps unraid-diagnostics-20190506-2052-06MAY19.zip krusader-binhex-logs-06MAY19.zip
  20. I was able to add them to my pool as RAID 0, using all 4 SSDs in the cache pool. Thank you for the help!
  21. I only had like 4-5 dockers installed and no VMs at the moment. I was trying to add 2 more SSDs to my cache pool when it got hung up. I unassigned all 4 SSD drives and precleared them, then assigned one to the cache to set it up. Now I'm working on adding one at a time to the cache pool to see if that will work; then maybe I'll try to restore a backup of my appdata!
  22. OK, well, I ended up just starting over. I did have an appdata backup, but it's from Feb 18, which is better than nothing!
  23. I for some reason thought I could just plug-and-play 2 more SSDs into my cache pool for storage. I was able to get them installed and mounted; however, it didn't go as planned: one of my original SSDs now says it's unmountable with no file system present, while the other still says it's part of a pool. I even tried unplugging the 2 new drives to see if I could boot it up as before, but that didn't help. Can anyone take a look at my logs and see if there is something I did wrong? Thanks in advance! ~Acps unraid-diagnostics-20190502-2237.zip