grphx

Everything posted by grphx

  1. Sorry to bump an old thread, but did you by chance figure out what was wrong? I'm having the same issue with my R310. Currently updating all the firmware now... I've already verified my media and memory are healthy.
  2. My old NAS is a homemade AMD-based system with 6 drives (maxed out). My new NAS is running a trial license so I could make sure it would work okay. The new NAS is also Intel-based and obviously has completely different hardware. Am I going to run into any issues if I simply move my USB from my old NAS to my new one and also move the drives over?
  3. Been using Unraid for several years now, and I recently got some new hardware with the ability to hold more drives and more NICs. This new NAS is running a trial license of Unraid and is Intel-based. What's the best way to move my drives and config from my old NAS to my new one? One potential issue I see is that the old NAS is AMD-based and the new one is Intel. Am I going to botch my config when Unraid tries to load and the entire system is different? Is there a way to transfer my current license from one USB (old NAS) to another USB (on the new NAS, currently running a trial)? Is that even what I want to do?
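Whatever the right license path turns out to be, it's cheap insurance to back up the flash drive's config folder before moving the stick between machines. A minimal sketch, assuming the usual /boot mount point; the destination path is just an example:

```
# Unraid keeps all array/share/license config on the flash drive at /boot.
# Copy the config folder somewhere safe before moving the stick.
mkdir -p /mnt/user/backups/flash
cp -r /boot/config /mnt/user/backups/flash/config-$(date +%Y%m%d)
```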
  4. I think that's what I need to do; I'm just not sure where the specific logs are saved.
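For reference, Unraid writes its live log to /var/log/syslog, and /var/log sits in RAM, so it's wiped on every reboot. A quick sketch of following it and snapshotting it to the flash drive:

```
# Follow the live system log:
tail -f /var/log/syslog

# /var/log is a RAM disk on Unraid, so save a copy to the flash
# drive before rebooting if you need to keep it:
cp /var/log/syslog /boot/syslog-$(date +%Y%m%d-%H%M).txt
```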
  5. From 2am to 5am every night something is using a ton of bandwidth... it looks like it's maxing out my upstream. Is there some way to figure out what is using so much data? I'm usually up around that time anyway, so if I have to sit in front of some real-time app, I can.
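One real-time option, assuming the tools are installed (e.g. via the NerdPack plugin) and that eth0 is the active interface, is a sketch like:

```
# Live per-connection bandwidth on the given interface:
iftop -i eth0

# Live per-process bandwidth -- useful for pinning the traffic on a
# specific daemon or container:
nethogs eth0
```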
  6. I am wondering if the rebuild (or whatever it's called) happens quicker if I have fewer/no dockers and VMs running. At first they all started when I started the array to begin the parity rebuild, but since then I've shut most of them down. Does it actually help, or should I just leave everything running? The ETA to complete the rebuild fluctuates a bit but basically says it will be done by late tonight, and I started it late last night.
  7. So basically:
     1. Shut down the computer and connect the 10TB.
     2. Turn on the computer and let Unraid load. Before starting the array, there is an option to set the new 10TB as the parity drive.
     3. Start the array, and Unraid will start rebuilding parity onto the new 10TB drive.
     4. Once parity has been rebuilt onto the 10TB drive, format the old 8TB parity drive and add it to the array.
     Then I have an additional 8TB of storage.
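If you'd rather watch the rebuild from a shell than from the web GUI, Unraid's md driver reports array status under /proc/mdstat (in its own format, not the stock Linux one). A small sketch:

```
# Refresh the array/rebuild status every 5 seconds:
watch -n 5 cat /proc/mdstat
```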
  8. I guess I don't know what all happens when you do a New Config. At first it sounds data-destructive, but I'm sure I'm just not understanding it correctly.
  9. I currently have three 4TB drives and one 8TB, and I want to add a 10TB HDD. It goes without saying that the 8TB is my parity drive, and when I add the 10TB drive it will become my new parity drive. What's the proper way to add the disk?
  10. I was about to use rsync via the command line, but I was wondering if there is a docker/plugin with a pretty little interface that lets me select certain shares and tell it to back up to my Google Drive. Thanks in advance!
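One common route is rclone, which speaks Google Drive natively and is available for Unraid as a plugin or docker. A minimal sketch, where "gdrive" is whatever you named the remote during setup and the share path is just an example:

```
# One-time interactive setup: create a Google Drive remote (e.g. "gdrive"):
rclone config

# Preview what a sync would do, then run it for real:
rclone sync /mnt/user/Documents gdrive:unraid-backup/Documents --dry-run
rclone sync /mnt/user/Documents gdrive:unraid-backup/Documents
```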
  11. I had 4x4TB drives and I got a new 8TB drive. Obviously, if I want to add this to my system, I have to replace my parity drive. So I pulled my 4TB parity drive out, put my 8TB drive in, assigned it to parity, and the rebuild started. It's within a couple of hours of being complete (so it says), and I was just wondering how I can still use the other 4TB drive that I was using for parity. I'm assuming that once the parity rebuild completes I can shut down my server, re-add the old 4TB drive, and assign it to a slot. Doing this will erase all the info on that disk, but that's okay, right, since the parity rebuild completed? Once that happens, Unraid will start storing data on the 4TB again and thus increase my total storage from 12TB to 16TB?
  12. Running 6.3.4. I noticed that when I run top in the CLI, mono is running at 100%. I've narrowed it down to Sonarr, because when I stop Sonarr, mono goes away... or at least stops running at 100%. I checked the logs in Sonarr and it doesn't seem to be doing anything... actually it's pretty idle (no downloads, no scanning) when I check, but no matter what Sonarr is doing, mono runs at 100% from the second I start it up, and keeps running at 100% for days on end. I checked the logs on the docker image itself and it doesn't seem to be doing anything. I've seen reports where the docker keeps trying to auto-update when it's already updated, or keeps trying to start when it's already running, but that doesn't seem to be what's going on with my setup.
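For confirming which container actually owns the CPU, two quick checks; a sketch:

```
# One-shot CPU/memory snapshot per running container -- confirms the
# mono load really belongs to the Sonarr container:
docker stats --no-stream

# Host processes sorted by CPU usage:
ps aux --sort=-%cpu | head -n 10
```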
  13. I got Grafana installed, but I'm having trouble adding a datasource. From what I understand, Graphite is built in, but I'm unsure how to use it: what port to use and all those other settings. I tried to use InfluxDB, but the web GUI doesn't load. I guess they removed it in a certain version, so I'm unsure how to use it.
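For orientation, Graphite has two halves: carbon, which receives metrics on TCP 2003 via a plaintext protocol, and graphite-web, whose HTTP port (often 80 or 8080, depending on the image) is what the Grafana datasource URL should point at. A sketch for pushing a test datapoint, using bash's /dev/tcp to sidestep netcat flag differences:

```
# Send one datapoint (metric value timestamp) to carbon's plaintext
# listener on port 2003, then query test.grafana.ping from Grafana:
echo "test.grafana.ping 1 $(date +%s)" > /dev/tcp/localhost/2003
```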
  14. Success! Repair was complete and my system is back up and running! Thanks a lot!
  15. I'm assuming this is "asking for -L" since I can't mount the disks.
  16. Oh yeah, this one looks scarier than the other disk. Should I run xfs_repair /dev/md1 as-is, or should I add any flags to the command?
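A common sequence here, for the record: dry-run first, repair second, and reach for -L only as a last resort. The array needs to be started in maintenance mode so the filesystem is unmounted:

```
# Read-only check: report problems without writing anything.
xfs_repair -n /dev/md1

# If that looks sane, run the actual repair:
xfs_repair /dev/md1

# Only if xfs_repair refuses because of a dirty log: -L zeroes the log
# and can lose the most recent transactions, so it is the last resort.
xfs_repair -L /dev/md1
```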
  17. I'm guessing disk1 is the problem but please take a look yourself. tower-diagnostics-20170701-1624.zip
  18. I ran a check on a random disk. Unsure if this is the problematic drive or not. What would it say if it had corruption?
  19. Is there a way to run a check on all disks with one command or do I need to run it against each one separately?
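There's no single built-in command for this, but a shell loop covers it; a sketch for a four-disk array (adjust the range to your disk count):

```
# Read-only xfs_repair check of each array disk in turn:
for i in 1 2 3 4; do
  echo "=== /dev/md$i ==="
  xfs_repair -n /dev/md$i
done
```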
  20. I'm unable to mount my 4 disks (3+1). It seems to take a while and then the web GUI stops responding. I can still SSH into it, but some functions (reboot) won't respond either. I've attached my diag zip file, but if you can tell me where in the diag you are looking when you find what's causing the disks to fail to mount, that would be appreciated. I'm all about learning. tower-diagnostics-20170701-1528.zip EDIT: Disk1 had filesystem corruption; I ran a repair on /dev/md1 and now I'm back up and running!
  21. I'd like to use a plugin that monitors disk usage, both I/O and free space... but mainly free space. Munin looks really nice and works well, but it doesn't track used/free space... unless I'm missing something.
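From the CLI, the basics are already there; a sketch (iostat ships with the sysstat package, e.g. via NerdPack):

```
# Used/free space per array disk plus the merged user view:
df -h /mnt/disk* /mnt/user

# Extended per-device I/O stats in 5-second samples:
iostat -dx 5
```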
  22. I had 2 drives + 1 parity drive and was almost full. I added another drive, so now I have 2 drives that are about 95% full and one drive that is 0% full. Is there a way to have Unraid level out the capacity of the drives so not so much data is on the older drives? Is this even recommended?
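If you do rebalance by hand, the usual caution applies: copy disk-to-disk and never mix /mnt/user with /mnt/diskX paths in the same command, since that is a known way to corrupt data. A sketch, where the share and folder names are just examples:

```
# Move one folder from the full disk to the empty one, disk-to-disk:
rsync -avX --remove-source-files /mnt/disk1/Movies/SomeFolder/ /mnt/disk3/Movies/SomeFolder/

# rsync leaves empty directories behind; prune them afterwards:
find /mnt/disk1/Movies/SomeFolder -type d -empty -delete
```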
  23. Thanks a lot for making this! Is there any way to force the stats push to wherever you keep them? I want to contribute! Also, is there a place to see the stats, or at least enough of them to know how long my disk (4TB) will take?
  24. The repair completed and the disk mounted! It's running a parity check now. Looks like I'm making progress, but I'm not going to trust that disk and will replace it as soon as I can. Thanks a lot for the help!