bb12489

Everything posted by bb12489

  1. Could someone explain the settings I would have to use for most of the measurements? I know that for some I would have to use non_negative_derivative, but for which ones? Right now I'm trying to set up the CPU usage graph, but I'm not sure if it's actually translating the data correctly.
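For what non_negative_derivative actually computes, here's a minimal Python sketch (the function name and sample data are illustrative, not Telegraf's or InfluxDB's code). Note that Telegraf's cpu input plugin already reports usage_idle/usage_user as percentages, so those can usually be graphed directly with mean(); non_negative_derivative is meant for cumulative counters, such as disk or network byte totals, where you want a per-second rate that doesn't go negative when a counter resets:

```python
def non_negative_derivative(samples):
    """Per-second rate between consecutive (timestamp, value) samples,
    clamping negative rates (e.g. after a counter reset) to zero --
    roughly what InfluxQL's non_negative_derivative(value, 1s) returns."""
    rates = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        rate = (v1 - v0) / (t1 - t0)
        rates.append(max(rate, 0.0))
    return rates

# Cumulative counter sampled every 10 s; the final drop simulates a reset.
samples = [(0, 100.0), (10, 150.0), (20, 210.0), (30, 5.0)]
print(non_negative_derivative(samples))  # [5.0, 6.0, 0.0]
```

The third interval's raw rate is negative (the counter reset), so it's clamped to zero instead of showing up as a huge negative spike on the graph.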
  2. You are awesome. Thank you so much for the workaround! Everything seems to be working great now.
  3. Just an update on things... I nuked InfluxDB and unTelegraf, and the issue still persists. Made sure appdata was deleted for Influx as well. I'm not really sure what is going on here now. I thought it would have been as simple as updating Telegraf. I'd be interested to know if anyone else can replicate this from a clean install of Influx and Telegraf.
  4. I've confirmed that I did pull the latest update. Telegraf says it's version 1.0. Good call on the telegraf database though. I'll drop it and see what it does when it gets recreated.
  5. Thanks a bunch! Unfortunately it doesn't look like this fixed the issue. I'm still getting the "retention policy not found" error in the Telegraf log. Does InfluxDB have to be updated as well? I'm not sure what version the InfluxDB docker is on compared to what is out now. Thanks for looking into this!
  6. Is there any chance we can get unTelegraf updated to the latest version? I'm running into an issue with it outputting metrics to InfluxDB. I found the following post, which pretty much describes the change that was made: https://github.com/influxdata/influxdb/issues/7242
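For context on that issue: InfluxDB 1.0 renamed the default retention policy from "default" to "autogen", so an older Telegraf build whose config pins the old name fails with "retention policy not found". A sketch of the telegraf.conf workaround (the URL and database name here are placeholders for whatever your setup actually uses):

```toml
# telegraf.conf output section (sketch -- adjust url/database for your setup).
# An empty retention_policy tells Telegraf to write to whatever the
# database's default retention policy is, rather than pinning a name.
[[outputs.influxdb]]
  urls = ["http://192.168.1.100:8086"]  # placeholder address
  database = "telegraf"
  retention_policy = ""
```

Alternatively, creating a retention policy named "default" on the database should also quiet the error, but letting the database pick its default is the cleaner fix.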
  7. DNS looks correct. Not sure what is going on. Only happened after the upgrade to beta 23
  8. I'm having this same issue. None of the containers will update, and now I can't access Community Apps either to reinstall some containers.
  9. "I made the md_num_stripes change on mine, but haven't attempted any large file transfers since (and was tired of hard power cycling my server several times a day when doing previous troubleshooting)." "I reported this same problem back in beta 16. I don't think they know about it or can reproduce it; worst is I can't cleanly power down." "What does md_num_stripes do, and what should I set it to, to fix this problem?" I haven't read up on what it does. Maybe someone else here can answer that. I did set the value to 8192 per the recommendation a couple pages back. So far things have been good. I'll report back if it ends up locking everything up again.
  10. "I too have had several issues when transferring files (between drives in the array, between drives and cache, between local USB and drives in the array), with everything locking up, losing the GUI, etc., but I can still SSH, and my Powerdown script also does nothing. Hopefully the LT guys are reading this and notice a pattern of some kind here." I'm transferring over 50GB right now after trying out the workaround of adjusting md_num_stripes. So far I haven't run into an issue. Usually for me it would lock up almost immediately.
  11. I've been having this issue as well. Any time I try and transfer files from a share to a flash drive or another computer, it will lock up Samba, WebGUI, Dockers, pretty much everything. I can SSH as well, but powerdown does nothing. I've also observed this same behavior when Sabnzbd is processing very large files and moving them into the array. Seems it's definitely because of heavy IO load. I'll try the suggested fix of changing the md_num_stripes to 8192 and report back if that works around the issue.
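For anyone else trying the workaround mentioned in the posts above, a rough sketch of the console commands (unRAID-specific; the mdcmd wrapper location and the status output format vary by release, so treat this as an assumption, and note the setting should also be changed under Settings > Disk Settings so it survives a reboot):

```shell
# Raise the md driver's stripe buffer count (takes effect immediately,
# but is lost on reboot unless also set in Settings > Disk Settings):
/root/mdcmd set md_num_stripes 8192

# Check the current value (field name/location varies by unRAID release):
grep -i stripes /proc/mdstat
```

The larger stripe buffer gives the md driver more headroom under heavy IO, which seems to be why it helps with the lockups people are describing here.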
  12. I'm having an issue with SMB shares yet again with beta 21, as I have in the past with 20 and 19. It seems that whenever I transfer large files to or from an SMB share, it will spike the CPU on my unRAID box to a constant 50% and lock up the file transfer. The only way to fix it is a reboot of the server. Has anyone else encountered this?
  13. SMB performance seems to be a lot slower than 6.1.9. I can only get about 30MB/s when transferring MKVs or ISOs to or from the server. Very odd. Has anyone else had this issue?
  14. "Copy bzimage and bzroot from the previous folder on your flash drive to the root of the flash, and delete bzroot-gui from the root of the flash. Ideally I guess you should restore the backup of the syslinux.cfg file, but that's not *strictly* required (you'll just wind up with an extra boot option which won't work). Also, in my testing, because docker in 6.2 updates a whack of stuff in the docker.img file, you'll probably notice that you'll have no docker apps running once you downgrade. Easiest solution there is to delete your docker.img file, then recreate it and re-add your apps." I did try this yesterday, actually. The results were not good. My array came up fine, but docker was screwed up either way. The first thing I did was delete the img before spinning up any dockers. I kept running into errors when I went to start the dockers on a new img.
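The steps quoted above look roughly like this from the console (a sketch, not an official procedure: it assumes the stock "previous" backup folder exists on the flash, and the docker.img path shown is only an example, since yours may live elsewhere):

```shell
# Restore the 6.1.x kernel/initrd from the backup folder on the flash:
cp /boot/previous/bzimage /boot/bzimage
cp /boot/previous/bzroot /boot/bzroot
rm /boot/bzroot-gui

# After rebooting into 6.1.x, delete and recreate the Docker image.
# Example path only -- check Settings > Docker for the actual location:
# rm /mnt/cache/docker.img
```

Recreating docker.img and re-adding the apps is then done from the Docker settings page, though as noted above that didn't go smoothly for me.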
  15. Sorry about the typo. I did mean 6.1.9. Is there a proper way to downgrade from 6.2B19? Also I have checked for a firmware update for the card but there doesn't appear to be one. I did read about the problem with Marvell chips though. If anything I'll just exchange this card and try to find another one that works.
  16. So I tried to do a downgrade using just the bzimage and bzroot files for 6.1.0, but that caused more problems than it solved. Docker would not work at all, and VMs would not function. The array is available, though, so I can access my data. Is there any better way to do a downgrade? I'm also troubleshooting issues with a new StarTech PEXSAT34RH SATA card on beta 19. It seems that my disks are recognized, but after some time they will stop showing SMART stats and temps. When I take the array offline, the drives are no longer seen by unRAID. I'm hoping this is just a problem with the beta and not an overall incompatibility between unRAID and the card.
  17. What's the procedure to downgrade from 6.2 to 6.1?
  18. I just posted in the thread about the WebUI not responding while trying to stop the Sabnzbd docker: https://lime-technology.com/forum/index.php?topic=39510.0. This just started a day after upgrading to the 6.2 beta. I've also been having similar issues with Samba shares: Windows Explorer will hang for quite a long time while trying to access any of my shares. Here is my syslog that I was able to pull through PuTTY. I hope it helps! EDIT: I was also just able to grab my diagnostics zip as well. EDIT2: Come to find out, my issue with Samba shares was related to my Sabnzbd issues. I've left my Sab docker off since my last hard reset, and I've been able to play media files just fine. Explorer no longer hangs. Unfortunately I'm still unable to resolve my issue with Sabnzbd. I'd be interested to know if anyone else who has upgraded to 6.2 has started to have issues with their Sab dockers? syslog-2016-03-17.zip tower-diagnostics-20160317-2351.zip
  19. So my plan is basically this: 1. Set shares back to include all disks. 2. Take a screenshot of the drive assignments anyway (I just feel safer doing this, lol). 3. Click New Config, then assign my drives to the slots, minus the one I'm removing. 4. Shut down, remove the 1TB drive, and install the 3TB drive. 5. Power up and assign the 3TB to a slot. 6. Start the array and have all my data available! Sound about right?
  20. "Do it before or after, same result." Awesome. Thank you so much for the quick replies!
  21. "Without parity you can assign your data disks any way you like; it won't make any difference unless you're including and/or excluding disks for shares, in which case you should assign them as they were, or correct the include/exclude options for the new order." Right now I am excluding disk 1 (the 1TB) from all my shares so that nothing else gets downloaded to it. I will be changing this back to include all disks, with the 3TB taking the place of the 1TB as disk 1. Should I set the shares back to include all before I pull the 1TB and install the 3TB?
  22. "No, you won't lose any data; just make sure no data disk is assigned to the parity slot. The other ones you can assign however you like." Ok. I just received my 3TB Red in the mail, so I will be pulling my 1TB drive from the array and replacing it with the 3TB. When I do a new config, do I have to assign the disks to the exact same slots? I kept reading about this in other threads.
  23. So by doing this I won't lose any data that is already on the drives when I do new config? Will I have to leave the slot empty of the drive I am removing when I do a new config?
  24. So I'm in the process of getting more drives, one of which will eventually be my parity, but for now I have no parity drive installed in my array. I understand this makes my data unprotected from a drive failure at the moment. What I'm trying to do, though, is remove a now-empty drive from my array and replace it with a larger one, while keeping the current array and data intact. Is this possible to do without a parity drive installed? I see that I can't simply un-assign the empty drive from the array, or else it says the configuration is invalid. Any help with this would be appreciated!