smburns25's Achievements


  1. The plugin seems to be working fine with the exception of these identical drives. If I click Locate, none of these disks shows a blue light, while all of my other disks worked correctly. Any thoughts on why this would be? They all have SMART enabled. Also, if I click the bottom button, the disk directly above it starts to flash, but not that one. Again, thoughts?
  2. Just figured that out. When I clicked on support thread for the plugin it took me here and I just started typin' away......
  3. @olehj I just wanted to take a second to say thank you. I just upgraded to 6.7.2 and updated the plugin and it now works. No more blank screen and I have started locating the drives. Thanks again for all of your help and troubleshooting!
  4. --version returns 7.0. All of the drives have a serial number, and each one states that SMART support is enabled. I appreciate all of your help on this.
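For anyone following along, the per-drive check can be sketched like this (assuming smartmontools is installed; /dev/sdb is just a placeholder for each of your drives):

```shell
# Print smartctl's version, then ask a drive whether SMART is enabled.
# /dev/sdb is a placeholder -- repeat for each drive in the array.
smartctl --version | head -n 1
smartctl -i /dev/sdb | grep -i 'SMART support is'
```

The -i output normally contains two "SMART support is" lines: one saying the device has SMART capability, and one saying whether it is currently Enabled.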
  5. I ran the first command and it returned a 1. I ran the query and it just returned me to the prompt. I am assuming that no NULL statuses coming back is a good thing for the query, but not so good for solving my issue.
  6. My screen matches what I see above from the ls command, but I still get a blank screen.
  7. Thank you! Much appreciated.
  8. I am trying to troubleshoot an issue and need to remove all of my plugins to finish my diagnostics. I have uninstalled Dynamix System Info and Active Streams, but they have left behind two folders: dynamix and dynamix.kvm.manager. No matter what I try, I cannot delete these folders or any of their contents. Does anyone know of a way to COMPLETELY remove these plugins from my system? I am not very good with Linux, so the easier the solution the better. Thanks for your help.
  9. I checked the drives individually using lsscsi, but nothing appears out of sorts. One of the log statements leads me to believe it may be a clash with a Dynamix plugin. I uninstalled all of the plugins, but the Dynamix plugins leave behind folders in config/plugins, and no matter what I do I cannot delete them or any of their contents even though the plugins have been removed. Since the drives and the system appear to be working, I have to believe that something else is interfering, and this is about all I have left. Any ideas how I can kill these folders even with the locks that are on them? Thanks
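In case it helps anyone else chasing the same locked folders, the usual suspects are a process still holding the directory open or, on ext/XFS filesystems, an immutable attribute. A sketch of what to try (the path is hypothetical; adjust it to wherever your plugin config actually lives):

```shell
# See whether any process still has the folder open (hypothetical path):
lsof +D /boot/config/plugins/dynamix
# On ext/XFS filesystems an immutable ('i') attribute can block deletion;
# check for it and clear it (this has no effect on a FAT flash drive):
lsattr -d /boot/config/plugins/dynamix
chattr -R -i /boot/config/plugins/dynamix
# Then force-remove the directory tree:
rm -rf /boot/config/plugins/dynamix
```

If chattr reports the filesystem does not support it, the lock is coming from somewhere else, such as an open file handle shown by lsof.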
  10. I have a mix of drives from 1 TB to 3 TB. Some of the drives are less than a year old while others are several years old. I uninstalled the app, reinstalled it, and rebooted the server just to be sure. Here is the output of the lsscsi command:

root@Tower:~# lsscsi -u -g
[0:0:0:0]   disk  none              /dev/sda  /dev/sg0
[2:0:0:0]   disk  50024e9204a9336a  /dev/sdb  /dev/sg1
[4:0:0:0]   disk  50024e92048ae6a9  /dev/sdc  /dev/sg2
[5:0:0:0]   disk  50014ee2ac28fe18  /dev/sdd  /dev/sg3
[6:0:0:0]   disk  50024e9204a9337c  /dev/sde  /dev/sg4
[9:0:0:0]   disk  50014ee2017e3496  /dev/sdf  /dev/sg5
[11:0:0:0]  disk  50024e9001dd33b0  /dev/sdg  /dev/sg6
[12:0:0:0]  disk  50014ee0abce5c0f  /dev/sdh  /dev/sg7
[13:0:0:0]  disk  5000c5004f5e11ec  /dev/sdi  /dev/sg8
[14:0:0:0]  disk  50024e9001d3fc49  /dev/sdj  /dev/sg9
[15:0:0:0]  disk  5000c5004f6566b8  /dev/sdk  /dev/sg10
[16:0:0:0]  disk  50024e9001d3fc89  /dev/sdl  /dev/sg11
[17:0:0:0]  disk  5000c5004f653a53  /dev/sdm  /dev/sg12
[18:0:0:0]  disk  50024e9001d3fc4d  /dev/sdn  /dev/sg13
[19:0:0:0]  disk  50024e9001dd3393  /dev/sdo  /dev/sg14
[20:0:0:0]  disk  50024e9002c555aa  /dev/sdp  /dev/sg15

I have a cache drive and a parity drive plus 12 array drives. Does this help?
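If it is useful, a listing like the one above can be boiled down to a serial-to-device map, which makes it easy to spot any drive reporting no unit serial (a small awk sketch; it skips rows whose serial column reads "none", such as the flash drive):

```shell
# Map unit serial -> block device from `lsscsi -u -g` output,
# skipping rows whose serial is reported as "none":
lsscsi -u -g | awk '$2 == "disk" && $3 != "none" { print $3, $4 }'
```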
  11. I just installed this plugin, and when I go to Settings and click on Disk Location, the screen just comes up blank. I am running Unraid 6.6.7 and have tried Settings from both Firefox and Chrome. Neither of them will give me anything other than a blank screen. I have also let the browser sit there for several hours, but that did nothing. Also, I installed this from the Apps page and NOT from GitHub.

EDIT: I found the following in the log, but I have no idea what it means or how to correct it:

May 3 10:30:24 Tower nginx: 2019/05/03 10:30:24 [error] 3010#3010: *2807938 upstream timed out (110: Connection timed out) while reading upstream, client:, server: , request: "GET /Settings/disklocation HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "tower", referrer: "http://tower/Settings"

Thoughts?
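For what it's worth, that log line is nginx giving up while php-fpm is still working on the page. One common mitigation for this class of timeout is a longer FastCGI read timeout; a hypothetical nginx fragment (not something from Unraid's shipped config, just an illustration of the directive):

```nginx
# Hypothetical location block; the (110: Connection timed out) error above
# means the upstream (php-fpm) took longer than the read timeout allows.
location /Settings/disklocation {
    fastcgi_read_timeout 300s;
    # ... existing fastcgi_pass / include lines stay as they are
}
```

That only buys the page more time, of course; if the PHP side never finishes, the real fix is whatever is hanging the plugin.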
  12. No, I would not know how to manually edit the .cfg files. I did as you stated above, turned them all off and on again, and invoked the Mover again. The drive space is now gradually increasing on the cache drive, so I am assuming it is now working. It's odd that this happened, as I tend to leave the server alone and only do the software updates. I have no Docker apps and only a very few system utility apps installed. Thank you for your help on this!!!
  13. I just ran Diagnostics from the Tools menu and attached the file here.
  14. My cache drive is showing as 99% full (1 TB drive) with files as old as December 2016. I have manually invoked the Mover, and the log states that the Mover has started and finished, with no errors showing in the log at all. I have also compared several of the files on the cache drive with the files currently in their shares, and the versions appear to be exactly the same. In an effort to resolve this, I upgraded to 6.3.2 and invoked the Mover after doing a reboot, but the drive is still showing as almost full. Any ideas on what I should check or what steps I need to perform to get the Mover working correctly? Thank you

EDIT: All of my shares are set to use the cache drive, and "Use Cache Drive" in "Global Share Settings" is set to "Yes"
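For anyone comparing, this is roughly how the leftover files and their ages can be listed (a sketch assuming Unraid's usual /mnt/cache mount; adjust the path if your pool is mounted elsewhere):

```shell
# Show the 20 oldest files remaining on the cache pool, oldest first.
# /mnt/cache is Unraid's usual cache mount; adjust if yours differs.
find /mnt/cache -type f -printf '%T+ %p\n' 2>/dev/null | sort | head -n 20
```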
  15. Thank you both for the replies. I generally only want to see the individual disks so that I can make a copy of the directory structure in case I lose two disks. That makes it easier to know what is gone and what I have to copy back. The workarounds for this seem a "little" complicated, and I hope a real solution is put in place soon. Thanks again!