MediaMaan

Members
  • Posts: 25
  1. I relocated the flash drive to one of the front ports on the tower. It has been running fine this weekend in that configuration. I will keep monitoring, but the device was previously located next to a 50W RF dummy load, and I am wondering if the close proximity to RF energy is what caused the issues. The flash drive itself seems fine...
  2. I'm guessing you're correct about the flash drive. In the top right corner I see "Unraid OS no flash" (in red). Will have to look up how to repair / replace the drive. Have performed a shutdown for now.
  3. Hi Everyone. I'm still getting used to the unRaid system, and it had been running flawlessly for me until recently. However, I keep going to access my network shares, only to find the NAS unresponsive. When I log in to the Dashboard, the page doesn't render correctly. Scrolling down reveals the following messages:
     Warning: parse_ini_file(/boot/config/docker.cfg): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 50
     Warning: array_replace_recursive(): Expected parameter 2 to be an array, bool given in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 50
     Warning: parse_ini_file(/boot/config/domain.cfg): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/dynamix.vm.manager/include/libvirt_helpers.php on line 478
     So it looks like something in Docker is breaking down. The bottom of the browser shows "Array Started" and "Starting Services" keeps flashing. The shares are not accessible in this state. It seems to be occurring every few days - it has only been 5 days since the last reboot to solve the problem. Each reboot also forces a parity check (which always comes back clear, but leads to unnecessary drive access).
     Right now I am unable to Stop the array. The unRaid logo just keeps waving on the screen and the message "Array Stopping•Retry unmounting disk share(s)..." appears at the bottom of the browser.
     I'm not aware of any power failures, and the server is hooked up to a UPS with the battery showing 100% anyway. Would love to know what caused this issue, and how I can prevent it from happening again. Anyone got some tips?
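The parse_ini_file warnings in the post above point at config files on the flash device (/boot) being unreadable, which usually indicates the USB flash dropped offline rather than Docker itself breaking. As a minimal sketch (not official Unraid tooling), a check along these lines can confirm whether those flash-backed files are readable; the two file names come straight from the warning messages, and the root path is parameterised so the function can be exercised anywhere:

```shell
# Sketch: verify the flash-backed config files named in the GUI warnings
# are present and readable. Pass "/boot" as the root on a live Unraid box;
# the root is a parameter only so the check is portable/testable.
check_flash_cfgs() {
  root="$1"
  missing=0
  # These two paths are the ones the warnings above complain about.
  for f in config/docker.cfg config/domain.cfg; do
    if [ ! -r "$root/$f" ]; then
      echo "missing: $f"
      missing=1
    fi
  done
  return $missing
}
```

On a live server this would be called as `check_flash_cfgs /boot`; any `missing:` line (or a non-zero exit status) suggests the flash mount has gone away and the drive or port needs attention.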
  4. Has no one tried this, or have you gone with an offboard solution like pfSense?
  5. I'm using Binhex DelugeVPN, which is working just fine for Deluge. However, I'd like to use the VPN for the whole network, not just for Dockers. So, when my phone / laptop / streaming stick / desktop PC etc. try to connect to the web (not just for browsing), can they be made to route through the VPN? I figured it would be much quicker to have unRaid handle the encryption on a 1-gig connection than my DD-WRT router! I know I can activate VPN clients on each device, but I'm looking for automatic network-wide routing via the VPN. If the Privoxy part of DelugeVPN would not be good for this, is there a Docker that would? I would also like to add a VPN for incoming connections too (so they appear as if they're on the local network). Is it possible to do that as well? Or am I going to be better off finding some additional hardware and firing up something like pfSense on it? Your comments are appreciated!
  6. Hello Rali72. Unfortunately I have not managed to figure it out yet. I am noticing the problem more on my Linux Mint computers. Right now my Windows 10 PC can access the share list at the unRAID IP address. The Linux Mint ones can read the list after a reboot at smb://10.x.x.x, but appear unable to show it on subsequent visits via SMB. The share list remains empty. Typing a share name in directly (smb://10.x.x.x/share) continues to work. At this stage I'm unsure if it is a Mint issue or an unRAID issue.
  7. Running 6.9.2. Usually when pointing a file browser at smb://10.0.10.x, all my shares are listed and I can click a share and view its files. Presently, when pointing a file browser at the IP of unRaid, I'm getting 0 items. It doesn't time out or give an error; it just shows nothing. This is happening on Linux & Windows computers.
     However, all machines (Windows, Linux & Android) are able to access the individual shares, e.g. smb://10.0.10.x/share works fine. I can read, write, and access files as normal. So where has the share list gone?
     The only config change I am aware of making has been to the SMB -> WSD options. I was trying to deal with the 100% CPU issue. I initially added the -i br0 argument, but still had issues. Everything worked as normal for a while. Then CPU usage spiked again, so I disabled WSD. NetBIOS is also turned off. I tried resetting these, restarting Samba and bringing the array back online, but still had the same issue. So I put it back to SMB enabled with everything else off.
     To stop the 100% CPU from WSD, I would prefer to keep it disabled. But I also want to be able to view a list of shares on the server! What is my next step? Thanks, MediaMaan
  8. Thanks for the reminder of where to find the folder icon! I double-checked, and everything is still on the cache drive. So the files are downloaded to one share on the cache, but when they are moved by Deluge to another share on the cache, it is taking time (multiple minutes for a 6GB file). Now that files being written to the array are ruled out (unless Deluge is moving cache -> array -> cache for some reason), what else could be causing the slowdown?
  9. There is only 1 SSD cache in the system, so definitely the same SSD. The /incomplete share is cache only (the SSD drive is the cache drive of course). The /download share is not cache only, but is set to use cache for new files. Could this be the problem? I guess I can try and verify by locating a file that gets moved and seeing exactly where it ends up. I think I had seen a way to check if files were on cache or in the array...
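One hedged way to verify where a file physically ends up, assuming the standard Unraid mount layout (where /mnt/user is the merged view of a share, while /mnt/cache and /mnt/disk1..N expose the individual backing devices), is to check the per-device paths directly. The share and file names used here are hypothetical, and the root is parameterised purely for illustration:

```shell
# Sketch: report which backing device(s) under $root hold a given file.
# On a live Unraid system the root would be /mnt, so the paths checked
# become /mnt/cache/<rel> and /mnt/diskN/<rel>.
find_file_location() {
  root="$1"   # e.g. /mnt on a real server
  rel="$2"    # path relative to the share root, e.g. download/movie.mkv (hypothetical)
  found=""
  [ -e "$root/cache/$rel" ] && found="cache"
  for d in "$root"/disk*; do
    [ -e "$d/$rel" ] && found="${found:+$found }${d##*/}"
  done
  echo "${found:-not found}"
}
```

Called as `find_file_location /mnt download/somefile` on a live box, the expected answer is `cache` if Deluge's move really did stay on the SSD, and a `diskN` name if the file was pushed out to the array.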
  10. Evening everyone. I created a cache-only folder called /incomplete. This is for files that are downloading. Once they are completed, DelugeVPN (Binhex version) copies them to /Downloads/complete/. The /Downloads share has cache enabled. When Deluge initiates a move, the CPU jumps up to at least 45% usage, often spiking higher, and the move operation takes a long time. I would have thought copying from one SSD share to another would have been virtually instantaneous and required little CPU usage. When I had a /Downloads/incomplete folder (on the same share), I didn't have the issue. I wanted to keep the incomplete files on a different share so they didn't get written to the array under any circumstance - no point in spinning up the array for files still being worked on, even if they are stalled for a while. Anyone know why moving the files across shares on an SSD would take so long? Is there anything I can do, other than going back to using 1 share for both the /incomplete and /complete folders?
  11. I have a share with 984 files, totalling 2.3TB of data. It should be noted that the files are spread across 2 drives in my array. It takes computers / media players a long time to simply retrieve a file list from this folder. Sometimes they even time out. And this is with the disks spun up! Is the share simply too large? Is there a number of files / total size it is better to stay under? I also sometimes notice when viewing a share that a file list appears, then it grabs the file list from another disk and the list changes within a few seconds. Is this also normal behaviour? System is an i3 2nd Gen with 6GB RAM, 3 x 10TB disks.
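Since the share described above spans two array disks, one quick sketch (assuming the standard Unraid layout where /mnt/disk1..N expose the individual array disks; the share name is hypothetical) is to count how many of the files live on each disk, which shows how much of the listing work each spindle carries:

```shell
# Sketch: print "<disk> <file count>" for each array disk backing a share.
# Parameterised root for illustration; on a live Unraid system call it
# as: count_share_files /mnt myshare
count_share_files() {
  root="$1"; share="$2"
  for d in "$root"/disk*; do
    [ -d "$d/$share" ] || continue
    # tr strips any padding some wc implementations add
    printf '%s %s\n' "${d##*/}" "$(find "$d/$share" -type f | wc -l | tr -d ' ')"
  done
}
```

If one disk holds nearly all the files, listing latency is dominated by that disk; a heavily lopsided split can also explain the list "changing" as the second disk's contribution arrives late.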
  12. Thanks itimpi, I had no idea what that setting was for. 300 seconds seems perfectly acceptable to me, so I have now modified it 🙂
  13. Hi Folks. I haven't been able to find an answer to this question. I am trying to work out how frequently the drive temperatures are updated on the Dashboard & Main webgui pages. Anyone know? Once every 10 minutes? Or is it a more-or-less live reading from the disks? I am aware they don't show temps when spun down.
     The reason I'm asking is that I installed a shucked 10TB Western Digital air-cooled drive (WDC_WD101EMAZ) that has always run hotter than the heliums in the system. It was typically sitting at 37C, but hit 60C during a parity check. This also bumped the other drive temps into the 40s during parity (they typically run 32-34C during normal use).
     I've rearranged the drives now so the air-cooled 'hotter' drive is on top, and have added an extra fan blowing across it. I have been running a stress test for the last 2 hours, pulling 5 streams from both my data disks (parity is spun down). Now the HGST helium drive is happily sitting at 32C (more or less the same as before), and the 'hot' WD's highest temp since adding the fan has been 29C. It has just settled at 28C.
     I'm just interested to know how often the temp stats update in the GUI, and if there is a way to force a live reading (if that is necessary). Cheers!
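For a live reading outside the webgui, `smartctl -A /dev/sdX` (from smartmontools, which Unraid ships) queries the drive's SMART attributes directly; note that doing so may wake, or keep awake, a spun-down disk. As a small hedged sketch, the current temperature can be pulled out of that output, assuming the common smartmontools attribute-table layout where the raw value is the 10th column (layouts can vary per drive model):

```shell
# Sketch: extract the Temperature_Celsius raw value from `smartctl -A`
# output supplied on stdin, e.g.: smartctl -A /dev/sdb | parse_temp
# Assumes the usual attribute table where the raw value is column 10.
parse_temp() {
  awk '/Temperature_Celsius/ {print $10; exit}'
}
```

Run against a live drive this would print the current temperature in degrees C (and nothing at all if the drive doesn't report that attribute).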
  14. Thanks for the confirmation. Picked up a new 10TB disk today to replace it with. Just running a sector scan on it now (will take a good 17 hours or so). Hopefully will get the system fully functional again in next few days 🙂
  15. Ok, gave this a try. Disk 2 is definitely showing some serious slowdowns. The slowest speeds are at the beginning of the drive: it starts at 29MB/s and drops to 7.46MB/s at 400GB, then ramps up to 150MB/s at 800GB, and declines to 71.6MB/s at the end of the drive with a few minor up & down bumps along the way. In comparison, the 2 x 10TB drives start at 250MB/s and slowly decline to 115 & 122MB/s respectively by their end. I assume the major slowdown at the start of the drive is cause for serious concern and warrants replacing the disk? If that is the case, do I remove it from the array, then use Unassigned Devices to relocate the data onto my main 10TB drive?