Everything posted by jfrancais

  1. Thanks for the suggestions. A couple of nervous days, but after running 3 parity checks, the third came back clean.
  2. Nothing in the log pointing to an issue? Should I bring it back up, run another correcting parity check, and see if that fixes it?
  3. Running a memory test now. I also remembered that when the system came back up, I got an error that the BIOS was reset to defaults and needed the date. I replaced the BIOS battery and reset to defaults. A BIOS setting could be playing havoc as well.
  4. Partly corrected. I ran another check and it is finding more parity errors. Not good.
  5. Had a power outage last night, and when the server came back up there were some issues. First, the USB flash drive was completely corrupt. I pulled another into rotation and rebuilt from my USB backup. It booted up and a parity check ran. When it completed I got: Last check completed on Mon 06 Jul 2020 06:30:00 PM CST (today), finding 1645 errors. That has me really worried. I have attached my diagnostics. Any thoughts as to what might be the cause? gobo-diagnostics-20200706-2011.zip
  6. I'm sure this has been answered before, but for the life of me I can't figure it out. Here's my PHP info from the admin screen: Version: 7.3.17, Memory Limit: 2 GB, Max Execution Time: 3600, Upload max size: 3 GB. I was able to set the memory limit and upload max size without issue, but I can't find a way to change the max execution time. I need it bumped up, but I haven't found any configuration file setting that makes it stick. Can anyone assist? I'm running into issues because of it.
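For what it's worth, `max_execution_time` normally comes from php.ini or a per-directory override file the container loads last; exactly where that file lives depends on the image. A sketch of such an override, assuming a linuxserver.io-style layout where `/config/php/php-local.ini` is parsed after the main php.ini (the path is an assumption — check your container's docs):

```ini
; Sketch of a php.ini override, assuming the container loads
; /config/php/php-local.ini (path is an assumption -- adjust for your image).
; Values match the admin screen in the post, with the timeout raised:
memory_limit = 2G
upload_max_filesize = 3G
post_max_size = 3G
; raise the per-request script timeout from 3600 s (1 h) to 7200 s (2 h)
max_execution_time = 7200
```

Restart the container afterwards so PHP-FPM rereads the file; the phpinfo() page lists the "Loaded Configuration File" and any "Additional .ini files parsed", which tells you whether your override is actually being picked up.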
  7. Those were two empty folders. When I changed the shares to no longer be on the cache drive some time ago, I moved the contents off to the array but forgot the root share folders (which were empty). All the content inside had already been moved to the array. Those empty folders have now been removed from the cache.
  8. I had two empty folders on the cache that shouldn't have been there based on the cache=no shares. There was no content in them, and they have been removed now. My cache disk is fairly full because I have 3 VMs with relatively large disks running on cache. That plus the Docker sizes consumes most of the space.
  9. I did a bit of cleanup of some containers that were kicking around. This is what remains:

     Name                            Container  Writable  Log
     ---------------------------------------------------------
     binhex-minecraftbedrockserver   3.46 GB    1.43 GB   4.66 kB
     binhex-minecraftbedrockserver2  3.46 GB    1.43 GB   4.44 kB
     binhex-krusader                 2.47 GB    16.4 MB   16.7 kB
     freepbx                         2 GB       0 B       22.8 MB
     binhex-nzbhydra2                1.11 GB    74.9 MB   71.9 kB
     plex                            724 MB     302 MB    6.67 kB
     sonarr                          622 MB     21.1 MB   24.2 kB
     ombi                            606 MB     230 MB    11.4 kB
     radarr                          574 MB     22.4 MB   16.1 kB
     NginxProxyManager               529 MB     193 kB    60.1 kB
     HandBrake                       504 MB     85.3 kB   19.4 kB
     nextcloud                       354 MB     40.2 kB   4.89 kB
     Nextcloud-DB                    351 MB     344 kB    4.43 kB
     NginxProxyManager-DB            351 MB     344 kB    4.43 kB
     sabnzbd                         260 MB     303 kB    9.55 kB
     headphones                      232 MB     18.2 MB   9.18 kB
     steamwise                       204 MB     0 B       7.48 kB
     steamwise2                      204 MB     0 B       6.43 kB
     m3u8                            167 MB     39.6 kB   7.41 kB
     transmission                    78.1 MB    9.58 kB   6.92 kB
  10. I'm wondering if I have a bad SSD in my cache pool. Any recommendations on how to verify that? I deleted the docker.img and am redownloading apps, and that process seems to be going quite slowly.
  11. Can anyone assist? On my dashboard tab I noticed that under usable size, memory was at 100%. I turned off Docker and it went down to 1%. Now when I turn Docker back on I get "Docker Service failed to start." I rebooted the server and the problem remains, and I can't access any of my containers. gobo-diagnostics-20200430-1531.zip
  12. 1. Wired isn't an option; these are iPad/iOS devices. I don't believe it to be a wireless issue: I can have two devices on the same wifi in the same room as the access point, and the server appears on one and not on the other. Typically the first person to open the app will see the server and the second person will not. 2. Nothing should be in the way here either. I don't run ad blockers or Pi-hole, and there are no VLANs. The only tricky thing is the networking on Unraid itself; I tried custom networking with a static IP as well as bridge networking, and the same thing occurs. 3. Possible, but I'm unsure of the best place to look for that, which is why I posted here. 4. Same as above.
  13. I've been working on setting up a server using this container for my family of 3 to play on. Internal network only. I have the initial configuration set up allowing 10 users, with my whitelist and permissions in place. The problem I'm seeing is that the server seems to be visible only intermittently, and usually by only one person. I've been able to connect and play, but other users on the LAN mostly can't, and sometimes they can connect but I can't. It doesn't show up in the server list in the app (all iOS devices), and manually entering the server details doesn't connect either. I'm having no issues with other containers (I use plenty), so I'm unsure where to go from here. Anyone else experience this? Thoughts?
  14. I still need to develop the image, which currently involves the Pi. Once it's set up, I pull the image and deploy it to other Pis. I'd like to do all that configuration on the PC and deploy from there.
  15. For me specifically, I'm looking to do central Raspberry Pi management, development, and testing before I send my images off to SD cards.
  16. Would love to spin up virtual Raspberry Pis for development. It should be relatively straightforward to add, with QEMU already having ARM support available. Would love to be able to use the web GUI to create virtual Pis that mount and boot from standard Pi image files.
  17. Single parity drive + 5 array drives and dual cache drives. Nothing out of the ordinary. I don't have a high volume of reads/writes happening on the array. Most of the time the drives are spun down, with the exception of the Time Machine share, which is currently always spun up. I don't think you are correct on the write/read thing: disks not in use are spun down. If a write to the array caused a read from the other drives, then all disks would spin up during writes. Unless the GUI is incorrect, that is not the case.
  18. How long did your initial backup take? I'm over a week in and it still isn't close to complete. Gigabit network; the Time Machine share is one disk, no cache.
  19. Has anyone got Time Machine working consistently with large backups? I'm trying to get our Mac (3 TB) backing up to Unraid over SMB. It has been running for days, is still under 100 GB backed up, and is still in progress. I keep seeing references to it being slow, but it can't be that slow, can it? (Running the newest Unraid 6.8; tried both SMB and AFP, same issues.)
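One tuning avenue worth knowing about here: Samba's vfs_fruit module implements the macOS-specific SMB extensions, and storing Apple metadata in xattrs instead of thousands of tiny AppleDouble files can make a large first backup noticeably less painful. A sketch of what such a share stanza could look like in Unraid's Samba extra configuration box (Settings → SMB) — this assumes Samba 4.8 or later and uses the share name from the post; newer Unraid releases can set most of this via the "Enhanced macOS interoperability" and Time Machine options in the GUI instead:

```ini
# Sketch for Unraid's "Samba extra configuration" box; assumes Samba >= 4.8.
# Share name matches the TimeMachine share mentioned in the post.
[TimeMachine]
   # vfs_fruit is Samba's macOS-compatibility module; catia and
   # streams_xattr are the companions it is documented to work with
   vfs objects = catia fruit streams_xattr
   # keep AppleDouble metadata in extended attributes rather than ._ files
   fruit:metadata = stream
   # advertise the share as a Time Machine destination to macOS
   fruit:time machine = yes
```

After changing SMB extras, the array (or at least the SMB service) has to be restarted for Samba to reload its configuration.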
  20. OK, when I switched the container to bridge networking, everything ran fine. When it was set to br1 with an IP, the container ran fine at first, but not long after I could no longer communicate with it. I have two NICs in place (br0 and br1) that I set up to get around the macvlan communication restriction. It had been working fine for quite some time, and other containers set up this way have no communication issues. gobo-diagnostics-20190911-1808.zip
  21. OK, that gets rid of the messages. It was open during my troubleshooting, so that makes sense. I have switched the container to bridge networking and will run it that way for a while to see if it fixes the issue. It had been running as custom on br1.
  22. Recently I've started to have issues with the sabnzbd container not working. Shortly after startup, I was no longer able to communicate with it. If I shelled into the container, everything looked normal, and I could communicate from the shell with the entire network as expected. In my logs I was seeing this repeating every 10 seconds: Sep 10 08:39:58 Gobo nginx: 2019/09/10 08:39:58 [crit] 7367#7367: *5199919 connect() to unix:/var/tmp/sabnzbd.sock failed (2: No such file or directory) while connecting to upstream, client:, server: , request: "GET /dockerterminal/sabnzbd/ws HTTP/1.1", upstream: "http://unix:/var/tmp/sabnzbd.sock:/ws", host: "REPLACED" This occurs even when the container is stopped. I tried installing a different version of the container and had the exact same issue, so I don't think it is related to the container itself. Can anyone assist?
  23. Running 6.7.2, so I guess that is the issue. Is a fix actively being worked on? Copying to the cache drive is a non-starter for me, as I'm moving amounts larger than the cache. Would this problem affect drives not in the array? I could temporarily add an external drive to copy on and off, but that makes me a bit nervous, as the data wouldn't be protected in flight.
  24. I'm trying to move some files from /mnt/disk3 to /mnt/disk5, and I'm finding things painfully slow; it is also affecting other things running on the server quite badly. I first tried with unBALANCE, and the share I gathered averaged 8 MB/s transfer speed in the report. Doing the same thing via shell with the mv command gives the same slow experience, so it isn't the plugin. It seems to burst with speed for a bit and then hang for a while. Is this normal behavior? Any recommendations to speed it up? Both drives are GPT: 4K-aligned, XFS formatted. Drives are all 5400 rpm and up; no archive drives in play. Single parity drive. No errors or warnings in syslog. The system is 2x Intel Xeon X5650 @ 2.67 GHz with 48 GB ECC RAM. gobo-diagnostics-20190807-1401.zip