Nozlo

Members
  • Posts

    20

  1. Gotcha. Shut down Docker, set the system share to cache-only, run the mover, and in theory the problem is solved? I am assuming Docker will need to be told where the vdisk was moved to? Edit: Moved the system share over to the cache drive and will figure out backup solutions. Looks like there are no more unnecessary reads/writes on the array! Thanks for your help!
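A minimal sketch of that procedure from the console, assuming a stock Unraid install (the share's cache setting itself is changed in the GUI, and the paths may differ on your release):

```shell
# Sketch only -- verify paths against your Unraid version before running.
/etc/rc.d/rc.docker stop         # stop the Docker service so nothing holds the vdisk open

# In the GUI: Shares -> system -> set "Use cache pool" to "Prefer"
# (mover skips shares set to "Only"; switch to "Only" after the move completes).

/usr/local/sbin/mover            # invoke mover manually

# Afterwards, confirm nothing from the share is left on the array disks:
find /mnt/disk*/system -type f 2>/dev/null
```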
  2. Sounds good. Went ahead and deleted that appdata directory on disk2 manually. It looks like the disk continues with constant writes. I have the File Activity plugin installed and enabled, and it shows absolutely nothing. Could the constant writes on disk2, with both parities, just be normal?
  3. That would most definitely help. Attached is the diagnostics zip. To add: I had other appdata directories on different disks, left over from old, unused containers. I manually deleted them, and on those disks the constant writes appear to have stopped. The last disk is disk2, which continues to constantly read/write, with an appdata directory containing a Plex Media Server folder. For some reason, the mover will NOT move this directory off the array, even with Docker disabled. I have also turned the Plex container off, and the reads/writes continue. homeserver-diagnostics-20231213-1620.zip
  4. Hello, I have a dual-parity array with two disks that are constantly writing, and I can't seem to figure out or fix the issue. I have appdata moved onto the cache drive with an appdata backup service running. After running the mover (array --> cache) multiple times, it still doesn't seem to get everything off the array and onto the cache drive. One thing to note: it looks as if the Docker default appdata location is /mnt/user/appdata/. I did try to change this to my cache, but it reset all of my Docker containers and I had to re-map them to the original location.
  5. Yes, confirmed that. I will create a cron task; it will appear in the list, allowing modification and removal, and everything will look great. Then I will restart the container and it will disappear from the list, but it will remain on the "view current crontab" page. Also, is there a way to see the status and completion of current cron job tasks? Update: Through testing, it looks as if the cron will work, although I am unable to modify or delete existing ones.
  6. Using luckyBackup. Cron jobs will not run. Method: create a cron job to run daily, 3 minutes in the future, as a test; the container is then restarted after confirming the cron job has been made. After the restart, the cron job disappears from the list, although when clicking "view current crontab" it still shows. The task does not show that it has run via the GUI. The container is being run as root to access appdata and share files. Appreciate the help!
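One way to check whether the job actually made it into cron inside the container (the container name "luckybackup" is an assumption here; substitute whatever your template calls it):

```shell
# Hypothetical container name -- substitute your own.
docker exec luckybackup crontab -l            # entries cron actually has installed
docker exec luckybackup pgrep -af cron        # is a cron daemon running at all?
docker logs luckybackup 2>&1 | grep -i cron   # any cron-related startup messages
```

If `crontab -l` shows the job but `pgrep` finds no cron daemon, the schedule can never fire regardless of what the GUI displays.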
  7. Trying to figure out if this will work. I have this LSI SAS9201-8i controller mounted in my motherboard, originally to allow for more drives via a mini-SAS 8087 to 4x SATA connector cable. It works flawlessly (the motherboard only has 4 SATA ports, which are all used). I recently came into quite a few SAS drives. That being said, would I be able to use a mini-SAS 8087 to 4x SFF-8482 connector cable (with SATA power) connected to the second port on the LSI controller and have the SAS drives work?
  8. It doesn't actually shut off; it just becomes completely unresponsive. I cannot ping it or access the GUI. I have my parity drive unplugged to see if it still becomes unresponsive with just Unraid running, and sure enough it does. Even at idle (containers off, no VMs running, no parity check) it still manages to become unresponsive, forcing me to hard power off.
  9. Attached is the file: homeserver-diagnostics-20210531-1449.zip
  10. Hello! I recently ran into the issue where I ran out of SATA ports on my Unraid box (specs listed below). To get around this, I bought the controller listed above and installed it into the one PCI-E slot on my motherboard. I then bought a 1x-to-16x PCIe riser to get my GPU into the rig (the computer will not POST without a GPU), and hooked up some extra drives. It ran great for about 8 to 16 hours. Then I was trying to access Plex and found the server was off. Went to check: everything was still spinning and lit up, it's just that the server was completely unreachable. This led to force-shutting down the server. Then a flag was thrown on the parity drive; I tried to re-check the drive after the reboot, and about 14 hours in (70% of the check) it did the exact same thing. Okay, so it's the drive that is the issue. Threw in a new parity drive, went to rebuild, and boom, 80% in, the same exact thing happened. I am now leaving it off until I can troubleshoot this. The drives can become very hot, especially when rebuilding. If any log files are needed, I can boot the server back up and grab what is needed.
  11. Running NC in a container with MariaDB and a reverse proxy (letsencrypt). I store a ton of pictures and videos (around 600G, all on HDDs in unRaid). That being said, my video playback in Nextcloud is basically unusable. When I click on a 4K video, it loads forever, eventually returns a 504 Gateway Timeout error, and basically crashes the container. The server should honestly be able to handle this without an issue. I have tried configuring NGINX with longer timeouts; it continues to throw the errors. I have also tried an NC app called Preview Generator; it hasn't done any good. I haven't seen this issue much while searching around, and what I have found, I have tried, with no positive results.
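For reference, the nginx timeout directives usually involved in 504s on long requests look like this. The values are purely illustrative, and whether they belong in the reverse proxy's site config or in Nextcloud's own nginx config depends on the setup:

```nginx
# Illustrative values only -- tune for your environment.
proxy_connect_timeout  600s;
proxy_send_timeout     600s;
proxy_read_timeout     600s;   # the directive most often behind 504s on slow upstreams
send_timeout           600s;
```

Note that raising timeouts only hides the symptom if the real problem is Nextcloud trying to transcode or fully buffer a 4K file before serving it.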
  12. What I am getting at from your reply is that it is not possible to transfer at that speed with my current setup; I would need to buy 10GbE cards for the computers that I want transferring at that speed. Then why is Task Manager showing I am sending files at 800+ Mbps?
  13. I will include screenshots for a visual example. I am hardwired into my LAN, which is capable of gigabit speeds. That being said, when I transfer a file from my PC to a share, also located on an SSD cache drive, it transfers at 90 to 110 MB/s. I just transferred 110 GB of files over to the server and it took 20 minutes. My PC was sending around 800 Mbps out and the server was receiving about 800 Mbps, so that makes sense. At first I thought that 110 megabytes per second would equal 880 Mbps, but with that calculation it should have transferred all of this data in 3.8 minutes, almost a gig a second. Am I missing something here? The SSD cache drive should be transferring data a LOT faster. Here are the screenshots: Task Manager, Ethernet on main computer - https://gyazo.com/dcc304ead7921b3f7b01dc4a01e90fa1 Task Manager, disk drive on main computer - https://gyazo.com/b01d007965c51ec18472f1d8ea76ef39 Transfer screen - https://gyazo.com/562c492d02d78bb880888f4883386d4a unRAID network interface - https://gyazo.com/5b14100f8600d6a903d494efeec5d7c0 unRAID stats panel - https://gyazo.com/831179ebde5ef69923c1ac7b95d1d30a
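The numbers in the post are actually self-consistent, as a quick back-of-the-envelope check shows (shell integer arithmetic, so results are rounded down):

```shell
# 110 GB moved in 20 minutes -> effective MB/s and Mbit/s.
size_gb=110
minutes=20
mbps=$(( size_gb * 1000 / (minutes * 60) ))   # megabytes per second
mbits=$(( mbps * 8 ))                          # megabits per second
echo "${mbps} MB/s is about ${mbits} Mbit/s"
```

That works out to roughly 91 MB/s, about 730 Mbit/s, which lines up with the ~800 Mbit/s Task Manager reports. Gigabit Ethernet tops out around 110-118 MB/s of payload, so the 1 Gbit link, not the cache SSD, is the bottleneck here.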
  14. I have my Nextcloud running in an Ubuntu VM, which has been working well. Everything else I have (Plex, Radarr, Ubiquiti controller, etc...) is running in containers, which makes setup very easy. That being said, is it worth it for me to switch Nextcloud over to running in a container? I am running into issues where the allocated RAM is too much and containers start to crash. Looking to see what solutions are out there. Also, I am in need of a solid backup solution for the data inside the Nextcloud VM. Currently, I have the Nextcloud app synchronized to my desktop, and I am considering uploading the entirety of the data to Google Drive or something of the sort.
  15. Hello unRAID community. I recently installed Nextcloud through a container and since then have wanted to give the VM world a shot with Ubuntu. I would like to know that in the future, when I run out of storage, I will be able to expand the storage capacity of my Ubuntu VM, which will in turn allow more storage for Nextcloud. Before I go through this process: I have Ubuntu running in a VM with 50G of storage assigned, and I am just trying to expand the storage before I get Nextcloud fully set up. The steps I have completed: 1) Install Ubuntu onto the VM and install the Nextcloud snap. 2) Power down the VM and change the storage capacity from 50G to 70G. 3) Run fdisk - https://gyazo.com/de68fad3441436559bd10cb43e8fda17 When I first installed Nextcloud, I formatted the entire drive (I would not have wanted to do this if I actually had all of my data on Nextcloud; I am currently just trying to safely extend the storage). When I go to format the entire 70G it says I have available, it only ends up as a 50G Linux filesystem. I am not grasping this whole process and am wondering if anyone has gone through a similar issue and fix. GParted needs a GUI, which will not launch on Ubuntu, to extend the partition. If anyone can tell me what I am doing wrong, or point me to a guide, that would be great. Thank you all in advance.
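For what it's worth, the usual command-line route inside a modern Ubuntu guest (no GParted GUI needed) looks roughly like this. The device names are assumptions for a typical virtio disk; check yours with lsblk first:

```shell
# Assumed layout: VM disk is /dev/vda, root filesystem is ext4 on /dev/vda1.
lsblk -f                         # confirm the actual device and partition names first
sudo growpart /dev/vda 1         # grow partition 1 to fill the enlarged disk (cloud-guest-utils package)
sudo resize2fs /dev/vda1         # grow the ext4 filesystem to fill the partition
df -h /                          # confirm the new size is visible
```

This grows the existing partition in place rather than reformatting, so the data on it is preserved (a backup beforehand is still wise).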