Gico

Members
  • Content Count: 185
  • Joined
  • Last visited

Community Reputation: 6 Neutral

About Gico
  • Rank: Advanced Member

Converted
  • Gender: Undisclosed
  • Location: Israel

  1. Well, that didn't work as planned: apparently moving a drive to another slot counts as removing and adding a drive, which isn't allowed. So I had to do a New Config, and a full rebuild of both parity drives is now in progress.
  2. Yes, I want to reorder the disks because it helps me manage them: each group of disks holds a different media type, and sequential disk numbers are easier to remember than random ones.
  3. OK, I will reorder the existing disks. Is the procedure I wrote in the first post of this thread correct?
  4. AFAIK Parity 2 is affected because its parity algorithm includes the disk position. The new data disk would be added as the 9th disk in the array, but it won't be the last one, so some of the other disks would be reassigned to different slots (a sketch of why position matters follows).
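     For context, here is why position matters, assuming Unraid's second parity follows the usual RAID-6-style P/Q scheme (my assumption; the exact implementation isn't spelled out in this thread):

     ```latex
     % P is a plain XOR of all data disks, so ordering does not matter.
     P = D_1 \oplus D_2 \oplus \cdots \oplus D_n
     % Q weights each disk by a power of a generator g of GF(2^8),
     % indexed by the disk's slot, so ordering does matter.
     Q = \bigoplus_{i=1}^{n} g^{\,i-1} \cdot D_i
     ```

     Moving a disk from slot i to slot j changes its weight in Q from g^(i-1) to g^(j-1), so Q must be rebuilt, while P is unaffected by reordering.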
  5. I have an array with 2 parity disks. I would like to add a data disk to the array without losing the protection of the first parity drive, so that I still have redundancy while Parity 2 is rebuilt. Is the following procedure correct?
     1. Pre-clear the new data drive.
     2. Stop the array.
     3. Add the new data drive.
     4. Unassign Parity 2.
     5. Check "Parity is valid".
     6. Start the array and format the new data disk.
     7. Stop the array and reassign Parity 2.
     8. Start the array; this would rebuild Parity 2.
     Would this also check the Parity disk, or should I run another parity check after Parity 2 is built?
  6. Stumbled upon this. That's something to look out for.
  7. September 8th update: "Don't check for docker apps having an update available for them unless running 6.8". So it won't check for docker app updates until 6.8 is installed? I must be reading this wrong, lol.
  8. Congratulations! I dreamed of a NAS server for years, and when I was financially able to build one I jumped on the Unraid wagon about 3 years ago, and I've been happy about it ever since.
  9. My hardware has some fault: every few months there is an event of multiple errors on multiple disks, followed by multiple disks being dropped from the array, and it turns out that the filesystem on these disks (physical + their array copy) is corrupted, leaving my data at the mercy of xfs_repair. I suspect my on-board HBA (the only HBA on my server) has a heat issue, so I try not to overload it for now. I cannot build another backup server at this time, but I have some disks to back up my array data. I am looking for a simple, reliable, automated (scheduled) solution to back up (actually file-sync) my Unraid array to unassigned devices. It was suggested that I create a logical volume from the unassigned devices using Linux LVM, then rsync the array to it. Is this theoretically possible? Any thoughts / suggestions / other solutions before I dive into learning Linux LVM and rsync parameters? Another issue is that rsync might be destructive: if one of the source disks and its array copy have filesystem corruption, the automated rsync would delete that disk's content from the backup logical volume. So maybe run a dry run of rsync and some (automated) examination of the expected results before running the actual sync? (A sketch of this follows below.) This solution is getting more and more complicated.
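     To make the suggested approach concrete, a minimal sketch, assuming two unassigned devices at /dev/sdX and /dev/sdY (placeholder names) and XFS on the backup volume; everything would need adapting to the actual hardware:

     ```bash
     # Pool the unassigned devices into one logical volume (device names are placeholders).
     pvcreate /dev/sdX /dev/sdY
     vgcreate backup_vg /dev/sdX /dev/sdY
     lvcreate -l 100%FREE -n backup_lv backup_vg

     # Format and mount the combined volume.
     mkfs.xfs /dev/backup_vg/backup_lv
     mkdir -p /mnt/backup
     mount /dev/backup_vg/backup_lv /mnt/backup

     # Dry run first: -n lists what would change (including deletions) without touching anything.
     rsync -avn --delete /mnt/user/ /mnt/backup/ > /tmp/rsync-preview.txt

     # After inspecting the preview (e.g. aborting on an unexpectedly large delete list),
     # run the real sync without -n.
     rsync -av --delete /mnt/user/ /mnt/backup/
     ```

     Note that --delete is exactly what makes the sync destructive when a source disk is corrupted, which is why the dry-run inspection step matters here.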
  10. One of the proposed solutions was to change the containers' mappings from "/mnt/user/appdata" to "/mnt/cache/appdata". I'm doing that now; it takes a lot of time. However, I got help, and it turns out that cache_dirs is running "find /mnt/cache/appdata -noleaf", and with a Plex container this takes a very long time. Edit: Excluded appdata from cache_dirs, rebooted, and all seems OK now.
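     For reference, a sketch of that exclusion, assuming the cache_dirs script accepts -e to exclude a directory (worth verifying against the plugin's own help output):

     ```bash
     # Exclude appdata from the cache_dirs scan (the -e flag is my assumption here).
     cache_dirs -e appdata
     ```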
  11. The server has been very slow for the last couple of weeks. I don't feel it in SMB, but very much in GUI actions. Container updates take very long and often don't finish at all. Diagnostics also sometimes doesn't finish (the log never downloads). It's a bit better when the containers are shut down, but even then one container can take half an hour to update, and another doesn't finish at all. I restarted the server this morning and the problem still exists. I downgraded from 6.7.0-RC7 and the problem continues. Attached are diagnostics and a screenshot of CPU usage and the top command. juno-diagnostics-20190426-1601.zip
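     When the GUI download stalls like this, diagnostics can also be collected from the console; a sketch, assuming the stock Unraid 6 diagnostics command:

     ```bash
     # Collect the same diagnostics zip from the console; by default it is
     # written under /boot/logs on the flash drive.
     diagnostics
     ```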
  12. Thanks. Already did it. Everything works great.
  13. When I replaced the cache pool, I shrunk it from 5 small disks to 2 larger ones, but I couldn't change the number of cache pool devices (it was grayed out), so it remained at 5 devices with 3 empty slots. When I stopped the array now, I noticed that again, changed the number of slots to 2, and started the array; Docker started fine and created a docker image, obviously empty of containers. I disabled Docker, copied the previous image back, and started Docker; it came up with the containers and the following message, so I'll recreate the image and re-add the containers. Thanks for the help.
      "Your existing Docker image file needs to be recreated due to an issue from an earlier beta of Unraid 6. Failure to do so may result in your docker image suffering corruption at a later time. Please do this NOW!"
  14. Not sure it's a 6.7.0-RC5-related problem, but I did the cache (pool) replacement routine, and now the docker service won't start, whether I let it create a new docker image or copy the pre-cache-upgrade docker image from a backup.
      ------------------------------
      Docker Service failed to start.
      Warning: stream_socket_client(): unable to connect to unix:///var/run/docker.sock (Connection refused) in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 655
      Couldn't create socket: [111] Connection refused
      Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 827
      Warning: stream_socket_client(): unable to connect to unix:///var/run/docker.sock (Connection refused) in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 655
      Couldn't create socket: [111] Connection refused
      Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 891
      No Docker containers installed
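      A few console checks that can narrow this down; a sketch, assuming Unraid's standard Slackware-style layout (paths worth double-checking):

      ```bash
      # Does the docker daemon's socket exist at all?
      ls -l /var/run/docker.sock

      # Try starting the service by hand to surface the underlying error.
      /etc/rc.d/rc.docker start

      # The daemon log usually says why startup failed (e.g. a bad docker.img).
      tail -n 50 /var/log/docker.log
      ```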
  15. Logical disk9 and physical disk9, each repaired separately, are identical (one way to verify this is sketched below). So I lost 660 MB of data and have 1.6 TB to sort. I consider myself lucky. I will have to compare the current Kodi / Plex DB to a refreshed one after I sort the lost+found. Thanks a lot for the help, johnnie!
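      For anyone repeating this, a recursive compare can confirm that two repaired copies are identical; a sketch, with placeholder mount points:

      ```bash
      # Compare the two repaired trees file-by-file; -r recurses, -q only names differing files.
      # /mnt/disk9 and /mnt/old_disk9 are placeholders for the two copies' mount points.
      diff -rq /mnt/disk9 /mnt/old_disk9
      ```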