jargo

Members · Posts: 45

Everything posted by jargo

  1. My custom network type is already ipvlan, so that is unlikely to be the solution
  2. Experiencing the same. Cache pool is formatted as ZFS
  3. This worked for me, thank you very much
  4. docker exec Nextcloud /var/www/html/occ preview:generate-all -vvv

     OCI runtime exec failed: exec failed: unable to start container process: exec: "/var/www/html/occ": permission denied: unknown

     How do I fix this permissions issue? The 'occ' file is owned by 'nobody'.
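Since occ is a PHP script, the "permission denied" above usually just means the file's execute bit is missing; invoking it through the interpreter (e.g. docker exec Nextcloud php /var/www/html/occ ...) sidesteps that, or chmod +x restores direct execution. A minimal local sketch of the same principle, using a hypothetical stand-in file rather than the real container:

```shell
# Create a stand-in script without the execute bit (like an 'occ' that lost +x):
printf '#!/bin/sh\necho "occ ran"\n' > /tmp/occ-demo
chmod 644 /tmp/occ-demo

# Direct execution fails, mirroring the 'permission denied' error:
/tmp/occ-demo 2>/dev/null || echo "exec denied"

# Invoking through the interpreter bypasses the execute bit entirely:
sh /tmp/occ-demo

# Or restore the bit and run directly:
chmod +x /tmp/occ-demo
/tmp/occ-demo
```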
  5. How do I stop a copy operation? I started a several-terabyte copy in Midnight Commander from my array to an unassigned device, but it is painfully slow transferring to an NTFS disk. Midnight Commander no longer shows the operation in its interface. I tried unmounting the disk, but that did not work, and stopping all UD services did not help either.
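Even after mc loses track of a transfer, the copy usually survives as its own process that can be found from a console (e.g. pgrep -af mc, or watch for the disk I/O in htop) and then killed by PID. A sketch of the kill step, using a hypothetical long-running process as a stand-in for the runaway copy:

```shell
# Stand-in for the runaway copy (in reality: pgrep -af mc  to find the PID):
sleep 300 &
pid=$!

# Send the default TERM signal (escalate to kill -9 only if it is ignored):
kill "$pid"
wait "$pid" 2>/dev/null

# Confirm the process is gone:
kill -0 "$pid" 2>/dev/null || echo "copy process stopped"
```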
  6. You need to map a host port of your choosing to the container's port 80. You can do that in the "add path, port, or variable" section of the Docker configuration.
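On the command line, the same mapping is a single -p flag; the Unraid template's Port entry fills in the equivalent Host Port / Container Port pair. A hedged sketch (the image name and host port 8080 are placeholders, not from the original post):

```shell
# Map host port 8080 to container port 80 — the docker run equivalent of
# adding a Port entry in the Unraid template (image name is illustrative):
docker run -d --name=mycontainer -p 8080:80 someimage:latest
```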
  7. I can't get directories to load if they contain a large number of files (~4000). A Google search suggested that adding the flag "--disable-type-detection-by-header" may solve my problem, but I do not know how to apply it to this Docker container. Adding it as-is to the 'Extra Parameters' field did not work for me. Could anyone help me with the syntax, or where this should go? Thank you
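One likely reason "Extra Parameters" did not work: in an Unraid template that field holds options for docker run itself, whereas an application flag like this one has to land after the image name, where it is passed to the container's entrypoint ("Post Arguments" in Advanced View). A hedged docker run sketch of that distinction, with an illustrative image name:

```shell
# Flags before the image name configure docker; anything after the image
# name is handed to the application inside the container:
docker run -d --name=mybrowser -p 8080:80 someimage:latest \
  --disable-type-detection-by-header
```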
  8. I've noticed that in all of my files the field 'albumtype' is split into individual entries for each letter: albumtype: a, albumtype: l, et cetera, creating many entries for something like 'album; compilation'.

     A Google search yielded this: https://github.com/beetbox/beets/pull/4582#issuecomment-1445023493

     After running 'beet write' on an album, the files still show the same issue, even though beets said it was writing the corrected tags (I tested with --pretend first). My database appears to be good; it is the files themselves that carry the bizarre tagging. A pretend 'beet update' shows that it would write the bogus tags back into my database.

     Perhaps I don't understand the directions in the fix linked above, or is the issue that this Docker container does not include the amended code? Thanks for any help
  9. Ok. And I can build both at the same time? Remove the one disk, add two new larger drives as dual parity, and then, as long as I do no writes during the process, the old parity remains correct? Forgive me as I resubmit an earlier question: for copy and move operations, is there a way to run multiple operations simultaneously? Disk 1 to disk 2 at the same time as disk 3 to disk 4, etc. Thank you for all of the help.
  10. Wouldn't rebuilding parity that many times place unnecessary stress on the drives? I should also note I am replacing the disks with fewer drives than are presently installed, going from 18 data disks to 12. So I would have to perform copy or move operations anyway, and then shrink the array by zeroing the drives to be removed?
  11. I am upgrading all of the drives in my server to larger-capacity disks, and am planning to use Unassigned Devices to speed up this process, but I would like to confirm with someone more sophisticated that my plan will work and not be a complete waste of time.

      I have 19 disks in the array and 5 open slots. I connect 5 new disks to Unassigned Devices, preclear them (to test them out), then copy (not move) the data from my array to the unassigned devices. I then remove the 5 unassigned disks and connect 5 more, etc. At the end, I remove my old array, connect all of the new disks I populated with data, and create a new config, letting it build parity. This way I never lose parity on the old array, and if something happens during the parity build for the new config I can just put the old config back in and say parity is correct. Is what I just laid out accurate?

      Second question: is there a way to perform multiple copy operations simultaneously? Meaning disk 1 copies to disk 2 while disk 3 copies to disk 4, etc.? I don't think I can do this in Midnight Commander, which is what I normally use. Thank you!
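Running several disk-to-disk copies at once is straightforward from a console: start each one as a background job and wait for all of them. A minimal sketch using temporary directories as stand-ins for disks; on a real array you would use something like rsync -a /mnt/disk1/ /mnt/disks/new1/ per disk pair (paths illustrative), ideally each inside its own screen or tmux session so it survives a dropped connection:

```shell
# Stand-in source and destination "disks":
mkdir -p /tmp/src1 /tmp/dst1 /tmp/src2 /tmp/dst2
echo one > /tmp/src1/a.txt
echo two > /tmp/src2/b.txt

# Launch both copies as independent background jobs:
cp -a /tmp/src1/. /tmp/dst1/ &
cp -a /tmp/src2/. /tmp/dst2/ &

wait   # returns once both background copies have finished
echo "both copies complete"
```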
  12. I went to reboot after downloading 6.5, and then the GUI disappeared and I could not connect. I connected a monitor and saw a kernel panic. Through a search here I did some troubleshooting, including checking on Windows whether the flash drive was ok. There was a problem with it and it needed to be repaired. I can see the files on Windows, and while I am not perfectly familiar with what should be there, it does look alright. Booting from the repaired flash drive has failed, including in safe mode. Through the CA plugin I have a backup of the flash drive on my array. I removed that disk and have attempted to access it on my PC, but can't seem to get Windows to recognize it. Should I be able to access whatever file system it is on Windows? Is there a different way to access the backup? If I can't access it, how should I proceed? Thank you for your help.
  14. I have been having problems with timeouts, and now I cannot even retrieve the torrent list. I get either:

      Bad response from server: (500 [error,list])

      or:

      No connection to rTorrent. Check if it is really running. Check $scgi_port and $scgi_host settings in config.php and scgi_port in rTorrent configuration file.

      I am assuming this has to do with having many loaded torrents. Is there anything I can change in the settings to alleviate the problem? Thanks.
  15. Ok thank you, I tried that and was not having success, but while doing that I figured out my port was closed, and opening / mapping it appears to have solved all my issues. I take it I was not supposed to use my old port mapping configuration that is saved as a docker template? Should I start over without the template or is it fine now that I opened the port? Thanks
  16. Ok thanks. I am trying to put my old session folder in, and it appears to recognize all of the torrents in the webui, but none are connecting or show up to be sorted in the "State" portion, though I can sort them by tracker. Here is an error it is reporting:

      [16.08.2016 08:07:08] JS error: [http://192.168.1.50:81/js/webui.js : 1847] TypeError: s is undefined
  17. I just updated the container and now am getting this 502 Bad Gateway error:

      [16.08.2016 06:05:36] Bad response from server: (502 [error,getplugins]) Bad Gateway
      [16.08.2016 06:05:36] Bad response from server: (502 [error,getuisettings]) 502 Bad Gateway (nginx/1.10.1)

      Have tried restarting a couple times, no change
  18. I tried to set the GUI to update more frequently than every 3 seconds, but the setting has not changed. When I restarted the docker the setting still shows the new value but the GUI still updates every 3 seconds. Do I have to change the setting in a config file or something? Thanks.
  19. My array became read-only and I rebooted. Now my cache drive is unmountable, though I was reading files from it prior to the reboot. Ideally I don't want to reformat it, so what else can I try to fix the issue? Thanks.
  20. I used Midnight Commander. Could it also be that my previous SSD was not counting "free" space in the docker image and VMs, even if it was allocated? Perhaps that is the size difference I am seeing. I just know that when I check the properties of a folder with many small files, it shows a "size on disk" significantly larger than the sum of the files, so I figured transferring between the filesystems may have changed the space allocated to them.
  21. I replaced my cache drive (btrfs) by writing its contents to the array (an xfs drive) and then back to the new SSD (btrfs). Now the files take up significantly more space. I am pretty sure it is all of the small files, like the Plex appdata, that are now consuming more space. Is there any way to remedy this? It is a difference of about 40 GB, so it is quite significant.
  22. I'd like to set things up so that all in-progress downloads are done on the cache drive, while the mass of completed downloads seed from a data drive.

      1) If I set it up to move completed downloads directly to an entirely separate drive (not using cache), will the moving process complete even while Deluge is attempting to seed?
      2) Would it be better to move completed downloads to a share that utilizes the cache drive, so that the mover eventually takes care of them once the torrent is no longer seeding?
      3) In setting up something like this, what is the risk that Deluge will not see my files and begin downloading them all over again?

      Anyway, if anyone has suggestions for how they are running a similar setup, I would like to hear about them.
  23. It's a nice case. One thing to note: the Seagate 8 TB drives will not fit in the drive cages because they lack center mounting holes.