Posts posted by jargo

  1. docker exec Nextcloud /var/www/html/occ preview:generate-all -vvv


    OCI runtime exec failed: exec failed: unable to start container process: exec: "/var/www/html/occ": permission denied: unknown

     

    How do I fix this permissions issue? Ownership of the 'occ' file is set to 'nobody'.
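A possible fix, sketched under the assumption that the missing execute bit on occ is the problem (the container name 'Nextcloud' is taken from the command above; the correct run-as user varies by image):

```shell
# occ is a PHP script, so it can be run through the interpreter even
# when its execute bit is missing:
docker exec Nextcloud php /var/www/html/occ preview:generate-all -vvv

# Alternatively, restore the execute bit inside the container:
docker exec Nextcloud chmod +x /var/www/html/occ
```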

  2. How do I stop a copy operation? I had started a several-terabyte copy in Midnight Commander from my array to an unassigned device, but it is painfully slow transferring to an NTFS disk. Midnight Commander no longer shows the operation in its interface. I tried unmounting the disk and stopping all UD services, but neither worked.
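A sketch of how such a transfer can be stopped from a shell; the process names are assumptions, so verify the PID before killing anything:

```shell
# Look for mc itself and for any cp/rsync it may have spawned
# (the bracketed first letter keeps grep from matching itself):
ps aux | grep -E '[m]c|[c]p |[r]sync' || true

# Then stop the offending PID: SIGTERM first, SIGKILL only if it
# refuses to exit.
#   kill <PID>
#   kill -9 <PID>
```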

  3. I can't get directories to load if they contain a large number of files (~4000). A Google search suggested that the flag "--disable-type-detection-by-header" may solve my problem, but I do not know how to apply it to this Docker container. Adding it as-is to the 'extra parameters' field did not work for me.

     

    Could anyone help me with the syntax, or where to put this? Thank you
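One hedged reading of why 'extra parameters' did not work: that field feeds options to `docker run` itself, while a flag meant for the program inside the container has to come after the image name (Unraid templates call this the "Post Arguments" field). The image and container names below are hypothetical:

```shell
# Options before the image name configure docker run; anything after
# the image name is passed to the application inside the container:
docker run -d --name=myapp example/image:latest --disable-type-detection-by-header
```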

     

     

  4. I've noticed that all of my files have the 'albumtype' field separated into individual entries for each letter (albumtype: a, albumtype: l, et cetera), creating many entries for something like 'album; compilation'.

     

    A google search yielded this:  https://github.com/beetbox/beets/pull/4582#issuecomment-1445023493

     

    After running 'beet write' on an album, the files still show the same issue, even though beets reported writing the corrected tags (I had tested with --pretend first).

     

    My database appears to be good; it is the files themselves that carry the bizarre tagging. A pretend 'beet update' shows that it would write the bogus tags back into my database.

     

    Perhaps I don't understand the directions in the fix linked above, or is the issue that this docker container does not include the amended code?  Thanks for any help

     

    [screenshot attached]

  5. 4 minutes ago, trurl said:

     

    You should start by adding another parity.

    Ok. And I can build both at the same time? Remove the one disk, add two new larger drives as dual parity, and then, as long as I do no writes during the process, the old parity remains correct?

     

    Forgive me as I resubmit an earlier question: for copy and move operations, is there a way to do multiple operations simultaneously?  Disk 1 to disk 2 at the same time as disk 3 to disk 4 etc.

     

    Thank you for all of the help.

  6. 19 minutes ago, trurl said:

    Replace/rebuild will probably be quicker and just as safe if you keep the original disks until finished.

    Wouldn't rebuilding parity that many times place unnecessary stress on the drives? I should also note that I am consolidating onto fewer drives than are presently installed, going from 18 data disks to 12. So I would have to perform copy or move operations anyway, and then shrink the array by zeroing the drives to be removed?

  7. I am upgrading all of the drives in my server to larger capacity disks, and am planning on using unassigned devices to speed up this process, but I would like to confirm with someone more sophisticated that my plan will work and not be a complete waste of time.

     

    I have 19 disks in the array and 5 open slots.  I connect 5 new disks to Unassigned Devices, preclear (to test them out), then copy (not move) the data from my array to the unassigned devices.  I then remove the 5 unassigned disks and connect 5 more, etc.  At the end, I remove my old array and connect all of the new disks I populated with data and create a new config, letting it build parity.

     

    This way I never lose parity on the old array, and if something happens during the parity build for the new config I can just put the old config back in and mark parity as correct.

     

    Is what I just laid out accurate?

     

    Second question: is there a way to perform multiple copy operations simultaneously?  Meaning disk 1 copies to disk 2 while disk 3 copies to disk 4, etc.?  I don't think I can do this in Midnight Commander, which is what I normally use.
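For the parallel-copy question, one common approach outside Midnight Commander is a backgrounded rsync per disk pair; a sketch with hypothetical mount points:

```shell
# Each rsync handles one source/destination pair; backgrounding them
# with & lets the pairs copy simultaneously:
rsync -a /mnt/disk1/ /mnt/disks/new1/ &
rsync -a /mnt/disk3/ /mnt/disks/new2/ &
wait   # return only after every background copy has finished
```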

     

    Thank you!

     

     

  8. I went to reboot after downloading 6.5, and then the GUI disappeared and I could not connect. I connected a monitor and saw a kernel panic. Through a search here I did some troubleshooting, including checking on Windows whether the flash drive was OK; there was a problem with it and it needed to be repaired. I can see the files on Windows, and while I am not perfectly familiar with what should be there, it does look alright. Booting with the repaired flash drive has failed, including in safe mode. Through the CA plugin I have a backup of the flash drive on my array. I removed that disk and attempted to access it on my PC, but I can't get Windows to recognize it.

     

    Should I be able to access whatever file system it is on Windows? Is there a different way to access the backup? If I can't access it, how should I proceed? Thank you for your help.
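A hedged explanation for Windows not recognizing the disk: Unraid array disks are formatted with XFS, BTRFS, or ReiserFS, none of which Windows reads natively. From a Linux live environment the partition can be identified and mounted read-only (the device name below is a placeholder; check the lsblk output first):

```shell
# List block devices and their filesystems to find the data partition:
lsblk -f

# Then mount it read-only, substituting the real device:
#   mount -o ro /dev/sdX1 /mnt/recovery
```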

  10. I have been having problems with timeouts, and now I cannot even retrieve the torrent list. I get either

     

    Bad response from server: (500 [error,list]) or

     

    No connection to rTorrent. Check if it is really running. Check $scgi_port and $scgi_host settings in config.php and scgi_port in rTorrent configuration file.

     

    I am assuming this has to do with having many loaded torrents.  Is there anything I can change in the settings to alleviate the problem?  Thanks.
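A hedged suggestion: with a very large torrent list these errors are frequently timeout-related. Raising the PHP and web-server limits inside the container sometimes helps; the values below are common defaults to increase, and the exact file locations depend on the image:

```
; php.ini used by the ruTorrent web UI
max_execution_time = 300
memory_limit = 512M

# nginx server block serving ruTorrent
fastcgi_read_timeout 300;
```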

  11. Ok, thank you. I tried that without success, but in the process I figured out that my port was closed, and opening/mapping it appears to have solved all my issues. I take it I was not supposed to use my old port-mapping configuration that is saved as a docker template?

     

    Should I start over without the template or is it fine now that I opened the port?  Thanks

  12. I just updated the container and now am getting this 502 Bad Gateway error:

     

    [16.08.2016 06:05:36] Bad response from server: (502 [error,getplugins]) Bad Gateway

    [16.08.2016 06:05:36] Bad response from server: (502 [error,getuisettings]) <html> <head><title>502 Bad Gateway</title></head> <body bgcolor="white"> <center><h1>502 Bad Gateway</h1></center> <hr><center>nginx/1.10.1</center> </body> </html>

     

    I have tried restarting a couple of times; no change.

  13. How did you copy the files? I'm not an expert on symlinks (read that as: I know nothing), but Plex makes very extensive use of them, and I suppose they could have been resolved to actual files.

     

    I used Midnight Commander. Could it also be that my previous SSD was not counting "free" space in the docker image and VMs, even if it was allocated? Perhaps that is the size difference I am seeing.

     

    I just know when I check the properties on a folder with many small files it will have a "size on disk" that is significantly larger than the sum of the files, so I figured transferring between the filesystems may have changed the size allocated to them.
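The two notions of size can be compared directly from the shell: `du` reports allocated blocks by default and the plain byte sum with --apparent-size. A self-contained demo:

```shell
# Create a small directory so both measurements can be compared:
DIR=$(mktemp -d)
printf 'hello' > "$DIR/file"

du -s "$DIR"                   # allocated (on-disk) size, in 1K blocks
du -s --apparent-size "$DIR"   # plain sum of file lengths

rm -rf "$DIR"
```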

  14. I'd like to set things up so that all intermediate downloads are done on the cache drive, while the mass of completed downloads seed from a data drive.

     

    1)  If I set it up to move completed downloads directly to an entirely separate drive (not using the cache), will the move complete even while Deluge is attempting to seed?

    2)  Would it be better to move completed downloads to a share that utilizes the cache drive, so that the mover eventually takes care of them when the torrent is no longer seeding?

    3)  In setting up something like this, what is the risk that Deluge will not see my files and begin downloading them all over again?

     

    Anyway, if anyone has any suggestions for how they are running their setup to accomplish something similar I would like to hear about it.

  15. The Node 804 is currently on sale at Newegg for $70, so if that is the case you want, I would recommend buying it today.

     

    The 804 allows for full-size PSUs, so you can cut some cost there instead of getting the SFX one. While I am not the best person to speak on this, I don't think ECC memory is really necessary and a lot of people use non-ECC, so you could save a bit there.

     

    Also note that you will need a parity drive, so if you buy 2 4TB drives, one will be the parity drive and you will have 4TB of storage.

     

    I think if you want video output from a VM over VGA/HDMI you may need a dedicated GPU in addition to the onboard graphics, but I am not certain about that.
