Yeyo53 (Members), 15 posts

Posts posted by Yeyo53

  1. Hello,

    After having problems with the upgrade from Nextcloud 23 to 26 a couple of months ago, it has stopped working again.

     

    I am on 26.0.1, and Nextcloud is in maintenance mode after a reboot (I rebooted because I couldn't log in).

    In the Nextcloud docker log I see this:

     

    using keys found in /config/keys
    **** The following active confs have different version dates than the samples that are shipped. ****
    **** This may be due to user customization or an update to the samples. ****
    **** You should compare the following files to the samples in the same folder and update them. ****
    **** Use the link at the top of the file to view the changelog. ****
    ┌────────────┬────────────┬────────────────────────────────────────────────────────────────────────┐
    │  old date  │  new date  │ path                                                                   │
    ├────────────┼────────────┼────────────────────────────────────────────────────────────────────────┤
    │ 2018-08-16 │ 2023-04-13 │ /config/nginx/nginx.conf                                               │
    │            │ 2023-04-13 │ /config/nginx/site-confs/default.conf                                  │
    └────────────┴────────────┴────────────────────────────────────────────────────────────────────────┘
    **** The following site-confs have extensions other than .conf ****
    **** This may be due to user customization. ****
    **** You should review the files and rename them to use the .conf extension or remove them. ****
    **** nginx.conf will only include site-confs with the .conf extension. ****

     

    Should I update those files? I don't think I have modified them, and they look pretty similar to the sample files.

    Is this the reason why my Nextcloud has stopped working?

     

    Thanks in advance.

     

    EDIT1: I updated through the terminal and maintenance mode is now off (I'm on 26.0.2), but I get an internal server error when I log in again. Do you know which log I should be checking to get more info? I have been reading that it could be MariaDB or an incompatible app. Btw, the message above about the .conf files is still in the log.

     

    EDIT2: I have checked /config/log/nginx/error.log and found this error:

     

    2023/05/29 13:30:08 [error] 288#288: *5 FastCGI sent in stderr: "PHP message: {"reqId":"JcLdeQjsPRdnVkTf10Ry","level":3,"time":"2023-05-29T11:30:07+00:00","remoteAddr":"47.61.125.213","user":"notarealuser","app":"index","method":"POST","url":"/login","message":"An exception occurred while executing a query: SQLSTATE[42S22]: Column not found: 1054 Unknown column 'user_id' in 'where clause'","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/113.0","version":"26.0.2.1","exception":{"Exception":"OC\\DB\\Exceptions\\DbalException","Message":"An exception occurred while executing a query: SQLSTATE[42S22]: Column not found: 1054 Unknown column 'user_id' in 'where clause'","Code":1054,"Trace":[{"file":"/config/www/nextcloud/lib/private/DB/QueryBuilder/QueryBuilder.php","line":295,"function":"wrap","class":"OC\\DB\\Exceptions\\DbalException","type":"::"},{"file":"/config/www/nextcloud/lib/public/AppFramework/Db/QBMapper.php","line":276,"function":"executeQuery","class":"OC\\DB\\QueryBuilder\\QueryBuilder","type":"->"},{"file":"/config/www/n" while reading response header from upstream, client: 47.61.125.213, server: _, request: "POST /login HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "mynotrealhost.com"

     

    I don't know what to do at this point, to be honest.
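    Since the error points at a missing database column right after an upgrade, one common first step is to let occ repair the schema. A hedged sketch, assuming the container is named `nextcloud` (the linuxserver.io image implied by the /config paths ships an `occ` wrapper; otherwise run `php /config/www/nextcloud/occ` inside the container):

```shell
# Assumption: container is named "nextcloud"; adjust to your setup.
docker exec -it nextcloud occ maintenance:mode --off    # leave maintenance mode
docker exec -it nextcloud occ db:add-missing-columns    # add columns the upgrade may have skipped
docker exec -it nextcloud occ db:add-missing-indices    # same for indices
docker exec -it nextcloud occ maintenance:repair        # generic repair steps
```

    These commands only touch the Nextcloud schema, so they are safe to try before anything more drastic like restoring a database backup.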

  2. 29 minutes ago, mgutt said:

    Buy a new mainboard. You need a C246 or W480 Board with 8 SATA ports. By that you can remove the inefficient sas card and profit from the more efficient hardware (CPU, DDR4, chipset etc).

     

    Yes, I know. But at the moment I don't really want to spend more money; I think the time to recoup the investment would be very long.

     

    29 minutes ago, mgutt said:

    The graphics card is used for a VM?

     

    Yes

     

    31 minutes ago, mgutt said:

    Why? My HDDs are standing still >90% of the day. Sounds like you need a bigger NVMe. Mine has a size of 2TB to avoid using the HDDs even for Nextcloud files.

     

    This is my first Unraid build and I messed up my space distribution/shares. Basically, I need to move one of the disks in the array to an unassigned device, because disk 5 is ONLY used for seeding torrents, but right now that also means parity has to be up, with a lot of reads on both disks. The problem is that I think I need a disk to replace disk 5 before moving it out of the array, so I'm delaying this a bit.

     

    33 minutes ago, mgutt said:

    Of course an additional option. But saves only up to 1.5W per HDD (spun down).

     

    Well, in this case, again, it's not worth the expense IMO.

  3. Wow, I'm surprised how you guys manage to get 20w-30w idle servers.

    I'm going to ask for recommendations on my setup in case I'm missing something.

     

    - 4690K
    - Asus ROG Maximus VII Gene Z96
    - 4x 4GB DDR3
    - 8x HDDs
    - 1x NVMe
    - 1x LSI SAS2008
    - 1x GTX 1060 (idle 7 W)
    - Corsair RM650x

     

    Currently 66 W at idle (I'm running several Docker containers, but 99% of the time they're not doing anything). Only 2 disks are spinning.

    - C-states seem to be working fine

    - I have already run powertop --auto-tune

     

    I can change my 4x 4GB to 2x 8GB, and I have 3 small disks that can be replaced by 1 big HDD. Do you think there is anything more I can do to lower my power consumption?

     

    Thanks in advance.

     

     

  4. Hello,

    I have a GTX 1060 connected to Unraid and a Windows 10 VM that I use from time to time to play some games.

     

    The thing is, on a fresh Unraid boot, the GPU is correctly detected and I can run a script to force the P0 state (power consumption 6 W and fan off), but after a start/stop of the VM, the graphics card is no longer available, so the fan stays ON and, I guess, consumption is much higher.

     

    Is there any way to fix this in 6.10.3? Do I have to restart Unraid every time I turn off the VM to recover the low consumption, or is it possible to somehow restart the NVIDIA driver (or similar) through a script? I only turn on that VM once a month or less, so it is important to me to save some electricity.

     

    Thanks in advance!
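    One approach worth trying (a sketch, not tested on this exact box): after the VM shuts down, rebind the card from vfio-pci back to the NVIDIA driver via sysfs, so the low-power state can be forced again without a full reboot. The PCI address below is a placeholder; check yours with `lspci`.

```shell
#!/bin/bash
# Hypothetical rebind script; 0000:01:00.0 is a placeholder PCI address.
GPU=0000:01:00.0
echo "$GPU" > /sys/bus/pci/drivers/vfio-pci/unbind 2>/dev/null  # detach from vfio-pci
echo "$GPU" > /sys/bus/pci/drivers/nvidia/bind                  # re-attach to nvidia
nvidia-smi -pm 1   # persistence mode keeps the driver loaded so the card can idle down
```

    This could be run from the User Scripts plugin after stopping the VM; whether the driver accepts the rebind without a reset depends on the card and kernel version.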

  5. Hello guys,

     

    Yesterday I posted that I had a problem with my Unraid server because my Docker containers couldn't mount on a read-only filesystem. After checking the log, there was a problem with my BTRFS cache drive, and I read that scrub doesn't fix the files, so on the Unraid forum I read something like "just format to XFS, then format back to BTRFS, and it will be fixed".

     

    Well, guess what, it is fixed, but all my Docker configurations were screwed! It's not a big deal, only a few hours of work, so it could be worse. But a few questions came to my mind:

     

    1) I really thought that the cache drive was erased and reloaded on every system reboot, so my thinking was: OK, I reformat and reboot, and everything should be back again (I thought the config files were stored on my USB). This is not what happened, so I guess I was wrong and the files are always stored on the cache drive, even when the computer is off, right?

    2) Is there any automatic way to create backups of my Unraid cache, just in case this happens again?

    3) Knowing that BTRFS can't be fixed with scrub and requires more complex tools, doesn't it make sense to stick with XFS even for the cache drive?

     

    Thanks in advance for your help.
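    On question 2: Community Applications has backup plugins for exactly this, but the core of any scheme is just archiving the cache paths on a schedule. A minimal sketch with hypothetical paths (/mnt/cache/appdata as the source, an array share as the destination):

```shell
#!/bin/sh
# Minimal cache-backup sketch: tar a source directory into a dated archive.
# The Unraid paths at the bottom are assumptions; adjust to your shares.
backup_dir() {
    src=$1
    dest=$2
    mkdir -p "$dest"
    stamp=$(date +%Y%m%d)
    # -C keeps the archive rooted at the parent of $src
    tar -czf "$dest/appdata-$stamp.tar.gz" -C "$(dirname "$src")" "$(basename "$src")"
}

# Run only if the (hypothetical) source exists, e.g. scheduled via the User Scripts plugin:
if [ -d /mnt/cache/appdata ]; then
    backup_dir /mnt/cache/appdata /mnt/user/backups
fi
```

    Containers should ideally be stopped while the archive is taken, so databases inside appdata are not captured mid-write.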

  6. Hello guys,

     

    There was a blackout last night and today I'm having some trouble with my Unraid server. My OpenVPN container is not working, same as some others, because I'm getting a 'file system read-only' error, and I have only been able to spend an hour or less at home, so, until tomorrow, this is what I have seen:

     

    All my disks are green and with no errors (one of them has some UDMA errors, but it has sat at 50 errors for the last 8 months). I started a parity check with corrections to see if that fixed the problem, but later I read this https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui and maybe that's what I need to do. The question is: do I need to check every hard drive's filesystem? Or should there be something in the log (which log exactly?) that tells me which drive(s) are bad?

     

    It is a complete mess that I can't access my OpenVPN because of this! I'm seriously thinking about setting up a Raspberry Pi just in case this happens again.

    Sorry and thank you for your help!

  7. Nice!

     

    So I can run Unraid headless and also pass through the mouse and keyboard to the VM, right? The only 'drawback' is that I have to start the VM through the server GUI on my phone, as you said.

     

    Because, what happens if I have the VM running all the time, even if it's not in use? Will it lower the performance of the server? I mean, the passed-through cores and the RAM are blocked by the VM and cannot be used by the server while the VM is running?

     

    Last question: if I run Unraid headless, can I pass through the onboard graphics to another VM?

    Thanks for the information!

     

     

  8. Hello guys!

     

    After some months setting up my Unraid server with its multiple Docker containers and hard drives, I created my first VM. It was a Windows 10 VM for testing some stuff, and it works great. I don't have a very powerful server, but it does the job.

    The thing is that, after watching a "Linus Tech Tips" YouTube video where he builds an Unraid machine with 2x 2080s to get 2 gaming setups, I want to try something "similar".

     

    I want to try installing a 1060 6GB I have lying around and use that VM as an option for gaming when I'm at this house. So the question is: when you have a setup like that, do you start your computer so Unraid loads, and then open the VM through the Unraid web GUI? And is that how you work and how you game? Or are there other programs to run the VM more 'transparently'? For example, I was thinking that the Unraid GUI doesn't have any sound; if you pass through an NVIDIA card, will you have sound through that VM?

     

    Thanks in advance!

     

     

  9. Hello

     

    It's my first time setting up an OpenVPN server, so I don't know if this is the correct way of working or if I'm missing something.

    I have followed the Spaceinvaders guide and my VPN is working. I have tested it with my mobile phone using 4G and it works perfectly. The problem was that I was at the office this morning and wanted to test it, but I didn't have the profile, so I tried to access mydomain:943 and it couldn't be reached. I 'opened' the port just in case, and it still didn't work. So:

     

    1) Is the OpenVPN GUI only accessible through the LAN for security reasons?

    2) In that case, should I keep a copy of the profile somewhere in the cloud (my Nextcloud, for example) in case this happens to me again?
    3) In case OpenVPN can be accessed through the WAN, what am I missing? I tried the port in a hurry, but I'm using nginxproxymanager.

    Thanks in advance.

  10. On 7/19/2020 at 10:08 PM, watchmeexplode5 said:

    @Yeyo53

     

    Long post but I'll try to help.

     

    Two questions so I know what your goal is and can try and get you a good setup:

    • Do you want all your files (movies/tv) in your "Plex" share to eventually be moved to your gdrive? Or are you going to keep some files local and some on the cloud?
    • Do you seed files for a LONG time or just to meet ratio/hit-and-run rules (like 7 days seed before you delete it from your torrent client)?

    ------------------------------------------

    ------------------------------------------

     

    With regards to your other questions:

     

    Should I use unassigned device for torrents

    • That's up to you. I would use unassigned device for seeds to avoid parity writes OR use your cache. But that of course would make those files unprotected (likely not an issue for torrents). Consider keeping the seeds on your cache drive if it's big enough and that will avoid excessive spin ups.

    Cache set to YES on "Plex" 

    • Yes you are correct. With cache set to yes, anything written to the "Plex" share will be placed on your cache drive (if there is enough space). When unraid runs the mover script it will write them to the disks on your array.

    mount_rclone mounted in storage

    • One question, what do you mean "Storage"?
    • The above config I posted should place the mount at /mnt/user/mount_rclone. If you go there it will be the contents of your gdrive. Free-space/used-space reported there will likely be wrong. Just know that it's your gdrive mount and don't worry about what space is being reported. 

    Local in Torrent/gdrivedownloads

    • This should be your "local" content that is pending upload to gdrive. The upload script will move things in "local" to your gdrive. So it doesn't make sense to have that set as Torrent/gdrivedownloads, in my opinion.
      • I would keep it /mnt/user/local. That works best with these scripts.
    • To stop excessive spin ups, use your cache drive (if it's ssd) for /mnt/user/local... or you could map it to your unassigned drive if you want. 
    • Your torrent client should be set to download to something like /mount_mergerfs/downloads/"whatever-you-want" (something like /torrents)
      • This setup will ensure you can maintain hardlinks with torrents

    Regarding mount_mergerfs

    • Your mount_mergerfs mount shouldn't be placed in your "Plex" share. The setup I gave you earlier combines your local, rclone, and Plex share into a single place for Sonarr/Radarr/Plex/torrents to use. That way any file located on local, rclone, or "Plex" folders will appear in the mount_mergerfs. So it doesn't matter if the file is local or on gdrive, your programs will always see it in the mergerfs folder. Think of it like shortcuts. The file isn't actually on your mergerfs share but it's "linked" to that so programs can't tell the difference. 

     

    Docker Mapping --> Very important for Torrent Hardlinks!

    • You should be able to map all your docker programs to something like: (Host Path) /mnt/user/mount_mergerfs ---> (Docker Path) /mount_mergerfs 
      • Torrents will download to /mount_mergerfs/downloads/torrent_dl_folder 
      • Sonarr will find them and move it to /mount_mergerfs/tv
      • Plex will scan them and play them from /mount_mergerfs/tv

     

     

    Before you go and move your entire Plex folder to gdrive

    • Test it by taking files from your "Plex/movie/Folder-File" share and moving them to "/mnt/user/local/Movie/Folder-File". Then run your upload script. If you look in your mergerfs share, nothing will have changed because it's all linked together; that's normal! When you look in your mount_rclone folder you will see your new Folder-Files (because they are now on gdrive).

    First of all, thank you for answering. I'm going to try to clarify some points:

    1- Do you want all your files (movies/tv) in your "Plex" share to eventually be moved to your gdrive? Or are you going to keep some files local and some on the cloud? No. I'm keeping two different folders: one for 'special movies' I want to keep local, in /mnt/user/Plex/movies, and another with 'less special' movies, in '/mnt/user/Plex/mount_mergefs'. Then I add these two folders to the same library in Plex, so I see all the movies together.

    2- Do you seed files for a LONG time or just to meet ratio/hit-and-run rules (like 7 days seed before you delete it from your torrent client)? Well, it depends, but it could be a month or two.


    3- That's up to you. I would use unassigned device for seeds to avoid parity writes OR use your cache. But that of course would make those files unprotected (likely not an issue for torrents). Consider keeping the seeds on your cache drive if it's big enough and that will avoid excessive spin ups.
    I don't really care about protection for those files, and my cache drive is 1TB, so it could handle it. I like this idea more than the unassigned device, but IDK. The thing is that if radarr/sonarr move files right after the download completes, from /mnt/user/Torrent/downloads (disk 3) to /mnt/user/Plex/movies (disks 4,5,6,7), it means the files are being seeded from the Plex user share, so those disks are spinning all the time, right? Maybe I should look for an option to tell sonarr/radarr NOT to move the files for a month or something like that?
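    For the seeding-vs-moving problem above, the usual trick is hardlinks rather than delayed moves: if the download folder and the library live on the same filesystem, the "moved" file is just a second name for the same data, so the client keeps seeding from the old path while the library copy costs no extra space. A quick demo with throwaway paths (GNU `stat` assumed):

```shell
# Hardlink demo: two names, one inode, no copy (paths are throwaway).
tmp=$(mktemp -d)
mkdir -p "$tmp/downloads" "$tmp/movies"
echo "movie data" > "$tmp/downloads/Movie.mkv"
ln "$tmp/downloads/Movie.mkv" "$tmp/movies/Movie.mkv"   # the "move" into the library
stat -c %h "$tmp/movies/Movie.mkv"   # prints 2: both paths share one inode
```

    The catch is that hardlinks only work within one filesystem, so on Unraid both paths have to resolve to the same disk or pool; that is why the quoted advice maps everything through a single share.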

    4- Regarding the gdrive folders: I put mount_mergefs in /mnt/user/Plex/mount_mergefs because I have my local folder of movies in the same place (/mnt/user/Plex/movies), so mount_mergefs is the folder inside my Plex user share where the gdrive movies are (both not-yet-uploaded and already uploaded). As you said, this is only a shortcut to the other two folders. In this case, when radarr downloads a movie, it moves it to '/mnt/user/Plex/mount_mergefs', which in reality is my local gdrive folder located in /mnt/user/Torrent/gdrivedownloads plus the RcloneMountShare, which is a link to my gdrive, right? But imagine that there are no new downloads, so all my files are currently on GDRIVE. Now I download a new file (/mnt/user/Torrent/downloads), so disk 3 is spun up. Then it completes and radarr moves it to /mnt/user/Plex/mount_mergefs, so one of the disks behind that user share (disks 4,5,6,7) is spun up. But /mnt/user/Plex/mount_mergefs in reality is a link to /mnt/user/Torrent/gdrivedownloads, so disk 3 again. And then the file is uploaded to gdrive and is now ''virtually'' in /mnt/user/Storage/mount_rclone, so disk 2 is spun up?

    I know, I know. I'm just making things complicated. I really like to keep things organized, so for me it makes sense that the 'gdrive movies, local & not local' (aka mergefs) are ''located'' in the Plex user share. And it also makes sense that my 'gdrive' remote (aka mount_rclone) is in the Storage user share, where my 'personal files' (Nextcloud) are.

    Hope I have clarified something! Thanks in advance!

  11. On 7/11/2020 at 11:05 PM, watchmeexplode5 said:

    @Yeyo53

     

    These are the settings I would recommend for starting out. Mostly default but adapted to work for your Plex mount. Keeping things default also makes initial setup and support easier!


    Using gcrypt pointing to your gdrive

    
    RcloneRemoteName="gcrypt"
    RcloneMountShare="/mnt/user/mount_rclone"
    LocalFilesShare="/mnt/user/local"
    MergerfsMountShare="/mnt/user/mount_mergerfs"
    DockerStart="transmission plex sonarr radarr"
    MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\}
    
    # Add extra paths to mergerfs mount in addition to LocalFilesShare
    LocalFilesShare2="/mnt/user/Plex/"

     

    So your gdrive will be mounted at .../mount_rclone. 

    Your local files are at .../local (to be moved to gdrive on upload)

    I added your /mnt/user/Plex/ folder to the LocalFilesShare2 setting for mergerfs to see.

     

    The merged folder will be at .../mount_mergerfs

    • If you go to .../mount_mergerfs you will have all your paths combined so your gdrive, your ../local, and your /Plex files will all be there. 
    • When you write/move/copy things to .../mount_mergerfs it will be written to /mnt/user/local/
    • When you run the upload script. Anything in .../local folder will be uploaded to your gdrive.

     

    So with this configuration you should point Plex/Sonarr/NZBGet to "/mnt/user/mount_mergerfs"

    It will still see your media that's in your /Plex folder because it's added to localfileshare2.

     

    This setup will keep your /Plex folder untouched while you make sure everything works well. If you want to move portions of your /Plex folder to your gdrive. Simply move files from /mnt/user/Plex to /mnt/user/local (or /mnt/user/mount_mergerfs). Then run the upload script. 

     

    On 7/11/2020 at 10:42 PM, watchmeexplode5 said:

     

    @Yeyo53

    Do you plan on moving your local plex files to the cloud? Or keeping some files local and some in the cloud?

     

    To start off, I wouldn't use the rclone cache system if you don't have to. In my tests, I haven't seen any performance gains from it compared to the scripts listed here. 

    I recommend using just a remote pointing to your gdrive and a crypt pointing to that remote.

     

     

    Here is an explanation of the commands that I think you are struggling with:

     

    RcloneMountShare="/mnt/user/mount_rclone" 

    • This is where your gdrive will be mounted on your server. So when you navigate to /mnt/user/mount_rclone you will see the content of your gdrive. In your case it sounds like you will see your two folders, which are "media" and "movies".

     

    LocalFilesShare="/mnt/user/local"

    • This is where local media is placed to be uploaded to the gdrive when you run the upload script. This is where you will have a download folder, a movie folder, a tv folder, or any folder you want.

    MergerfsMountShare="ignore"

    • If you fill this in it will combine your local and gdrive into a single folder. So let's say you set it as /mnt/user/mount_mergerfs.
    • These files do not actually exist at that location but simply appear like they are at this location. Here is a visual example to help  
    
    /mnt/user/
          │
          ├── mount_rclone (Google Drive Mount)
          │      └── movies
          │            └──FILE ON GDRIVE.mkv
          │           
          ├── local
          │     └── movies
          │           └──FILE ON LOCAL.mkv
          │
          └── mount_mergerfs
                └── movies
                      ├──FILE ON GDRIVE.mkv
                      └──FILE ON LOCAL.mkv 

     

    MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\}

    • These are the folders created in your LocalFileShare location. The folders here will be uploaded to gdrive when the uploader script runs (except the download folder. The uploader ignores that folder). 
    • So typically it's best to leave them as the default value. You can always make your own folders there if you want.

     

    Sorry guys! I have been busy this week.

     

    I just got it working! The main problem was my cache remote (I read in a Spanish forum that it was better for Plex, but I have tried with only the crypt and it is working fine, so...).

     

    Anyway, I would like to ask because I think I can optimize it a little bit, but it is hard to explain:

    First of all, I'm pretty new to Unraid, so maybe I am doing things wrong.

     

    - I have one user share called Torrent that uses disk 3 (cache: no) for downloads (first question: should I use an unassigned device for torrent downloads to avoid writing to parity all the time?)
    - Then I have a user share called Plex with 4 HDDs (disks 1,2,5,6) where the movies and TV shows are, and where downloads are moved after they complete (radarr & sonarr do this). Cache is set to YES, so I guess sonarr and radarr move them to the cache drive and then the mover moves them at some point to the final destination. Am I right?
    - Then I have a user share called Storage (disk 4) for Nextcloud.

    So:

    - I have mounted 'mount_rclone' (aka gdrive) in Storage (but there is no real info there, right? I mean, no space is occupied?)
    - I have mounted 'local' in Torrent/gdrivedownloads (which I think I did wrong, because my transmission is downloading to Torrent/download anyway, but keep reading)
    - I have mounted 'mount_mergefs' in the Plex user share.

    So files are downloaded through sonarr & radarr to 1) Torrent/download/incomplete, 2) Torrent/download/complete, and then 3) they are moved to 'Plex/mount_mergefs/gdrivemovies' (but physically they are 'occupying space' in Storage/mount_rclone, aka disk 4).

     

    Is it weird, or does it make sense? What I want to avoid is spinning up disks for nothing, to try to save some power, because Plex is not used until night (but if I'm seeding a completed download with transmission... which has been moved to Plex but ''is'' still at Torrent/Downloads... which disk is active??)
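    On the "which disk is active" question: /mnt/user is a merged view of /mnt/disk1, /mnt/disk2, and so on, so you can always ask which physical disk really holds a file by searching the per-disk mounts. A sketch of the idea with a simulated layout (on a real server the one-liner is just the `find /mnt/disk*/...` form):

```shell
# Simulated /mnt layout: two "disks", each carrying part of a Plex share.
root=$(mktemp -d)
mkdir -p "$root/disk1/Plex/movies" "$root/disk2/Plex/movies"
touch "$root/disk2/Plex/movies/SomeMovie.mkv"   # physically on "disk2"
# Same idea as: find /mnt/disk*/Plex -name 'SomeMovie.mkv' on Unraid
find "$root"/disk*/Plex -name 'SomeMovie.mkv'   # prints only the disk2 path
```

    Running that against the file being seeded would show exactly which disk transmission is keeping awake.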

    SORRY for the long post, but my head is a mess right now! Thanks in advance if you read it!!

  12. Hello!

    I'm trying to mount my gdrive in Unraid, but I'm facing some problems. I'm not a native English speaker, so maybe that's the main problem, haha.

     

    I have created 3 remotes in rclone: one called gdrive, which connects to my gdrive; one called gcache, which points to 'gdrive:media'; and one called gcrypt, which points to 'gcache:'. I think that's OK.

     

    Now I have to create a user script with the rclone_mount script, but I'm getting lost with all the folder settings at the beginning.

     

    I have a user share (called Plex) with 3 disks. I have a folder called 'movies' inside that share, so /mnt/user/Plex/movies. I want to create another folder called gdrivemovies, so /mnt/user/Plex/gdrivemovies. So, in my case:

     

    RcloneRemoteName="gcrypt"

    RcloneMountShare=  ???

    LocalFilesShare="/mnt/user/Plex/gdrivemovies"

    MergerfsMountShare="ignore"

    DockerStart="transmission plex sonarr radarr"

    MountFolders= should these be folders inside my gdrive? I mean, I have two folders: one called 'media' (which I think I need for the cache) and one called 'movies'. Do I need more?

    Thanks in advance, and great work.

     

  13. Hello,

    I have an unlimited Gdrive account that I would like to use with Unraid. I have seen the guide and so on, but I haven't started mounting anything yet because I'm wondering how I'm going to use it.

     

    I'm planning to have enough local space because I like to keep things local, but I was thinking: what if I set up another Plex library on the gdrive with rclone and all that, and put 'less important' things there? Like TV shows and movies I don't mind losing. I have a friend who has his whole library in GDrive, and he says that even 4K plays well.

     

    The only problem, which isn't a big deal but could maybe be solved, is that I would have to have 2 Plex libraries, right? Is there any way to merge GDrive with a user share in Unraid and select which files go to gdrive and which don't? Maybe if I put a folder ('Gdrivemovies') inside my '/movies/' folder (my actual Plex library), Plex will just read it as one?

     

    Maybe some of you have solved this problem or can give me some ideas about it.

    Thanks in advance!
