extrobe

Posts posted by extrobe

  1. 2 hours ago, Squid said:

    You should post your diagnostics before you reboot

     

    Thanks Squid - I eventually found the very useful Archived Notifications menu, which gave me some more detail.

    I found...

    Quote

    Cache 2 - SAMSUNG_MZ7WD480HAGM-00003_S16MNYAF107743 (sde) - active 40 C (disk is hot) [NOK]

     

    So it was indeed the temperature setting off the failure. I've upped the WARN threshold to 50°C, so that should sort it out (it's never less than 38/39°C, so it doesn't take much to get it to 40).

  2. I frequently wake up to a status message on my server stating

     

    Quote
    unRAID Status: 15-02-2018 00:20
    Notice [DEMETER] - array health report [FAIL]
    Array has 20 disks (including parity & cache)

     

    However, I can't see what the issue is. Some days it reports fine - it's about 50/50.

     

    Parity is valid, and last checked a few days ago.

    When I run Fix Common Problems, there are no errors or warnings.

     

    The only thing I can think of is that when the mover is running, the cache drive sometimes gets warm (40-42°C) - could this be triggering the FAIL notification?

  3. So, after 3 solid days of trying to get this working, I finally worked out what the underlying issue was - both for being able to 'claim' my original server, for sharing libraries, and for remote connections. My MTU size was defaulting to 1500, but was evidently too high! Changed it to 1484 (well, used MSS clamping instead) and it instantly worked.

    Yikes!
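
    For anyone else who hits this, here's roughly what MSS clamping looks like as an iptables rule (a sketch for a generic Linux router - UniFi kit is normally configured through the controller rather than by hand, so treat the exact placement as an assumption):

    	# Clamp TCP MSS to the path MTU on forwarded connections,
    	# so sessions negotiate a segment size that actually fits
    	iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
    	# or pin it explicitly: MTU 1484 minus 40 bytes of IP+TCP headers = 1444
    	# iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1444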

  4. OK - I could do with a bit of help on this one, if anyone is able.

     

    I can't currently get remote access to work - my hunch is that it's my unRAID network interface setup.

     

    Remote access just won't succeed - I'm using a UniFi router with the modem in bridge mode, so there shouldn't be a double-NAT issue.

     

    In Plex, I've set 'manual port' to 32400 (I've tried letting UPnP take care of it, and I've tried an alternative port - same results).

     

    In Unifi, this port is forwarded to the local IP

     

    If I browse to [public facing ip]:32400/web, the Plex web interface successfully loads - so I think that rules out a forwarding/NAT issue.

     

    I also have (had) another Plex install on another machine, which worked just fine, including remote access. So I reckon it's something local to the docker/server.

     

    I've attached my eth0 setup - it was a bonded connection (bond0) which I hastily turned back to a regular one when I was having a different problem I thought might be network-related, and I wonder whether I've done something funky that's blocking communications.

     

    I'm unsure if the logs can help point towards the issue, but I get a little lost sifting through the logs - not really sure what I'm looking for!
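
    One sanity check worth running (assuming curl is available on the unRAID console; /identity is a lightweight Plex endpoint that, as far as I know, answers without authentication):

    	# from the unRAID console: is the container answering locally?
    	curl -s http://localhost:32400/identity
    	# from outside the LAN (e.g. tethered to a phone): does the forward work?
    	curl -s http://<public-ip>:32400/identity

    If both return a small XML blob with the server's machine identifier, the network path is fine and the problem is somewhere in the Plex/docker setup.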

     

    unraid-net.PNG

    plex2.png

    unifi.PNG

  5. 13 hours ago, trurl said:

    Have you gone there yet?

     

    Yes - but I didn't manage to get a response.

     

    In the end, I backed up the appdata folder, created a fresh Plex install, and copied the Plug-in Support, Media & Metadata folders across. This seemed to sort it out.

     

    Still have a couple of issues I might need a hand with (can't get remote access to work, though that could be an unRAID network config issue, and a problem sharing libraries) - but I'll do a bit more troubleshooting before I pop back!

  6. Still having issues getting into Plex. I can claim / unclaim - but as soon as I claim, the server becomes unavailable.

    - I've tried removing recommended fields from the preferences.xml file

    - I've tried removing the entire preferences.xml file and going through the basic setup process again

    - I've tried removing the docker.img file

    - I've gone ahead with the 6.3.5 --> 6.4.1 unRAID update

     

    Occasionally (depending on what I click on once I've tried claiming), I do get the message...


     

    Quote

     

    No soup for you!

    The server you're trying to access doesn't want to let you in. Make sure you're signed in as a user with access to this server.
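
    (One thing I've since seen suggested - and I haven't verified it against this image, so treat it as an assumption - is to skip the web UI claim entirely and pass a claim token into the container instead:

    	# grab a token from https://plex.tv/claim (it's only valid for a few minutes),
    	# then add it to the docker template as an extra environment variable:
    	#   Config Type: Variable | Name: PLEX_CLAIM | Value: claim-XXXXXXXXXXXXXXXXXXXX
    	# restart the container and it should claim itself on startup

    The claim-XXXX value is a placeholder for your own token.)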


     

     

  7. Hi guys, after a bit of help - thought I'd try here before heading over to the Plex forums.

     

    Just started my server up after 6 months in storage. Everything is fine, but Plex isn't seeing the server. I can fire up [ip]:32400, but I just get taken to one of my other Plex servers, and unPlex doesn't appear in the list.

     

    The only workaround I can find is to go to Devices --> Server, where it does list the unPlex device as being recently active. If I remove it and restart the Plex docker - bam, I can see the server and it prompts me to claim it.

     

    However, as soon as I claim it, it disappears again and I have to repeat the process.

     

    Any thoughts on what I can do to stop the cycle?

     

    Thanks!

  8. There's no existing docker in the CA store for R / RStudio, but there are generic dockers on Docker Hub.

     

    I went through the process of getting this working with unRAID, and wanted to share how I achieved it.

     

    **This is a work in progress!**

    I'll confess to not really knowing what I'm doing, so some bits may not be quite how they should / could be, but this should get you to a working RStudio environment. There are a few bits I either think I need, or know I want, to change - I'll outline these below as well.

     

    Prerequisites

    • Already have the Community Applications plugin installed

     

    Outstanding:

    • Create a pre-defined XML file (including web gui link & icon)
      • Web gui link now added
      • Icon url now added
    • https proxy via letsencrypt/nginx *turns out https is an RStudio Pro-only feature :(
    • Understanding whether the package installation directory should be / needs to be a user-defined directory
      • Some progress made on this

     

    Outline of the process:

    1. Enable Community Applications to search Docker.com
    2. Create the share/directory
    3. Find the right R Studio Docker
    4. Configure the docker template
    5. Access the web UI

     

    Detailed Process

    1. Enable Community Applications to search Docker.com

     

    • From unRAID, select the Settings tab
    • Under Community Applications banner, select General Settings
    • Near the bottom, find the entry stated 'Enable additional search results from dockerHub?'
      • Change to 'Yes'
    • Apply
    • Done

     

         2. Create the share/directory

     

    • Create a share or directory dedicated to your R files / projects. If you have an SSD cache drive, you may want to utilise it, so I would suggest a dedicated share set to use the cache drive only. I used a dedicated share named r.

     

         3. Find the right R Studio Docker

             

    There are a variety of RStudio dockers available. The best ones appear to link back to the Rocker Project. Within the Rocker project there are various images, but I'm using 'tidyverse', which includes the base R code, RStudio, and a good selection of the most popular R libraries already added to the image.

     

    • From the Apps tab, search 'tidyverse'
      • you should get no results back
    • select 'Get more results from DockerHub'
    • locate the docker 'tidyverse' from the author 'rocker'
    • select 'add'

     

         4. Configure the container

     

    You should now be at the container configuration screen. We need to manually map the ports & paths to the container

     

    • Give your docker a name (if you wish)
    • select 'Add another Path, Port or variable'
      • Config Type: Port
      • Name: HTTP Port
      • Container Port: 8787
      • Host Port: 8787
      • Connection Type: TCP
      • Add
    • Repeat above step to add another port
      • Config Type: Port
      • Name: Shiny Port
      • Container Port: 3838
      • Host Port: 3838
      • Connection Type: TCP
      • Add
    • Add another item, but this time a path - this will hold your user-content and workspace session files
      • Config Type: Path
      • Name: Workspace
      • Container Path: /home/rstudio
      • Host Path: /mnt/user/r/ (or whatever share/directory you want to use)
      • Add
    • Add another path item - we will use this space as an install directory for extra libraries you add
      • Config Type: Path
      • Name: Custom Library Install Path
      • Container Path: /usr/local/lib/R/custom-library
      • Host Path: /mnt/user/appdata/rstudio
      • Add
    • At the top-right of the docker add screen, there's a toggle switch that says 'Basic View'.
      • Click this to go to advanced view
    • Where it says 'WebUI', enter http://[IP]:[PORT:8787]/
    • Where it says 'Icon URL', enter https://cdn.rawgit.com/extrobe/un-r/c7b98d12499aef04180a6bd4f18c77dd2c1155bb/rstudio-icon-s.png

     

    • Apply

     

    This should now pull down and install the container (a rough docker run equivalent is sketched below for reference).
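
    (A sketch only - the unRAID template adds flags of its own, so this isn't exactly what unRAID executes - but the template above amounts to the same port and path mappings as:

    	docker run -d --name=rstudio \
    	  -p 8787:8787 -p 3838:3838 \
    	  -v /mnt/user/r/:/home/rstudio \
    	  -v /mnt/user/appdata/rstudio:/usr/local/lib/R/custom-library \
    	  rocker/tidyverse

    )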

     

         5. Access the web UI

     

    • Browse to http://<yourip>:8787
    • Log in using rstudio/rstudio
    • Run the R command .libPaths( c( "/usr/local/lib/R/custom-library", .libPaths() ) )
      • This adds your custom folder to the list of library directories, and sets it as the default. You may want/need to keep this to run as part of your regular R code to ensure the additional libraries are always available.
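
    Rather than running that by hand each session, you can drop it into an .Rprofile in the workspace so R executes it at startup (standard R behaviour; since /home/rstudio is mapped to the share, the file ends up at /mnt/user/r/.Rprofile):

    	# /home/rstudio/.Rprofile - sourced automatically when an R session starts
    	.libPaths( c( "/usr/local/lib/R/custom-library", .libPaths() ) )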

     

    There you have it. Your workspace files should all save into your /mnt/user/r share.

     

    I assume the directory where the package files get saved / installed should be a mapped directory as well. I tried setting this by mapping /usr/local/lib/R to the appdata folder, but this broke it. I'll have another look at it at a later date!

    That said, I've installed a couple of extra packages and restarted the server, and so far they've persisted.

     

    The underlying Dockerfile does not appear to support installing the config/library files to a custom location. Therefore, any packages we add and any settings we change will not persist (they won't always get overwritten, but it can and does happen, e.g. when a new image is released). We get around the library/package install location by adding a custom library directory, as above; an untested alternative is sketched below.
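
    The alternative I have in mind (an assumption I haven't tested - RStudio Server sessions don't always inherit container environment variables): R also honours the R_LIBS_USER environment variable, so adding one more template entry might achieve the same thing without the .libPaths() call:

      • Config Type: Variable
      • Name: R_LIBS_USER
      • Value: /usr/local/lib/R/custom-library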

  9. 1 minute ago, Diggewuff said:

    I'd rather use an older, working version of software than use the newest version only working with a workaround, which is not recommended by the developer and may not work anymore when the next update comes.

     

    Wasn't particularly suggesting it as a workaround - just trying to offer some perspective that whatever caused the issue is reversible, as it sorted itself out for me without having to roll back to an earlier version.

     

    (PS: I'd have also updated the config.php file to add the new IP address to the trusted domains. Those two tweaks are the only things I changed in either docker.)

  10. For what it's worth, I was having the same issue.

    Coincidentally, I was going through the process of changing the IP address (and subnet) for the server, and updating the dockers etc. accordingly.

     

    After I updated the nginx conf file for Nextcloud and restarted the letsencrypt docker, everything started working again - and this has continued to be the case since updating both the LE & NC dockers as well.

     

    Edit: I also updated the config.php file in Nextcloud to add the new IP to the trusted domains list - the relevant bit looks something like the snippet below.
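
    (Values here are examples - substitute your own domain/IP:

    	'trusted_domains' =>
    	  array (
    	    0 => 'mydomain.co.uk',
    	    1 => '192.168.1.10',
    	  ),

    )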

  11. Funnily enough, I've just upgraded too, and I'm also having issues getting it working again.

    Slightly different errors from yours, mind you - mine appears to start fine, but when I log in I get:

     

     [24/May/2017:22:32:04] ENGINE Error in HTTPServer.tick
    Traceback (most recent call last):
      File "/opt/sabnzbd/cherrypy/wsgiserver/__init__.py", line 2024, in start
        self.tick()
      File "/opt/sabnzbd/cherrypy/wsgiserver/__init__.py", line 2091, in tick
        s, ssl_env = self.ssl_adapter.wrap(s)
      File "/opt/sabnzbd/cherrypy/wsgiserver/ssl_builtin.py", line 67, in wrap
        server_side=True)
      File "/usr/lib/python2.7/ssl.py", line 363, in wrap_socket
        _context=self)
      File "/usr/lib/python2.7/ssl.py", line 611, in __init__
        self.do_handshake()
      File "/usr/lib/python2.7/ssl.py", line 840, in do_handshake
        self._sslobj.do_handshake()
    error: [Errno 0] Error

    If I go into the config, it's set to listen on port 8080 - even though it's actually on port 8085 - and changing this back to 8085 doesn't fix it either. I also tried unticking https (as I don't use it), but this a) didn't change anything, and b) didn't seem to save the setting anyway.

     

    If I look at the logs, it's the same errors as above - no other errors, but various references to starting up the server on port 8090.

     

    On my config page (within SABnzbd), it states the parameters are:

    /opt/sabnzbd/SABnzbd.py --config-file /config --server 0.0.0.0:8080 --https 8090
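
    For anyone else digging, the relevant keys live in sabnzbd.ini in the config folder (a sketch, assuming the standard layout - and stop the container before editing, since SABnzbd writes its settings back out on shutdown):

    	[misc]
    	port = 8085
    	https_port = 8090
    	enable_https = 0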

     

  12. 18 minutes ago, CHBMB said:

    That's not the recommended way to run Nextcloud.  See the guide on our website.

     

    Just going through that now - it was my original preference to use dedicatedsubdomain.mydomain.co.uk; it's only that all the guides I found seemed to use this /nextcloud method.

     

    Is there any reason I should go back and redo the original MariaDB & NC setup? It works fine in itself - I think it was your tutorial on linuxserver.io I followed in the first place (I just didn't go as far as getting Apache working at the time).

  13. Hi, I'm after a little bit of help on the config side - but happy to go to a more specific NC group if I'm better off asking there...

     

    I've been trying to get SSL / HTTPS access working, so I can access via oc.mydomain.com/nextcloud.

     

    I'm using the letsencrypt docker, and have been broadly following this walkthrough.

     

    As it stands, I've got HTTPS working, and I can browse to https://oc.mydomain.com/nextcloud and it connects fine

     

    However, the desktop client now can't connect - on either the original internal address or the new https address.

     

    oc.png

     

    The only address I can get to work in the sync client is 

    https://192.168.0.208:444/nextcloud

     

    The error suggests it's looking for the folder/file /owncloud/status.php. What's a little odd is that, whilst I understand NC is a fork of OC, I can't see that owncloud folder anywhere myself.

     

    My nginx config (from the letsencrypt docker) is ...

     

    	location /nextcloud {
    		include /config/nginx/proxy.conf;
    		proxy_pass https://192.168.0.208:444/nextcloud;
    	}

    and in nextcloud\nginx\site-confs\default, I changed the install location (as per the walkthrough):

      # Path to the root of your installation
      #root /config/www/nextcloud/;
      root /config/www;

     

    I also updated the config.php file to add the below, but it didn't seem to have any impact before or after I changed it (I'd already added oc.mydomain.com as a trusted domain):

      'trusted_proxies' => ['192.168.0.208'],
      'overwritewebroot' => '/nextcloud',
      'overwrite.cli.url' => '/nextcloud',
      #'overwrite.cli.url' => 'https://192.168.0.208:444',

    Have I missed a step somewhere?

     

    I also tried this - which didn't make any difference either way (I could still access fine via a web browser):

    	location /nextcloud {
    		include /config/nginx/proxy.conf;
    		proxy_pass https://192.168.0.208:444;
    	}

     

    My hunch is that I need to change the 'overwritewebroot' value, but I've tried a few options here with no positive effect - one way to check is sketched below.
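
    (Assuming curl is available - the -k flag skips certificate checks on the internal address. Since the client's error suggests it probes status.php to find the server, comparing what that returns through each route should narrow it down:

    	curl -k https://oc.mydomain.com/nextcloud/status.php
    	curl -k https://192.168.0.208:444/nextcloud/status.php

    If the second returns the JSON status block but the first 404s or redirects somewhere odd, the proxy_pass / overwritewebroot combination is the culprit.)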

  14. Hi, I'm having a spot of trouble getting this working.

    I can get everything set up correctly (I've gone with a mix of bait shares and bait files in existing shares), but as soon as I start the service, I get told I'm under attack and it immediately shuts down the SMB service.

     

    It looks like one of the bait shares is triggering it, but I'm unsure why - I've tried a couple of times now.

     

    If I only use the bait file service, it seems to run fine.

    May 18 13:12:34 DEMETER root: ransomware protection:Creating bait files, root of shares only
    May 18 13:12:34 DEMETER root: ransomware protection:Creating Folder Structure
    May 18 13:12:35 DEMETER root: ransomware protection:Total bait files created: 40
    May 18 13:12:35 DEMETER root: ransomware protection:Starting Background Monitoring Of Bait Files
    May 18 13:12:43 DEMETER root: ransomware protection:Creating Bait Files
    May 18 13:13:25 DEMETER root: ransomware protection:Bait Files Created: 54350 (1294/second) Completed: 16%
    May 18 13:13:59 DEMETER root: ransomware protection:Bait Files Created: 108500 (1427/second) Completed: 33%
    May 18 13:14:37 DEMETER root: ransomware protection:Bait Files Created: 163000 (1429/second) Completed: 50%
    May 18 13:15:13 DEMETER root: ransomware protection:Bait Files Created: 217250 (1448/second) Completed: 66%
    May 18 13:15:47 DEMETER root: ransomware protection:Bait Files Created: 271100 (1473/second) Completed: 83%
    May 18 13:16:25 DEMETER root: ransomware protection:Bait Files Created: 325300 (1465/second) Completed: 100%
    May 18 13:16:26 DEMETER root: ransomware protection:Starting Background Monitoring of Baitshares
    May 18 13:16:51 DEMETER root: ransomware protection:ATTRIB,ISDIR
    May 18 13:16:51 DEMETER root: ransomware protection:..
    May 18 13:16:51 DEMETER root: ransomware protection:Possible Ransomware attack detected on file /mnt/user/xFileBater-fell/
    May 18 13:16:51 DEMETER root: ransomware protection:SMB Status:
    May 18 13:16:51 DEMETER root: ransomware protection:
    May 18 13:16:51 DEMETER root: ransomware protection:Samba version 4.5.7
    May 18 13:16:51 DEMETER root: ransomware protection:PID Username Group Machine Protocol Version Encryption Signing
    May 18 13:16:51 DEMETER root: ransomware protection:----------------------------------------------------------------------------------------------------------------------------------------
    May 18 13:16:51 DEMETER root: ransomware protection:23510 extrobe users 192.168.0.93 (ipv4:192.168.0.93:50187) SMB3_11 - partial(AES-128-CMAC)
    May 18 13:16:51 DEMETER root: ransomware protection:
    May 18 13:16:51 DEMETER root: ransomware protection:Service pid Machine Connected at Encryption Signing
    May 18 13:16:51 DEMETER root: ransomware protection:---------------------------------------------------------------------------------------------
    May 18 13:16:51 DEMETER root: ransomware protection:backups 23510 192.168.0.93 Thu May 18 13:14:53 2017 BST - -
    May 18 13:16:51 DEMETER root: ransomware protection:
    May 18 13:16:51 DEMETER root: ransomware protection:No locked files
    May 18 13:16:51 DEMETER root: ransomware protection:
    May 18 13:16:51 DEMETER root: ransomware protection:Deleting the affected shares
    May 18 13:16:51 DEMETER root: ransomware protection:Deleting /mnt/user/xFileBater-fell/
    May 18 13:17:15 DEMETER root: ransomware protection:Starting Background Monitoring of Baitshares
    May 18 13:18:54 DEMETER root: ransomware protection:Resetting SMB permissions to normal per user selection
    May 18 13:19:52 DEMETER root: ransomware protection:Stopping the ransomware protection service (15743)

     

  15. Managed to find this on Asus's website, and it seems to be the only official word I could find on the matter - but it suggests the second connector is only for very heavy usage scenarios, and one should be fine.

    I'll go with the PSU as-is, I reckon - thanks for the help getting to the bottom of it!

  16. 3 minutes ago, jonathanm said:

    Apparently you can't use the six-pin connector that is meant for directly connecting to graphics cards - it's different. There are supposedly adapters available, though.

     

    Because using the same cable would be too easy ;) 

  17. 4 minutes ago, bman said:

    Nice motherboard!  I got the non-IPMI version for a recent build that I haven't quite gotten around to yet.

     

    In my previous experiences, though, I have learned to connect ALL motherboard power connectors, no matter how many there are. I've had it happen before where I failed to connect one (or thought I wouldn't need that extra 12V feed), only to scratch my head a year later when a new add-on card wouldn't work, or some other silly problem I could have avoided.

     

    Each 12V wire and each circuit trace on the motherboard are designed to carry only so much current.  If there are extra power connectors, it's probably because in some situations the extra current will need to be available to certain slots on the board.  Plug in whichever connectors fit. They're keyed so you can't mess them up, no matter how they're labelled.

     

    Thanks - I only have one 8-pin cable on the PSU. Now I'm a little concerned that I need a different PSU, with some suggestions I've found (nothing all that concrete, mind you) stating that some mobo/CPU combinations need the extra 12V. I'm using a Xeon E5-2603 v3 - so nothing all that demanding.

  18. Feel like a bit of a dunce for asking this, as I've built many computers in my time - but I want to check which cables should be plugged where on my motherboard, as it seems to have an abundance of PSU connectors!

     

    My motherboard (Asus X99-WS/IPMI) has...

    24-pin ATX12V - no problem here

    8-pin ATX12V - I recognise this one as the 4+4 split connector

    8-pin ATX12V1 - appears to be exactly the same as the ATX12V

    6-pin ATX12V_1 - no idea

     

    So what are these extra ones for? My PSU has 2x PCIe power connectors (in 6+2 config), so I wondered whether one (or both) of them was designed to take an additional 12V feed, but the socket looks like a different setup from the PCIe connector.

     

    The only other thought I had was whether ATX12V1 was for a second PSU - but that doesn't explain ATX12V_1.

     

    The PSU is a Seasonic G550. The build won't have any graphics cards, but will have 2x H200s and 12 drives.

     

    Thanks!

     

    psu.png

  19. Do you have a guide on configuring Privoxy (or even on how to get to the web admin page)?

     

    I have it working OK, but I think it's blocking certain sites/content from applications using it as a proxy to the VPN.

     

    Thanks

     

    Edit: OK, I sort of know where I'm looking: config.privoxy.org lets me identify which rules are being triggered.

     

    I've tried adding it to the trusted sites list (trust file), using ~thetvdb.com, but it's still blocked.

    I think I need to do something in the user.action file, but I'm a little lost as to what/where!

     

    Edit2: Right, so using the trust file didn't work because you have to enable it - but it turns out that if you use the trust file, *only* those sites are allowed.

     

    Instead, I edited the config file and removed the default actions file. Probably not the best idea, but it'll do until I work out how to properly whitelist sites!
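
    For the record, I believe the proper whitelist is a section in user.action that switches off blocking for the domain - a sketch based on my reading of the Privoxy docs, which I haven't been back to verify:

    	# user.action - exempt thetvdb.com (and subdomains) from blocking and filtering
    	{ -block -filter }
    	.thetvdb.com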

  20. 4 minutes ago, Sabot said:

    Man, my head is spinning. For a newbie, what are the recommended plug-and-play cards that will work straight out of the box? I don't wish to get into flashing or doing advanced IT. Just want to plug it into my little TS140 server, run unRAID and get the best performance possible from the equipment. :) 

     

    Don't feel bad, Sabot - my head hurts too - still working it all out, and I've basically decided to go back to the drawing board and start again with what I now know!

     

    My understanding is that the reason for IT mode is that you don't want the RAID card itself interfering with the drives (as would normally happen with a RAID card). This is because if you ever need/want to connect the drives to a different controller (e.g. yours stops working), you may end up losing the information on the drives, as the new controller may not be able to read the metadata left by the original one.

     

    So (and I may have this wrong!) your options are...

    - Create a single RAID-0 array for each disk, and live with the risk of the controller failing (I have this setup on my HP MicroServer - though not running unRAID - and it's rock-solid stable)

    - Get a card which natively supports JBOD - I'm not aware of any that are recommended. My Adaptec 51245 supports this - but it isn't a recommended card, and it gets damn hot!

    - Get a standard recommended card, and flash it to IT mode. As it happens, I found an eBay seller who offered to do this for me, so that might be worth looking into.

  21. Managed to get a couple of H200s for about £100 for the pair - didn't want to chance it with the Chinese imports, plus the seller will flash them to P20 IT mode for me.

     

    Also having major second thoughts on this current 2U Chenbro server - it's huge, with nowhere to put it (without buying a rack), and the noise is deafening at times.

    Now I know I like using unRAID, and as I've already invested in the drives, I might go whole-hog and buy a Lian Li 20-bay monster (D8000B). Not cheap, but it'd be easier to keep about the house, and it gives much better cooling (and therefore noise!) options.
