Jump to content

torch2k

Members
  • Posts

    45
  • Joined

  • Last visited

Posts posted by torch2k

  1. I have reformatted the drives and can now recreate the issue on command:

     

    1. Format SSD and SAS drives as EXT4 using UD

    2. Create new Ubuntu VM with both drives attached (using VirtIO)

     

    -- everything works fine, vm works --

     

    3. Reboot Unraid server

    4. Both drives corrupted and VM will not load

     

    Please see attached logs

    Drives are /dev/sdl1 and /dev/sdu2

     

    I would very much like to find out if this is user error on my part, or if I have faulty hardware, or if UD is broken in some way.

     

    tower-diagnostics-20200128-1553.zip

    vm.xml

  2. Trying to salvage this still. sdu2 appears to be mounted and has the entire Ubuntu drive intact. I can see the entire server filesystem.

     

    sdu1 appears to be corrupted and I believe would be the EFI parition. It won't mount (and previous to this boot, didn't show up on UD)

     

    The VM starts and immediately wants to re-installed Ubuntu. Is there any way (i know, off-topic of UD) for me to configure this to run the VM off of some other EFI parition and still access the server?

     

    Screen Shot 2020-01-28 at 12.38.57 PM.png

  3. Just now, johnnie.black said:

    FS on disk1 appears to fixed, now about sdj, I though you were saying the disk dropped offline during this session, but I believe the disk lost the partition, correct? But in that case it happened before rebooting, and Unraid only stores the logs from current boot, so we can't see what happened.

     

    xfs_repair is just for xfs formatted drives, that one uses ext4, you run a check within UD.

     

     

    The system was running fine this morning. Updated Unraid to 6.8.2 and upon reboot this all happened. sdj was also formatted as ext4.

     

    Ran a check with UD on sdu, getting this (attached). Is it currently wiping out my drive?

     

     

    Screen Shot 2020-01-28 at 12.13.39 PM.png

    Screen Shot 2020-01-28 at 12.15.35 PM.png

    Screen Shot 2020-01-28 at 12.13.30 PM.png

    Screen Shot 2020-01-28 at 12.13.10 PM.png

    Screen Shot 2020-01-28 at 12.12.56 PM.png

  4. 34 minutes ago, johnnie.black said:
    
    Jan 28 10:57:43 Tower kernel: XFS (dm-0): Metadata corruption detected at xfs_buf_ioend+0x4c/0x95 [xfs], xfs_inode block 0x748d58e0 xfs_inode_buf_verify
    Jan 28 10:57:43 Tower kernel: XFS (dm-0): Unmount and run xfs_repair

    dm-0 is disk1

    Thank you. I've repaired disk1, restarted the array. Here's a new set of logs.

     

    tower-diagnostics-20200128-1156.zip

  5. I am very concerned right now as after my VM running on a UD disk crashed and wiped out the other day, I completely rebuilt it and after the 6.8.2 upgrade it has crashed again. I'm completely at a loss. This time, I actually mounted a second 2TB SAS drive to the VM to use for backups, and it has gone "missing" according to UD. The VM is doing the same thing it did last time as well - on startup, Ubuntu wants to run a fresh install. Logs attached. Pls help??

    tower-diagnostics-20200128-1100.zip

    Screen Shot 2020-01-28 at 11.03.40 AM.png

  6. Sorry, should have known better. Attached. Here's what else I've found in the meantime:

     

    - Yesterday morning I added 2 new 12TB WD drives. I replaced the 2 10TB WD drives you see listed in my screenshot with the new ones. Parity is currently rebuilding.

    - I pulled out the SSD I am having an issue with and mounted it on macOS. macOS can't read/repair the disk, but it is labeled as "WD easystore 25FB Media" which is totally bizarre.

     

     

    Screen Shot 2020-01-26 at 10.35.50 AM.png

    tower-diagnostics-20200126-1057.zip

  7. I've run into an issue where my Ubuntu Server VM is no longer mounting/loading the correct partition and I have no idea what/how it happened?

     

    Basically it's booting off of the EFI partition and wants to do a clean install. This is what I see on the Main page on Unraid (attached)

     

    Originally it was working when I had the vDisk location set to "/dev/disk/by-id/ata-SPCCSolidStateDisk_AA000000000000001810". Now that I see two paritions on the main screen, I've tried setting this to "/dev/disk/by-id/ata-SPCCSolidStateDisk_AA000000000000001810-part2", but no luck.

     

    How do I get UD to mount the correct parition?

    Screen Shot 2020-01-26 at 10.14.18 AM.png

  8. Is it possible to access my UD share with a different user/pass combo than I would be using for my main Unraid access?

     

    See attached image for a better explanation.

    - I am connecting to Unraid SMB as user psmith

    - I have a UD named staging shared. A Ubuntu VM is installed on the UD and also has a user psmith

    - I can access the UD over SMB, but I only have access to the psmith home folder. This all makes senes.

    - Ideally, I would like to mount the UD share locally as root (or any user other than psmith, for that matter)

     

     

     

     

    Screen Shot 2020-01-18 at 4.19.53 PM.png

  9. I just upgraded to 6.8.0 and upon reboot was not able to access the server - it was not getting an IP address assigned to it. Tried a few reboot and same issue. Then I removed my GTX760 card and things started working.

     

    Updated Unraid NVidia to 6.8.0, re-installed the card, and now having the same issue again where Unraid is not getting an IP.

     

    Am I looking for help in the right place? hah.

  10. Hi, I'm trying to setup my first custom Docker app for unraid and for some reason, my container gives the following message even though I have it configured in Bridge mode:

    Failed to start remote control: start server failed: listen tcp 192.168.1.184:5572: bind: cannot assign requested address

    I was expecting to see something like 172.17.0.2:5572/TCP > 192.168.1.100:5572. Can someone tell me what I've got wrong here?

  11. I'm wondering if anyone could offer some input on my configuration and help me understand what is causing my load spikes. I'd like to know if I'm just asking my server to do too much at once, or if I might have something improperly configured.

     

    Scenario:

    - 2 torrents downloading in deluge

    - A small queue in SABnzbd, which is capped to 45MB/s in config

    - Downloads saved to SSD cache share

    - A Ubunutu web server VM running, not currently being used

    - Server otherwise dormant

     

    Attached screenshots of top command in terminal, dashboard (specs), list of Dockers. Worth noting that serval of the cores are hitting 100% usage, just didn't capture it in my screenshot.

     

     

     

    Screen Shot 2019-10-13 at 9.28.08 AM.png

    Screen Shot 2019-10-13 at 9.27.45 AM.png

    Screen Shot 2019-10-13 at 9.28.20 AM.png

  12. What is the proper way to run both deluge and sabnzbd over vpn?

     

    I currently have binhex-delugevpn up and running perfectly, but trying to also add binhex-sabnzbdvpn generates config errors. Are you only supposed to use one docker with VPN/privoxy enabled, and somehow point the other one at it?

     

    /usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint binhex-sabnzbdvpn (14648aaad0ff3ea953484e7db4f8b22e6a8cbf9eee1c1e32e2abf2dece4a9aab): Bind for 0.0.0.0:8118 failed: port is already allocated.

     

  13. Hi all, just installed Unraid for the first time and I'm looking at the daunting task of migrating 20TB of data. I found this plugin right away and it looks great, thank you!

     

    I am trying to synchronize two folders, a local folder (Movies), and a remote SMB mount from my old NAS (using Unassigned Devices plugin). As the sync progresses, there are long pauses every couple of minutes it seems where Krusader becomes unresponsive, and Unraid also seems to be unresponsive (atlhough maybe not all the time that this happens?)

     

    I saw an earlier post in this thread that mentioned enabling "fast writes", but I have no idea what that meant or where to find it.

     

    Currently I am not running a parity drive, and I also do not have any cache drives setup - was going to wait until all of my transfers are completed to do those things.

     

    Any ideas on what the issue is and how I can correct it? I attached a log file from the docker. Thanks!

    Log for_ binhex-krusader.htm

×
×
  • Create New...