Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array



22 hours ago, stridemat said:

Anyone aware of a simple script to mount and unmount an unassigned device on a schedule? 

Look at the help on the UD page for the commands to mount/unmount a disk.  You could then use the User Scripts plugin to run them on a schedule: one scheduled script for mount, and another for unmount.
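As a sketch, a pair of User Scripts could be built around something like this. The `rc.unassigned` invocation and the device path are assumptions here (check the UD page help for the exact syntax on your version); the script only prints the command rather than running it:

```shell
#!/bin/bash
# Sketch: compose the UD mount/unmount command for a device (dry run).
# The rc.unassigned syntax and device path are assumptions; see the UD page help.
ud_cmd() {
  local action="$1" dev="$2"
  # Print instead of executing, so this is safe to try anywhere.
  echo "/usr/local/sbin/rc.unassigned ${action} ${dev}"
}

ud_cmd mount  /dev/sdk1   # body of the scheduled "mount" script
ud_cmd umount /dev/sdk1   # body of the scheduled "unmount" script
```

Once the printed commands look right for your disk, drop the `echo` and give each script its own cron schedule in User Scripts.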

Just now, dlandon said:

Show a screen shot of the disk on the UD page.

[screenshot]

I changed the port and power cycled; it seems to show now. I clicked 'Change UUID' but didn't get a prompt at all; it just added this to the log:

 

Aug 23 22:50:07 GLaDOS kernel: sd 11:0:0:0: [sdk] Attached SCSI disk
Aug 23 22:50:07 GLaDOS unassigned.devices: Adding disk '/dev/sdk1'...
Aug 23 22:50:07 GLaDOS unassigned.devices: Mount drive command: /sbin/mount -t xfs -o rw,noatime,nodiratime '/dev/sdk1' '/mnt/disks/z1TB'
Aug 23 22:50:07 GLaDOS kernel: XFS (sdk1): Filesystem has duplicate UUID e293ea44-a2ff-4b83-acef-4bf675ed547c - can't mount
Aug 23 22:50:07 GLaDOS unassigned.devices: Mount of '/dev/sdk1' failed: 'mount: /mnt/disks/z1TB: wrong fs type, bad option, bad superblock on /dev/sdk1, missing codepage or helper program, or other error. '
Aug 23 22:50:07 GLaDOS unassigned.devices: Partition 'z1TB' cannot be mounted.
Aug 23 22:50:42 GLaDOS unassigned.devices: Changing disk '/dev/sdk' UUID. Result:
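For reference, the "duplicate UUID" error usually means the disk is a byte-for-byte clone of another mounted filesystem. Assigning the partition a fresh UUID by hand looks roughly like this (device name taken from the log above; the command must only be run with the filesystem unmounted, so this sketch just prints it):

```shell
#!/bin/bash
# Sketch: build the command that assigns a fresh random UUID to an XFS partition.
# Printed rather than executed; run it by hand only on an unmounted filesystem.
uuid_fix_cmd() {
  echo "xfs_admin -U generate $1"
}

uuid_fix_cmd /dev/sdk1   # prints: xfs_admin -U generate /dev/sdk1
# Afterwards, `blkid /dev/sdk1` should show the new UUID.
```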

 

1 minute ago, brent3000 said:

I changed the port and power cycled; it seems to show now. I clicked 'Change UUID' but didn't get a prompt at all; it just added this to the log:

Make sure you have the latest version of UD.  The previous version had an issue with changing the UUID.

 

I wanted a screen shot of the disk on the actual UD page, not the UD settings page.

2 minutes ago, dlandon said:

Make sure you have the latest version of UD.  The previous version had an issue with changing the UUID.

 

I wanted a screen shot of the disk on the actual UD page, not the UD settings page.

Oh sorry, I didn't know which page you meant.

 

I just updated it earlier, as mentioned above, when I saw the spin down error.

 

[screenshot]

 

Did you mean this page?

 

[screenshot]

9 minutes ago, brent3000 said:

I just updated it earlier, as mentioned above, when I saw the spin down error.

This has nothing to do with your issue.  The spin down timer was applied on older versions of Unraid and doesn't apply to 6.10.  Unraid overwrites the timer when it takes over the spin up/down.

 

This is the screen shot I was asking for:

[screenshot]

 

Also post your diagnostics for me to have a look.

2 minutes ago, dlandon said:

Click on the check mark next to the mountpoint and see if the disk can be repaired.  Looks like it may be a disk issue.

Did that; got:

Aug 23 23:19:25 GLaDOS kernel: blk_update_request: I/O error, dev sdk, sector 976762864 op 0x0:(READ) flags 0x0 phys_seg 3 prio class 0

 

Seeing an I/O error, would it be safe to assume a hardware issue with the drive? :/ :( Or can I try a re-format? But I'm guessing if it won't even mount or change the UUID, it's probably not going to allow anything to be done.

17 minutes ago, brent3000 said:

Did that; got:

Aug 23 23:19:25 GLaDOS kernel: blk_update_request: I/O error, dev sdk, sector 976762864 op 0x0:(READ) flags 0x0 phys_seg 3 prio class 0

 

Seeing an I/O error, would it be safe to assume a hardware issue with the drive? :/ :( Or can I try a re-format? But I'm guessing if it won't even mount or change the UUID, it's probably not going to allow anything to be done.

This issue would be better for @JorgeB to look at.  He is the disk wizard.

1 hour ago, brent3000 said:

Did that; got:

Aug 23 23:19:25 GLaDOS kernel: blk_update_request: I/O error, dev sdk, sector 976762864 op 0x0:(READ) flags 0x0 phys_seg 3 prio class 0

The disk has a FAILING_NOW SMART attribute, so it should have been replaced already, but run an extended SMART test to confirm.
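The smartctl commands for that look roughly like this (device name taken from the log above; they need the real hardware, so this sketch only prints them):

```shell
#!/bin/bash
# Sketch: print the smartctl commands for an extended self-test (dry run).
smart_cmds() {
  local dev="$1"
  echo "smartctl -t long $dev"   # starts the extended self-test (takes hours)
  echo "smartctl -a $dev"        # after it finishes, check the self-test log and WHEN_FAILED column
}

smart_cmds /dev/sdk
```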


@dlandon 

 

I've got a strange issue with your fairly awesome plugin (thanks for your work ❤️ )

Maybe it's...

  • my fault (when everything goes wrong, it's usually the user)
  • a UX/UI problem
  • a bug

My goal was to pass through my NVMe (nvme0n1) for bare-metal NVMe performance in my Win10 OS.

The settings are somehow inverted:

[screenshot]

If I turn off (grey out) the "PASSED THROUGH" and "SHOW PARTITIONS" toggles, the NVMe is passed through and the partitions are shown.

[screenshot]

Config of my Win10 OS:

[screenshot]

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='8'>
  <name>03 Windows 10</name>
  <uuid>05e3529e-3925-9896-3d72-2b4ad34649f3</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>20971520</memory>
  <currentMemory unit='KiB'>20971520</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>7</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='6'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='7'/>
    <vcpupin vcpu='3' cpuset='2'/>
    <vcpupin vcpu='4' cpuset='8'/>
    <vcpupin vcpu='5' cpuset='4'/>
    <vcpupin vcpu='6' cpuset='10'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/05e3529e-3925-9896-3d72-2b4ad34649f3_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='2D76A8B352F1'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <ioapic driver='kvm'/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='7' threads='1'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/nvme0n1' index='2'/>
      <backingStore/>
      <target dev='hdc' bus='scsi'/>
      <boot order='1'/>
      <alias name='scsi0-0-0-2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso' index='1'/>
      <backingStore/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <alias name='sata0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <alias name='pci.7'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0x17'/>
      <alias name='pci.8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
    </controller>
    <controller type='pci' index='9' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='9' port='0x18'/>
      <alias name='pci.9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:11:77:9f'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio-net'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-8-03 Windows 10/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='3'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='de'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x2d' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom file='/mnt/user/isos/vbios/2d-00-0.rom'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x24' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev1'/>
      <rom file='/mnt/user/isos/vbios/24-00-0.rom'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x2d' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev2'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x24' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev3'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x2f' slot='0x00' function='0x4'/>
      </source>
      <alias name='hostdev4'/>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x145f'/>
        <product id='0x024b'/>
        <address bus='5' device='3'/>
      </source>
      <alias name='hostdev5'/>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1bcf'/>
        <product id='0x08a0'/>
        <address bus='3' device='5'/>
      </source>
      <alias name='hostdev6'/>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

 

I had to figure this out the hard way. My OS kept booting into the EFI shell because the NVMe was not found in the boot menu.

It cost me two days to figure this out ^^

 

All the best

p0p

 


So, with the latest update I've been unable to mount NFS shares through the web UI.  It either times out or fails to establish RPC unless I add nolock on the command line and mount them manually.

 

The errors in the syslog are:

Aug 24 16:24:51 Nodens rpc.statd[9996]: Version 2.5.4 starting
Aug 24 16:24:51 Nodens rpc.statd[9996]: Flags: TI-RPC
Aug 24 16:24:51 Nodens rpc.statd[9996]: Failed to register (statd, 1, udp): svc_reg() err: RPC: Remote system error - Connection refused
Aug 24 16:24:51 Nodens rpc.statd[9996]: Failed to register (statd, 1, tcp): svc_reg() err: RPC: Remote system error - Connection refused
Aug 24 16:24:51 Nodens rpc.statd[9996]: Failed to register (statd, 1, udp6): svc_reg() err: RPC: Remote system error - Connection refused
Aug 24 16:24:51 Nodens rpc.statd[9996]: Failed to register (statd, 1, tcp6): svc_reg() err: RPC: Remote system error - Connection refused
Aug 24 16:24:51 Nodens rpc.statd[9996]: failed to create RPC listeners, exiting

 

or when I don't get those errors, I get this simple one:

 

Aug 24 16:24:45 Nodens unassigned.devices: Error: shell_exec(/sbin/mount -t nfs -o rw,noacl,noatime,nodiratime,hard,timeo=600,retrans=10 '192.168.0.13:/volume1/All' '/mnt/remotes/192.168.0.13_All' 2>&1) took longer than 10s!
Aug 24 16:24:45 Nodens unassigned.devices: NFS mount failed: 'command timed out'.
Aug 24 16:24:45 Nodens unassigned.devices: Mount of '192.168.0.13:/volume1/All' failed: 'command timed out'.

 

 

I can mount just fine from the command line if I add nolock and remove noatime and nodiratime.  I'm connecting to a Synology running DSM 7 from 6.10.0-rc1.  Rolling back to the previous stable version would still time out if I didn't remove noatime and nodiratime.
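For anyone hitting the same thing, the manual mount described above would look roughly like this, with the server, path, and base options copied from the log, nolock added, and noatime/nodiratime dropped (printed rather than executed, since it needs the remote server):

```shell
#!/bin/bash
# Compose the manual NFS mount from the post above (dry run).
SRC='192.168.0.13:/volume1/All'
DST='/mnt/remotes/192.168.0.13_All'
OPTS='rw,noacl,nolock,hard,timeo=600,retrans=10'   # nolock added; noatime/nodiratime removed

echo "/sbin/mount -t nfs -o $OPTS '$SRC' '$DST'"
```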

11 minutes ago, usunoro said:

So, with the latest update I've been unable to mount NFS shares through the web UI.  It either times out or fails to establish RPC unless I add nolock on the command line and mount them manually.

 

The errors in the syslog are:

Aug 24 16:24:51 Nodens rpc.statd[9996]: Version 2.5.4 starting
Aug 24 16:24:51 Nodens rpc.statd[9996]: Flags: TI-RPC
Aug 24 16:24:51 Nodens rpc.statd[9996]: Failed to register (statd, 1, udp): svc_reg() err: RPC: Remote system error - Connection refused
Aug 24 16:24:51 Nodens rpc.statd[9996]: Failed to register (statd, 1, tcp): svc_reg() err: RPC: Remote system error - Connection refused
Aug 24 16:24:51 Nodens rpc.statd[9996]: Failed to register (statd, 1, udp6): svc_reg() err: RPC: Remote system error - Connection refused
Aug 24 16:24:51 Nodens rpc.statd[9996]: Failed to register (statd, 1, tcp6): svc_reg() err: RPC: Remote system error - Connection refused
Aug 24 16:24:51 Nodens rpc.statd[9996]: failed to create RPC listeners, exiting

 

or when I don't get those errors, I get this simple one:

 

Aug 24 16:24:45 Nodens unassigned.devices: Error: shell_exec(/sbin/mount -t nfs -o rw,noacl,noatime,nodiratime,hard,timeo=600,retrans=10 '192.168.0.13:/volume1/All' '/mnt/remotes/192.168.0.13_All' 2>&1) took longer than 10s!
Aug 24 16:24:45 Nodens unassigned.devices: NFS mount failed: 'command timed out'.
Aug 24 16:24:45 Nodens unassigned.devices: Mount of '192.168.0.13:/volume1/All' failed: 'command timed out'.

 

 

I can mount just fine from the command line if I add nolock and remove noatime and nodiratime.  I'm connecting to a Synology running DSM 7 from 6.10.0-rc1.  Rolling back to the previous stable version would still time out if I didn't remove noatime and nodiratime.

The noatime and nodiratime options were the only changes to the NFS mount, so it looks like they cause a problem on your remote server.

 

Did it mount on the previous version of UD without any problems?

1 minute ago, dlandon said:

The noatime and nodiratime were the only changes to the NFS mount.  Looks like they cause a problem on your remote server.

 

Did it mount on the previous version of UD without any problems?

It did; needing to add nolock only started with 6.10.0-rc1, so that might be a weird bug in RC1.

4 minutes ago, usunoro said:

It did; needing to add nolock only started with 6.10.0-rc1, so that might be a weird bug in RC1.

No, I added the noatime and nodiratime to the latest UD release, so I'm pretty sure that is the issue.  Are you running the latest UD version?

On 8/23/2021 at 11:39 PM, dlandon said:

This issue would be better for @JorgeB to look at.  He is the disk wizard.

 

On 8/24/2021 at 12:27 AM, JorgeB said:

Disk has a failing NOW SMART attribute so it should have been replaced before, but run an extended SMART test to confirm.

 

So I did some more testing on the server and ended up doing a reboot to see if anything changed. SMART ended up showing until I hit the rescan, then went back to blank.

 

I connected it to my Windows PC to see if there was an issue with the unit, and it loaded just fine. I formatted it and copied a bunch of files to it, so I'm not sure why it keeps not showing.

 

Would Unraid not load a drive once it has specific bad blocks, I guess? It is an old drive, but I've been using it as a temp data dump/scratch drive for about a year or so now, so it would be a shame for it to finally wrap up :(


@dlandon I've still been following this thread through email notifications about new posts, but I haven't seen mention of a very minor bug I've encountered. I'm running 6.10 RC1, and when you click the refresh icon in UD, all disks are displayed with their partitions shown, even though all of the disks have the 'Show Partitions' switch set to off. A reload of the tab in the web browser restores it to no partitions shown for the disks that have it disabled. Again, very minor, but I'm sure you know that those of us with OCD can become focused on the smallest of issues. 🤣

1 hour ago, AgentXXL said:

@dlandon I've still been following this thread through email notifications about new posts, but I haven't seen mention of a very minor bug I've encountered. I'm running 6.10 RC1, and when you click the refresh icon in UD, all disks are displayed with their partitions shown, even though all of the disks have the 'Show Partitions' switch set to off. A reload of the tab in the web browser restores it to no partitions shown for the disks that have it disabled. Again, very minor, but I'm sure you know that those of us with OCD can become focused on the smallest of issues. 🤣

I'm not seeing this.  What browser do you use?


Total newbie to Unraid here...just installed last night.

 

I installed this plugin on the recommendation of Space Invader One's videos and it's helped me understand a few things, but I'm running into an issue I can't quite figure out.

 

I currently have Plex Media Server/Sonarr/Radarr/Tautulli on my Mac Pro connected to some RAID5 arrays in a 24-bay chassis. I want to slowly move away from the separate hardware RAID setups to a single, expandable one on Unraid. 

 

I'm trying to move my current configs for the above services to Docker containers on the Unraid box, but until I purchase the drives I need to begin the "musical chairs" of moving data/drives from the Areca card to the Unraid array, I want the Docker versions of PMS and the rest to still be able to access the media files on the Mac Pro.

 

I've added the SMB shares to UD on the Unraid box...and when adding them I can query the list of shares, etc. But when I click the MOUNT button, it errors out:

 

Aug 26 11:47:39 UnRAID unassigned.devices: Mount SMB share '//MACPRO/PrimaryRAID' using SMB default protocol.
Aug 26 11:47:39 UnRAID unassigned.devices: Mount SMB command: /sbin/mount -t 'cifs' -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,credentials='/tmp/unassigned.devices/credentials_PrimaryRAID' '//MACPRO/PrimaryRAID' '/mnt/remotes/MACPRO_PrimaryRAID'
Aug 26 11:47:39 UnRAID kernel: CIFS: Attempting to mount //MACPRO/PrimaryRAID
Aug 26 11:47:40 UnRAID kernel: CIFS: Status code returned 0xc000006d STATUS_LOGON_FAILURE
Aug 26 11:47:40 UnRAID kernel: CIFS: VFS: \\MACPRO Send error in SessSetup = -13
Aug 26 11:47:40 UnRAID kernel: CIFS: VFS: cifs_mount failed w/return code = -13
Aug 26 11:47:40 UnRAID unassigned.devices: Mount of '//MACPRO/PrimaryRAID' failed: 'mount error(13): Permission denied Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg) '.

 

Should there be a credentials file created at the /tmp/unassigned.devices/credentials_PrimaryRAID location?

 

All I see there is this:

root@UnRAID:/tmp/unassigned.devices# ls -lah
total 12K
drwxrwx---  4 root root 140 Aug 26 14:03 ./
drwxrwxrwt 12 root root 260 Aug 26 14:02 ../
-rwxrwx---  1 root root 560 Aug 25 21:00 add-smb-extra*
drwxrwx---  2 root root 120 Aug 25 22:59 config/
-rwxrwx---  1 root root 181 Aug 25 21:00 remove-smb-extra*
drwxrwxrwx  2 root root  40 Aug 25 21:00 scripts/
-rw-rw-rw-  1 root root   1 Aug 25 21:00 smb-settings.conf
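For what it's worth, a mount.cifs credentials file (per mount.cifs(8)) is just key=value lines, and a manual mount outside of UD can be sketched like this to rule out the STATUS_LOGON_FAILURE being a plugin problem. All the values here are placeholders, and the mount command is printed rather than run:

```shell
#!/bin/bash
# Sketch: build a mount.cifs credentials file by hand (placeholder values),
# then print a manual mount command to test outside of UD.
CRED=$(mktemp)
cat > "$CRED" <<'EOF'
username=yourmacuser
password=yourpassword
EOF
chmod 600 "$CRED"   # keep the password out of other users' reach

echo "/sbin/mount -t cifs -o rw,uid=99,gid=100,credentials=$CRED '//MACPRO/PrimaryRAID' '/mnt/remotes/MACPRO_PrimaryRAID'"
```

If the manual mount fails the same way, the credentials or the Mac's SMB sharing settings are the place to look rather than UD.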

 

I'm sure I'm missing something here, but I'm still very new to Unraid. Once I get this solved, I have an Ubuntu box with 27 Docker containers I want to eventually migrate over...but one step at a time.

 

Any advice would be appreciated.

 

Thanks

Ross

