mbc0

Members
  • Posts: 1118
  • Joined
  • Last visited
  • Days Won: 2

Everything posted by mbc0

  1. I get the same Windows Network Diagnostics details from all Windows machines, telling me (as above) that they can see the unRAID server but cannot access it. Has anyone else got any ideas? Many Thanks
  2. Many Thanks for this. I get this error at the moment (although it worked earlier and told me WORKGROUP was master):

         LANscanner v1.68 - ScottiesTech.Info
         Scanning LAN...
         System error 53 has occurred.
         The network path was not found.
         Press any key to exit...

     If I run Windows Network Diagnostics after manually typing \\UNRAIDSERVER I get the information below. Note that I am able to connect to all other machines & network devices without any issues; just the unRAID server is unreachable.

         Windows Network Diagnostics
         Issues found: The remote device or resource won't accept the connection.
         The device or resource (unraidserver) is not set up to accept connections
         on port "File and printer sharing (SMB)". Contact your network administrator.
         The computer or device you are trying to reach is available, but it doesn't
         support what you're trying to do. This might be a configuration issue or a
         limitation of the device.
         Log File Name: 8F9BAF5C-BC80-42CD-AE92-4EC9003ECE28.Diagnose.0.etl
         Other Networking Configuration and Logs File Name: NetworkConfiguration.cab
         Computer Name: I7QUAD   Windows Version: 10.0   Architecture: x64
         Time: 13 June 2017 22:19:55
         Publisher: Microsoft Windows   Package Version: 4.0
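Before digging further into Samba settings for a "won't accept connections on the SMB port" report like the one above, it can be worth confirming from another Linux machine whether TCP 445 is reachable at all. A minimal sketch using bash's built-in /dev/tcp (the host name comes from the post; bash must be installed):

```shell
#!/bin/sh
# Probe a single TCP port; prints "open" or "closed or unreachable".
check_port() {
    host=$1; port=$2
    # bash's /dev/tcp pseudo-device opens a TCP connection; timeout bounds it.
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port open"
    else
        echo "$host:$port closed or unreachable"
    fi
}

# SMB file sharing runs on TCP 445.
check_port unraidserver 445
```

If this reports open while Windows still refuses to browse the server, the problem is name resolution or SMB configuration rather than basic connectivity.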
  3. Hi, I upgraded my motherboard & CPU and, as far as the unRAID side of things goes, everything is great: the array has started, and VMs, Dockers etc. are all fine, except that the server can no longer be seen by any of the 5 machines on the network. The current elected master is reported as "WORKGROUP is current local master browser" (is this normal? Should it say the name of my server instead?). I have rebooted all machines and added the extra lines

         [global]
         domain master = yes
         preferred master = yes
         os level = 255

     but still nothing. I have attached my diagnostics file if someone could lend a hand please? Many Thanks unraidserver-diagnostics-20170613-0825.zip
  4. Thanks for chiming in, there does not seem to be a solution so I am moving DVBLink back to my Gen8 microserver where it worked perfectly...
  5. fingers crossed you will be able to do something with it :-) Thanks for chiming in
  6. Yes, I would love the docker to work, but that is down to my pixelation issues under heavy CPU usage. I run a VM with just 4GB assigned, purely for DVBLink & CrashPlan, so I use the auto-login; it can stay running forever without any issues until I start a video encode, at which point DVBLink pixelates (even after the encode has finished) and the only way to clear it again is to reboot :-S The docker would probably be the answer, but like you I cannot get the WebUI!
  7. Ah yes, I remember that now! I too have changed the service to run as a user, but if you set your VM to log in automatically (run netplwiz) everything will be automatic!
  8. No activity on this thread since June, the Docker was last updated in June, and your question is unanswered, so this is looking like a dead project :-( I thought I would have a go with it as I am suffering a pixelation issue when the CPU is stressed, but I cannot get the WebUI either.
  9. Hi, not sure if this is the right place to ask, but I have a DVBLink installation running on a Windows 7 VM hosted by my unRAID server. All works perfectly for about a week and then I start getting disconnection issues, resulting in the picture glitching/breaking up. I can restart the DVBLink service or the VM but still have the same problem; the only way I have found to clear it is to reboot the whole server (this has happened about 3 times now). Just fishing for ideas in case anyone else has had the same or similar issues?

         i5 unRAID server, 18GB RAM, 42TB storage
         VM running from a 480GB SSD
         DVBSky dual DVB-S2 PCI-E card

     Many Thanks
  10. Please Don't remove them! It would be a deal breaker for me!
  11. Hi, I notice quite a lot that when I want to access certain folders I have to wait while 2, 3, 4 or sometimes 5 drives spin up individually, watching a blue circle spin round. I know that when I shut down my array, unRAID is able to spin up all my drives simultaneously, so why can it not do the same when file access is requested? Is there a way to do this? Many Thanks
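One hypothetical workaround for the staggered spin-ups described above is to force every array disk awake in one go before browsing: a raw read of a single sector is enough to wake a sleeping drive, and backgrounding the reads makes them spin up concurrently. A minimal sketch (the device names are placeholders, not taken from the post; substitute your own array members):

```shell
#!/bin/sh
# Wake all the given disks at once by reading one sector from each in parallel.
spinup_all() {
    for dev in "$@"; do
        # Background each read so the drives spin up concurrently, not serially.
        dd if="$dev" of=/dev/null bs=512 count=1 2>/dev/null &
    done
    wait    # block until every read has been issued
    echo "spin-up issued for $# device(s)"
}

# Hypothetical device list -- match it to your own array.
spinup_all /dev/sdb /dev/sdc /dev/sdd
```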
  12. Hi, my USB hub is detected by unRAID, but I was wondering if it is possible to pass the whole hub through to a VM, like you would stub a PCI device? Each device I connect to the hub shows up individually to be passed through, but it would be nice if I could get the whole hub. Cheers
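For reference, a USB hub itself has no PCI identity to pass through, but the USB controller the hub is plugged into can be passed through as a PCI device, which takes everything downstream of it (hub included) along. A hedged sketch of the libvirt hostdev entry that would go in the VM's XML (the 03:00.0 address is a placeholder, not from the post; find your controller with `lspci | grep -i usb`):

```xml
<!-- Hypothetical PCI address: replace bus/slot/function with your USB
     controller's address as reported by lspci. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

Note that everything else on that controller disappears from the host, so this only works cleanly if the controller is in its own IOMMU group.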
  13. OK, swapped out the Promise card for a VIA-chipset card; all the errors have gone and the system is flying again! Thanks for your help, fellas.
  14. Hi mate, I disabled the VM to see if that cured the issue, after Googling turned up the same suggestion this morning, but it had no effect. Cheers
  15. Many Thanks for your reply, I will swap out that controller to see if that resolves the issue. I did indeed delete the previous log file, as the UI wouldn't work and the system had ground to a halt with out-of-memory errors; deleting the log file cured that. I am now stuck with the system trying to move 400GB of data, which will take weeks at the speed it is going. How can I safely take down the server whilst the mover is running? Thanks Again
  16. Anyone got any ideas? I still have 360GB to shift before the mover finishes, which will take weeks at this rate!
  17. Hi, I have an issue where my server is running very slowly (the mover is shifting about 1GB every 20-30 minutes). I can see these errors in my log, which appear to relate to a hard disk issue, but the address does not match up to anything in my device list. Can someone throw me a bone please?

         Nov 18 12:05:06 UNRAIDSERVER kernel: DMAR: DRHD: handling fault status reg 3
         Nov 18 12:05:06 UNRAIDSERVER kernel: DMAR: DMAR:[DMA Read] Request device [04:00.0] fault addr ffff0000
         Nov 18 12:05:06 UNRAIDSERVER kernel: DMAR:[fault reason 06] PTE Read access is not set

     (the same three lines repeat continuously through 12:05:11, always for device [04:00.0], with the fault addr cycling through fffe0000, fffdc000, ffff8000, fffec000, fffd8000, ffff4000 and ffff9000)

     unraidserver-diagnostics-20161118-1204.zip
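A quick way to match the DMAR faults above to a piece of hardware is to pull the PCI address out of the log line and feed it to lspci. A minimal sketch (the log line is copied from the post; run the lspci step on the affected server):

```shell
#!/bin/sh
# One DMAR fault line from the log above.
line='Nov 18 12:05:06 UNRAIDSERVER kernel: DMAR: DMAR:[DMA Read] Request device [04:00.0] fault addr ffff0000'

# Extract the bus:device.function token the IOMMU is complaining about.
addr=$(printf '%s\n' "$line" | grep -oE '\[[0-9a-f]{2}:[0-9a-f]{2}\.[0-9]\]' | tr -d '[]')
echo "$addr"    # prints 04:00.0

# On the affected server, identify the card at that address:
#   lspci -s "$addr"
```

In this thread the address resolved to the Promise SATA controller, which is why swapping that card cured the errors.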
  18. Just in case anyone is interested: although I would prefer to use a docker, due to so many problems with black screens & random failures I now run CrashPlan on my Windows 7 VM and it works perfectly, backing up all my shares. Please note this is not meant to be derogatory in any way; I just want to let people know of an alternative that works well.
  19. I think it was probably self-inflicted! My cache drive failed, so just to test the SATA controller/cables I swapped things around. I have not yet physically removed the cache drive to test it externally, but parity has now completed, the new cache disk is in place and all is running well :-)
  20. Ended up cancelling, as I was sure nothing was happening, and tried xfs_repair -L /dev/md5 (the last-resort repair): everything is back up and running and all data seems intact! :-)
  21. Hi, thanks for the reply. The Corsair was the cache drive I was talking about; it also just failed a preclear ("Corsair_Force_3_SSD_11366509000008953378 Pre-read verification failed - Aborted"). I ran xfs_repair -v on drive 5 in maintenance mode and have been watching a screen fill up with dots for hours now. The documentation says anything from a few minutes to half an hour or more; it is a 2TB drive, so I will just leave it running overnight to see what happens.
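Pulling the commands from these posts together: the usual xfs_repair escalation is a no-modify dry run first, then a normal repair, with -L strictly last because zeroing the log discards any unreplayed metadata. A sketch that just prints the sequence for review rather than running it (/dev/md5 is the device named in the posts; run the real commands only with the array started in maintenance mode):

```shell
#!/bin/sh
# Print the repair sequence instead of executing it, so it can be reviewed.
dev=/dev/md5

plan() {
    printf '%s\n' \
        "xfs_repair -n $dev    # 1. dry run: report damage, change nothing" \
        "xfs_repair -v $dev    # 2. verbose repair (the stream of dots is progress, not a hang)" \
        "xfs_repair -L $dev    # 3. LAST RESORT: zeroes the log; unreplayed metadata is lost"
}

plan
```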
  22. It started with my cache disk reporting full, which was covered in another thread ( https://lime-technology.com/forum/index.php?topic=53765.0 ). This is a separate issue caused by removing the cache drive for the reasons in that thread; it looks like the cache drive is now dead, and I will be physically removing it to check later. After a reboot, disk 5 reported a status of unmountable and that is where I am stuck. I have not formatted it; I have just left it in the hope I can recover some or all of the data. Parity started running on its own after a reboot with a checking/rebuilding message, so I thought perhaps it was recovering the data, but it has not. All drives are XFS. Please find attached my diagnostics. Many Thanks unraidserver-diagnostics-20161115-2049.zip
  23. Parity ran for 5 hours but I am still missing a lot of data. For some reason disk 5 changed to unmountable when I was adding further disks, so I obviously upset something in the process. Hopefully the data is still on the drive, but I am looking for recommendations on recovering the files? Many Thanks
  24. Yes, it will be the RAM. I remember the excitement when I built my first server.....until the RAM filled up!