dgirard

Members
  • Content Count

    31
  • Joined

  • Last visited

Community Reputation

0 Neutral

About dgirard

  • Rank
    Newbie

Converted

  • Gender
    Undisclosed
  1. I'm seeing this now on a brand new (to Unraid) system running 6.8.3. It's a trial key with only 2 drives and no additional setup--no VMs, no Docker setup, not even any plugins installed or any data/shares loaded. So this is likely more basic than suggested above. Note, I don't have this issue on my primary production system, so maybe it's CPU-generation based or some other hardware interaction. FWIW, the old system is 2x AMD 2431 on a Supermicro H8DM8-2, the new system is 1x AMD 6274 on a Supermicro H8DG6.
  2. Update: Appears to be related to the Floppy Drive that's detected (even though I don't have one). I updated ScanControllers to skip it and it gets through scanning. David
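     In case anyone else hits this before the fix lands, a rough sketch of how to confirm and suppress the phantom floppy device (the blacklist file name here is just an example; on Unraid the change would likely need to be reapplied from the go file since /etc doesn't persist across reboots):
       # Check whether the kernel registered a floppy block device even though none is installed:
       ls /sys/block/ | grep '^fd'
       # If fd0 shows up, unloading and blacklisting the module keeps it out of controller scans:
       modprobe -r floppy
       echo "blacklist floppy" >> /etc/modprobe.d/blacklist-floppy.conf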
  3. Hello! I'm having a problem similar to interwebtech. The web interface never gets past "scanning hard drives". When I look at the Docker log (icon on the right in Unraid), I see several Java errors; here's the first one:
     lucee.runtime.exp.ApplicationException: Error invoking external process
       at lucee.runtime.tag.Execute.doEndTag(Execute.java:258)
       at scancontrollers_cfm$cf.call_000046(/ScanControllers.cfm:456)
       at scancontrollers_cfm$cf.call(/ScanControllers.cfm:455)
       at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:933)
       at lucee.runtime.Pag
  4. Just add hard drives and an Unraid license/flash drive. Asking $250. I can meet up anywhere in the Metro Detroit area (within 50 miles of zip 48111), or anywhere along the I-75 corridor between Detroit and Cincinnati (I make that drive at least once a month for work). This is a used setup. If I recall correctly, one of the drive bays didn't work. I think it was a backplane or cabling problem. It's obvious in that it just doesn't work...so consider this to be a 19-drive system, or take some time and try to fix what's wrong (probably something simple, but I
  5. Hello all. I had previously set up my VMs via the GUI and had edited the XML to set custom port numbers for VNC (the objective is to have consistent port numbers for specific VMs instead of them being assigned in the order that the VMs are started). I also need to have a password on some of the VMs. This was working fine until the last Unraid update... Now it seems that I can set a password in the GUI, but still need to edit the XML to add the custom port number for VNC. Unfortunately, as soon as I edit the XML for VNC, it "forgets" the password...so I go back and set the pa
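     For anyone else fighting the same thing, this is roughly the edit involved (a sketch; "MyVM", the port, and the password are just placeholder values). The key point is that the fixed port and the password both live on the same <graphics> element, so a hand edit has to keep both attributes at once:
       # After "virsh edit MyVM", the graphics line ends up looking something like:
       #   <graphics type='vnc' port='5901' autoport='no' passwd='secret' listen='0.0.0.0'/>
       # Quick check of what the running definition actually contains:
       virsh dumpxml MyVM | grep -A1 "<graphics"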
  6. It's the BadCRC and ICRC error flags that specifically indicate corrupted packets, usually from a bad SATA cable. Since you have repeated ICRC error flags, which cause the pauses and resets, and cause the SATA link speed to be slowed down to hopefully improve communications integrity, I suspect you also have an increased UDMA_CRC_Error_Count on the SMART report for that drive. I know you said you replaced the SATA cable, but it doesn't look like a good cable from here. There's still a small chance that it may be a bad power situation instead. Rob: My UDMA_CRC_Error_Count is 2, so
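     For anyone following along, checking that attribute takes one command (a minimal sketch; /dev/sdc is just an example device):
       # UDMA_CRC_Error_Count tallies packets the drive received corrupted; a value that keeps
       # climbing usually points at the cable or connector rather than the drive itself.
       smartctl -A /dev/sdc | grep -i UDMA_CRC_Error_Count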
  7. Rob: My UDMA_CRC_Error_Count is 2, so it does not seem to be CRC errors from the drive's perspective. I'm still happy to try another SATA cable if that still makes sense now that the errors seem to have stopped (I'm going to monitor for a couple of days before confirming) with the NCQ setting change. Also, it's possible that rjscott's problem *is* a cable or power issue, since when I look back at my log, I see a different message before the failed command...I see:
     May 9 23:14:43 Tower kernel: ata16.00: cmd 61/00:50:e0:4f:b1/38:00:03:00:00/40 tag 10 ncq 7340032 out
     May 9 23:14:43 Tower k
  8. One more interesting observation: I have "force NCQ disabled=yes" on the disk configuration screen. Yet it appears (maybe I'm looking at it the wrong way?) that NCQ is still enabled for all my drives, including this cache drive that's having the problems. If I cat /sys/block/sdc/device/queue_depth it reports a value of 31, which indicates NCQ is in play if I understand this correctly (I believe it should report 0 or 1 if NCQ is disabled?). Now, if I change the queue_depth to 1 with echo 1 > /sys/block/sdc/device/queue_depth it appears that my errors with this SSD no longer o
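     For reference, these are the exact commands (sdc is just my cache drive; the sysfs value doesn't survive a reboot, so it would have to be reapplied, e.g. from the go file):
       # Report the current queue depth: 31 means NCQ is active, 1 means it is effectively off.
       cat /sys/block/sdc/device/queue_depth
       # Force the depth to 1 to take NCQ out of the picture for this one drive:
       echo 1 > /sys/block/sdc/device/queue_depth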
  9. Interesting article. Sounds like I need to pull out the Samsung SSD or be faced with performance problems at some point. I do not think this is the cause of our current problem (rjscott's and mine), as I reformatted my SSD and re-copied all the data to it and the errors continued immediately. I'm also not certain it's the SATA cable or power (I'm not ruling it out, however)...I did replace the SATA cable and even changed the SATA port that it was connected to. Power seems stable (it's in a Supermicro 24-drive server with dual power supplies) and I have no other power problems. In additio
  10. binhex: Looks like a similar problem exists with delugevpn...
     2015-05-06 06:04:34,620 DEBG 'setip' stderr output: /home/nobody/setip.sh: line 4: netstat: command not found
     and
     2015-05-06 06:04:34,730 DEBG 'setip' stderr output: /home/nobody/setip.sh: line 4: netstat: command not found
     2015-05-06 06:04:34,733 DEBG 'webui' stderr output: /home/nobody/webui.sh: line 4: netstat: command not found
     2015-05-06 06:04:34,734 DEBG 'setport' stderr output: /home/nobody/setport.sh: line 4: netstat: command not found
     Are we doing something wrong? Or did they change the upstre
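     If it is the same cause, a possible stop-gap until the image is fixed would be to add the missing binary inside the container or switch the scripts to iproute2 (a sketch, assuming the image is Arch-based like the other binhex containers; the container name is just an example):
       # Open a shell in the running container:
       docker exec -it delugevpn /bin/bash
       # Option 1: install net-tools, which provides the netstat binary the scripts call:
       pacman -Sy net-tools
       # Option 2: the same information is available from iproute2 without extra packages:
       ip -4 addr show eth0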
  11. OK, here's the smart report. Looks OK to me, but maybe I'm missing something?
  12. I'll start by apologizing for changing the subject...I didn't realize I was changing the entire thread. Other boards create a sub-subject within a thread if the subject is edited. Strange that I can even edit it. Thanks, Mods, for fixing it...I meant no harm... I was hoping I'd distilled the problem down to these errors. This is even more confusing now...if IOMMU is an Intel VT-d error then I must have some real problems, since I have AMD CPUs, and wasn't running any virtualization at the time these occurred. I also only see these errors when writing to my SSD (one out of 19 drives
  13. Hello: After upgrading to beta16 (possibly before, but I didn't notice these errors--I think they're new with beta16)... I'm getting a bunch of errors talking to my cache drive (it's an SSD). The errors in my log look like this: At first I thought maybe it was the firmware on the SSD, so I upgraded that, and while I was in there I replaced the SATA cable to the SSD. I also saw some strange storage behavior, with error messages on some folders when viewed via share0 indicating "wrong exec format"...I suspect those are a side effect of this problem. Since this was my cache drive
  14. OpenVPN! OK, I can confirm that OpenVPN works out-of-the-box with this Arch OS image. (OK, not out-of-the-box, but without anything special other than pacman and configuration…) I don't think there's a need to add it to the Unraid repository, since the one in the Arch repositories works just fine. All I had to do was: pacman -S openvpn and it installed the package and dropped the sample configuration files. I'm using it as a client to connect to "Private Internet Access" (that's the company that provides my anon-internet-access service) and I just followed their g
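     For anyone wanting to replicate it, the whole thing boils down to something like this (a rough sketch; the config path and file name are just examples taken from the provider's zip):
       # Install the client from the Arch repositories:
       pacman -S openvpn
       # Point it at one of the provider's .ovpn/.conf files (plus ca.crt) and bring the tunnel up:
       openvpn --config /etc/openvpn/pia-us-east.conf
       # In another shell, confirm the tunnel interface exists:
       ip addr show tun0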
  15. Is anyone else having trouble with NFS mounts from the ArchVM to Unraid? I'm having a problem where one folder on a share (seems to be my sabtemp folder) becomes inaccessible...it shows ownership and permissions as "? ? ? ? ? ? ? ? ? ? ? ?". It seems to resolve itself after some time (hours?)...but in the meantime SABnzbd returns all kinds of errors and basically either loses the download, or is unable to run the Sick Beard post-process script...leaving me to clean up... It's happening on a regular basis now...daily... Not sure if this is the stale NFS file issue that's been seen in the p
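     One stop-gap I may try while this gets sorted out is simply cycling the mount from the VM side when the folder goes to question marks (a sketch; the export path and mount point are just examples from my setup):
       # Lazy-unmount the stale share, then mount it again:
       umount -l /mnt/unraid/sabtemp
       mount -t nfs tower:/mnt/user/sabtemp /mnt/unraid/sabtemp
       # A shorter attribute cache is also worth trying when mounting, e.g.:
       #   mount -t nfs -o actimeo=3 tower:/mnt/user/sabtemp /mnt/unraid/sabtemp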