apgood

Members

Converted

  • Posts: 130
  • Gender: Male
  • Location: Hong Kong


apgood's Achievements

Apprentice (3/14) · 0 Reputation

  1. I give mine its own static IP address using Docker's networking functionality, and then reserve the same IP address in DHCP. Doing this, I have zero issues with device adoption, etc., because it is seen on the network at its own IP address.
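A sketch of that setup, assuming a macvlan Docker network bridged to the host interface (the network name, subnet, addresses, interface and container image below are all hypothetical examples, not taken from the post):

```shell
# Create a macvlan network so containers get their own LAN addresses.
# "br0", the subnet and the gateway are placeholders for your network.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=br0 lan

# Run the container pinned to a specific LAN IP (example image/IP).
docker run -d --name controller \
  --network lan --ip 192.168.1.50 \
  linuxserver/unifi-controller

# Then reserve 192.168.1.50 for the container's MAC address in the
# router's DHCP settings so nothing else is ever handed that address.
```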
  2. I haven't noticed that. Are you sure you don't have any files open? I don't think so, but it's a good question. Is it best practice to shut down any running VMs manually prior to reboot? Unless I know it's doing something (like downloading), I usually let the shutdown script take care of it. Perhaps something was running in the background? I normally shut VMs down manually, as I've had the shutdown process hang a few times when I forgot to do that first.
  3. If you have at least one spare hard drive of 4TB or greater, what you might be able to do is assign it as a data drive in Unraid and then connect the two drives from the Rockstor box using the Unassigned Devices plugin. This assumes Rockstor hasn't been configured to stripe the two 4TB drives and that they can be read as individual drives. The advantage of using Unassigned Devices, if possible, is faster transfer speeds. Once the data has been copied off the first drive, you can assign it as the second data drive and then copy from the remaining BTRFS drive onto it. Then, once the remaining one is cleared, you can use it as a parity drive.
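The copy steps above could look something like this, assuming the old drives are plain (non-striped) btrfs volumes that Unassigned Devices has mounted (all paths are hypothetical examples, not from the post):

```shell
# Copy the first Rockstor drive onto the new Unraid data drive.
# /mnt/disks/rockstor1 is the Unassigned Devices mount point (example).
rsync -avh --progress /mnt/disks/rockstor1/ /mnt/disk1/

# After the first drive is empty, add it to the array as a second data
# drive, then copy the remaining btrfs drive onto it:
rsync -avh --progress /mnt/disks/rockstor2/ /mnt/disk2/

# Once both source drives are cleared, the last one freed can be
# assigned as the parity drive.
```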
  4. Seems to me it isn't a "new feature" introduced in the RC, but rather a change in how it is accessed/activated. It should have been listed in the change log at the point it first became possible to activate it through 'boot/extra.cfg'.
  5. It can be deceptive. The cache drive or docker.img can be full even though the stats report it as fine (I think it's an issue with btrfs, but I'm not sure). In my case I had the Emby Server docker installed, which creates many thousands of tiny files. I think either the metadata space allocation gets used up by all the tiny files even though there is still plenty of data space left, or the files waste a lot of space on disk because they are smaller than the filesystem block size. In my case, dockers and VMs would crash even though it was being reported that I still had 80GB of free space. Long story short, I deleted the Emby Server docker and the associated appdata folder and haven't had an issue since.
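One way to check whether metadata, rather than data, is what's actually full on a btrfs volume (the mount point below is an example; the balance threshold is a commonly suggested starting value, not from the post):

```shell
# Show per-type allocation: a volume can report plenty of free "Data"
# space while the "Metadata" chunks are exhausted.
btrfs filesystem df /mnt/cache

# A metadata-only balance can sometimes consolidate half-empty
# metadata chunks and free space (-musage=50 targets metadata chunks
# that are at most 50% used).
btrfs balance start -musage=50 /mnt/cache
```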
  6. This has been an issue with binhex-delugevpn for a little while now, prior to this beta. It certainly affected me recently on Beta 21, but only recently; it did not occur when I first upgraded to Beta 21, so I assumed it was a problem with Docker Hub rather than Unraid. I've had the same issue with the binhex-delugevpn docker on both Beta 21 and 22, so it is not specific to an Unraid release.
  7. I've seen this error on my system after a nasty motherboard death issue. Anyhow, mine was related to memory timing, and I was able to change some settings and have never seen it again. Even if you haven't changed anything hardware related, I'd still run Memtest to be certain.

     Hi Bungee, thanks for the reply. I've recently run a 24-hour Memtest with no issues, but it is certainly possible my memory may have started to play up. For now I have done the following, just to rule out a few things before taking the home server away for Memtests:
       • Re-flashed the BIOS on my motherboard
       • Moved the PCIe devices to different sockets in case something weird was going on
       • Replaced the power cables and USB cables going to/from my USB 3 controllers
       • Moved my GTX 210 so it is on its own under a single PLX chip
     Having googled the error, people are suggesting it is an Nvidia driver issue under Linux, mostly reported for the host GPU (weirdly, all under the same PCIe port and requester ID). Given the age of the GTX 210, a driver support issue would make sense, but I don't think it's likely because the problem has only just started happening. I think my next steps are to run a Memtest again and alter my system overclock slightly, in case something is not playing nice. It seems too weird that VM 2 would lose all USB input after the lockup, though; I did find a kink in the USB cable and part of it had been crushed, so right now I am not ruling out the cable sending rubbish to the controller.

     When doing the Memtest, just make sure you have the SMP (multiprocessor/core) option enabled. I found that once, when I had issues with VMs and Unraid hanging, the RAM would pass a normal Memtest, but when I tried the multiprocessor option it hung around test number 7, and the problem was fixed by changing out a memory module. The thing is, the default Memtest might not be stressing your system enough to reveal the issue.
  8. I tried to trim the SSD I moved to the array. From what I read, it should be "fstrim [options] device-mountpoint", so "fstrim /mnt/disk5" in my case. I got an error that said something like "discard not supported". I tried the same command on "/mnt/cache" and it worked. It seems that trim on XFS filesystems (which all of my disks are) only works if the disk is mounted with the "-o discard" parameter (source). After stopping the array and mounting the SSD correctly, trim worked. However, it did not help in any way with the issue; I moved the VM from the SSD in the array to another disk (an HDD) with the same result.

     My SSD is formatted with btrfs and I just used the Dynamix SSD TRIM plugin. This seems to have fixed my issue with file copies hanging and failing.
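The sequence described in that post can be sketched as follows (the device name is a placeholder; note that whether fstrim needs the discard mount option depends on the kernel and filesystem, so treat this as the poster's observed behaviour rather than a general rule):

```shell
# Remount the SSD with discard support before trimming, as described
# in the post ("/dev/sdX1" is an example device, not from the post).
mount -o discard /dev/sdX1 /mnt/disk5

# -v prints how many bytes were trimmed; without discard support this
# fails with "the discard operation is not supported".
fstrim -v /mnt/disk5
```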
  9. Not directly related to VMs, but I used to have an issue where copying to my cache drive (an SSD) would start off fast, then slow down, eventually hang, and then fail. I initially thought it only happened when writing from Windows-based PCs and VMs, but after further testing I found it happened with any sort of device (e.g. a USB-attached HDD, a wired gigabit connection, etc.). Writing directly to the array without a cache drive was fine. The issue didn't really manifest unless I copied larger files, e.g. larger than 1 or 2 GB. What I discovered was that after enabling TRIM as a scheduled daily task the issue went away, and since then it has been fine. Anyway, I'm wondering if this could be a similar issue for VM stability: once some SSDs have had data written across the whole drive, performance can take a real nosedive or cause hangs while the controller's algorithms find and clear space to write to, since they don't have automatic garbage collection and need TRIM run on a regular basis to maintain performance. I imagine this is more prevalent where people tend to fill their SSD to or near capacity on a regular basis.
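A scheduled daily trim like the one described could look like this as a cron entry (the schedule, binary path and mount point are hypothetical examples; on Unraid the Dynamix SSD TRIM plugin sets up the equivalent for you):

```shell
# Example crontab entry: trim the cache SSD every night at 03:30.
# m  h  dom mon dow  command
30 3 * * * /sbin/fstrim -v /mnt/cache
```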
  10. I'd think twice about building an Unraid server with one of these cases, as I found drive temps (50-60°C) to be an issue because the airflow around the drives is not good. The case is designed to sit in a cold, air-conditioned server room with powerful (and noisy) cooling fans.
  11. Already tested the RAM, all good. The network card is built into the mobo, but if it were losing connection I would be able to see that in the syslog, and also in my router log. This has happened before: working fine on Samba version XXX; next version of Unraid, new Samba, issues; next version of Unraid, newer Samba, all good. I wish I could see what my media player was doing...

      If by testing RAM you mean a default Memtest, then that might not be enough, as it might not be stressing the RAM sufficiently. Do a Memtest with SMP mode enabled. I had a weird issue where movies were freezing (among other things) and the RAM was passing a normal Memtest. I did an SMP Memtest and it got stuck at the point where multiple CPU cores access the RAM at the same time. I changed the RAM to some sticks that passed a full SMP Memtest and my issues went away.
  12. Try swapping the RAM and/or network card in your Unraid server, as they are other possible culprits.
  13. We use orange and green on our corporate website (http://www.ceosyd.catholic.edu.au/Pages/Home.aspx); it is a bit lighter than your second example but not quite as bright as the first one. I can probably find you the colour codes if you're interested.
  14. In the router, reserve the .0.7 IP address for that MAC address and then change the lease time for IP addresses to whatever the minimum length is (it's normally in seconds). Then wait at least that length of time, and restart your Unraid box or do a release and renew of the IP address from the command line. If you just restart the computers on the network before the IP address lease has expired, the router is likely to just assign the same IP address again.
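The release-and-renew step from the command line could look like this, assuming dhclient manages the interface (the tool and interface name are assumptions; your system may use a different DHCP client):

```shell
# Release the current DHCP lease ("eth0" is an example interface).
dhclient -r eth0

# Request a fresh lease; with the reservation in place, the router
# should now hand out the reserved address.
dhclient eth0
```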
  15. Do a Memtest with SMP enabled; that way the memory subsystem is stressed by multithreaded access requests. If it passes a full test cycle of that, then it is fine. I've had instances where my server was failing under load while the memory passed the default Memtest, and it wasn't until I ran it with SMP enabled that it failed. I changed the RAM and have had no problems since. The problem RAM is now in my HTPC and works fine there.