John_M

Members
  • Content Count

    3859
  • Joined

  • Last visited

  • Days Won

    10

John_M last won the day on November 29 2018

John_M had the most liked content!

Community Reputation

247 Very Good

About John_M

  • Rank
    Away for much longer than I expected

Converted

  • Gender
    Male
  • Location
    London

Recent Profile Visitors

1589 profile views
  1. That doesn't look right, as it has two colons. I'd try eleven.local:/mnt/user/server instead.
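A minimal sketch of the point above: an NFS remote is written host:/path, with exactly one colon between the host and the export path. The mount point below is illustrative, not from the original post.

```shell
# An NFS remote export is "host:/path" - exactly one colon separates the
# host from the export path; "host::/path" is malformed.
REMOTE="eleven.local:/mnt/user/server"

# To mount it (requires root; the local mount point is illustrative):
#   mkdir -p /mnt/remote
#   mount -t nfs "$REMOTE" /mnt/remote

# Sanity check: count the colons in the remote spec - there should be one.
colons="${REMOTE//[^:]/}"
echo "${#colons}"
```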
  2. If you want to move the shares from the array to the cache, you need to set them to "Prefer", not "Yes". If you set them to "Yes" they will be moved from the cache to the array.
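For reference, the same setting is stored in the share's config file on the flash drive; the path, filename and key name below are from memory and should be treated as an assumption, not checked fact.

```
# /boot/config/shares/appdata.cfg (illustrative share name)
shareUseCache="prefer"   # "prefer": mover moves files from array to cache
# shareUseCache="yes"    # "yes":    mover moves files from cache to array
```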
  3. To move shares (such as 'system' and 'domain') from the array to the cache you can use the mover, as I described here: Yes, it's described in this thread: That's a problem, as itimpi replied. Virtual disks are allocated sparsely, so a 1 TB vdisk initially occupies only a small amount of physical space and grows as files are written to it. It might actually fit on a 500 GB physical disk at the moment, but problems will arise when it outgrows the space available to it.
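Sparse allocation is easy to see on the command line. A quick sketch (the filename is illustrative): the file's apparent size is 1 GiB, but it consumes almost no disk blocks until data is written into it.

```shell
# Create a sparse file: the size is declared but no data blocks are allocated.
truncate -s 1G vdisk.img
stat -c 'apparent bytes: %s' vdisk.img   # 1073741824 (1 GiB)
stat -c 'blocks in use:  %b' vdisk.img   # near zero until data is written
rm vdisk.img
```

A vdisk image behaves the same way; qemu-img info reports both the virtual size and the actual disk usage of an image file.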
  4. Unless it has been fixed recently (I'm away at the moment and can't check) this is still a problem:
  5. Turn on Help in the GUI and you'll get a lot of extra information.
  6. I didn't say instability. You should read the posts that preceded and followed the one you quoted, from three years ago and 60 pages back, in order to understand what we were discussing, rather than quoting me out of context and then starting an argument.
  7. Because it isn't fully supported by Unraid. You need more than a hot-pluggable drive bay to support hot-swapping, so you might have to forgo some convenience for reliability.
  8. Maybe the BIOS is corrupt. I'd first try resetting the CMOS to see if that helps; if it doesn't, redo the BIOS update. The CPU fan should not stop when you boot an OS! Many consumer BIOSes have user-selectable fan profiles, so maybe something got messed up.
  9. The SEDNA one I linked uses an ASM1062 controller and fits in a x1 slot. From the Amazon listing:
  10. https://www.amazon.co.uk/SEDNA-Express-Adapter-profile-included/dp/B01479NJ98
  11. Insert step 1.5, "Disable docker service in Settings -> Docker", and insert step 3.5, "Enable docker service in Settings -> Docker". There are PCIe adapters that take either one or two mSATA SSDs and fit in a x1 slot. IOCrest and SEDNA make them, if I remember correctly.
  12. Do they show up in your diagnostics, or in response to lspci?
  13. If you post your configuration or your docker run command someone might spot something that isn't quite right.
  14. That depends on the current setting for the Docker vdisk location. If it's currently set to /mnt/user/system/docker/docker.img (the default) then you don't need to change it, but if it specifically references disk1 then you'll need to change it to reference disk3 instead. So, for example, /mnt/disk1/system/docker/docker.img would need to change to /mnt/disk3/system/docker/docker.img. I don't have any suggestions for a x4 controller, but for a x8 one I'd suggest an LSI SAS controller. Alternatively, you can get a two-port SATA controller based on the ASMedia ASM1061 or ASM1062 that fits in a x1 slot. You could then move a disk or two to the new controller and free up a motherboard port or two for SSDs.
  15. First thing is to test the RAM for 24 hours or so. If the RAM is bad, there's no point troubleshooting anything else.