SnowNova

Members
  • Posts: 13


SnowNova's Achievements: Noob (1/14) - 0 Reputation

  1. Hi Ljm42, since you have included notifications to Discord, are you able to allow webhooks to Slack too?
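For context on the request above: Slack's incoming-webhook API accepts a plain JSON POST with a "text" field, so a notification hook is a small amount of glue. A minimal sketch of such a call (the webhook URL and message text are illustrative placeholders, not anything from Unraid's notification code):

```shell
# Hypothetical Slack incoming-webhook URL; a real one comes from the
# "Incoming Webhooks" app configuration in your Slack workspace.
WEBHOOK_URL="https://hooks.slack.com/services/T000/B000/XXXX"

# Slack's incoming-webhook API accepts a JSON body with a "text" field.
PAYLOAD='{"text":"Unraid: parity check finished"}'
echo "$PAYLOAD"

# The real call would be:
# curl -sf -X POST -H 'Content-Type: application/json' -d "$PAYLOAD" "$WEBHOOK_URL"
```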
  2. Hi everyone, just thought I would let you know that I am running 6.9.0-rc2 and had an issue tonight with an update that shows as pending for VM Backup: 2020.02.20 (version installed), 2021.02.03 (version update available). I cannot update the plugin at the moment. Thanks, Shaun
  3. I enabled the plugin method, and while it is not perfect, I did hack it into the back of Unraid, so shares work and Docker and VMs now run off it. I would ultimately love native support, and further advocate for it with the tests below.

     The Unraid server is built as per the screenshots. Parity is currently turned off; with it on, this test (creating a 20GB file on the NAS itself) would ultimately be slower.

     ZFS pool: 2x Hitachi 3TB SATA drives, plus 1x Samsung 850 Pro 512GB SSD for the ZIL (logs). Compression set to lz4, sync set to disabled.

     ZFS mount [zfs]:

       dd if=/dev/zero bs=1MB count=20000 of=20gbte
       20000+0 records in
       20000+0 records out
       20000000000 bytes (20 GB, 19 GiB) copied, 8.42902 s, 2.5 GB/s

     Unraid XFS mount with caching enabled (RAID 10 btrfs):

       dd if=/dev/zero bs=1MB count=20000 of=20gbte
       20000+0 records in
       20000+0 records out
       20000000000 bytes (20 GB, 19 GiB) copied, 41.0709 s, 487 MB/s

     I know that in the real world, over a 1GbE network, this would not be an issue, as the network would be slower than the time it takes to create the file. On 10GbE it would have an impact. Thought I would share.
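One caveat on the benchmark above: /dev/zero produces maximally compressible data, so with lz4 enabled the ZFS figure largely measures the compressor, and without a sync flag dd can report RAM-cache speed rather than disk speed. A sketch of a less flattering variant, using incompressible input and a final flush (file name and sizes are illustrative and deliberately small):

```shell
# Incompressible input so lz4 cannot inflate the throughput number;
# conv=fdatasync makes dd flush to disk before reporting the elapsed time.
dd if=/dev/urandom of=ddtest.bin bs=1M count=64 conv=fdatasync

# Confirm the full 64 MiB actually landed on disk.
stat -c %s ddtest.bin   # 67108864 bytes (64 MiB)
rm -f ddtest.bin
```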
  4. Oh ok... well, that could be an issue. Yes, I saw that, and I would love to be able to make use of this 256GB of ECC RAM for L2ARC and the Intel PCIe NVMe for the ZIL, but I do love the Unraid product, and having GPU passthrough is just magic. So I will keep that hardware sitting there until, fingers crossed, @limetech find a way one day.
  5. Surely the company has matured enough to pay some money for licensing. Even if they released the ZFS version with an opt-in additional fee, I would buy it.
  6. I know that this was requested a couple of years ago. I wanted to keep it alive, as I feel ZFS should be an option, RAM requirements or not, as long as Limetech documents the requirements for using that filesystem type. So a +1 from me, and I am open to further discussion. I am also not suggesting it become the primary filesystem, just a base option included with the release.
  7. +1 for iSCSI support. Just keeping this post alive, as I have 15 FreeNAS servers and Corral was a disaster, so I kept them running FreeNAS 9. unRAID would be my ultimate choice if it had iSCSI support, so I could utilise my InfiniBand 40Gb adapters, as we use these with iSCSI targets in our hosting environment. I feel this is important enough to weigh in on, to see whether it is technically possible and commercially sensible to implement in the unRAID software.
  8. Wow... what a headache. The solution was to boot up a Windows 10 CD, run repair, and clean all the drive partitions. I wanted to respond so that others know how to fix this issue should they run into it themselves. The drives were recycled, not new, but every single drive produced that result.
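For future readers: the "clean all the drive partitions" step described above is typically done with diskpart from the Windows recovery command prompt. A sketch of such a session, assuming the recycled drive shows up as disk 1 (the disk number is illustrative; clean wipes the partition table, so double-check against the list disk output first):

```
DISKPART> list disk
DISKPART> select disk 1
DISKPART> clean
DISKPART> exit
```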
  9. Hi John, I am actually a server hosting company; this is hardware we had lying around that we were going to use for other purposes. Not bad to have this quality of hardware lying around, though. I have now replicated the error on Ubuntu 16.04, so I believe this is not an Unraid-only issue. Last night Ubuntu 14.04 could read the disks fine, so I am working through what the issue is, as it may be driver related. The LSI 9207-8i is natively supported in Linux and is one of the most popular LSI cards on the market, besides the previous model. All distributions have natively supported it and shipped drivers for it for 2+ years, so I assumed Unraid would be no different, which is why I reached out for help. I will update this thread with my findings. Thanks, Shaun
  10. Oh no, definitely not - this is bare metal. I am saying that my unRAID 6.1.9 can see the disks but won't add them as usable, or turn blue. I started working through the reasons why, and found that none of the disks connected to the LSI 9207-8i host bus adapter (pass-through controller) could actually be read; when I try, I get the input/output error as shown below. Really looking to see if anyone can offer advice as to why this occurs only in the Unraid OS. Thanks, Shaun
  11. Hi everyone, I am a new purchaser of Unraid and recently set up my beefy rig with it. My machine specs are below:

      * Norco 4RU 24-bay case
      * Supermicro motherboard, model X9DR3/i-F
      * 2x Intel Xeon E5-2670 processors
      * 2x LSI 9207-8i host bus adapters (to pass disks through to Unraid)
      * 16x 3TB Hitachi Enterprise SAS drives on the LSI cards - 4 rows of 4 in the case, with 2 disk rows per controller port (2 ports per controller)
      * 4x 1TB Samsung 850 Pro drives on the onboard SATA controller in AHCI mode
      * 96GB DDR3 ECC RAM

      The problem I am having is that any drive on the LSI cards, even being passed through, gives an input/output error when I attempt to access it with fdisk. The LSI cards are running IT firmware 19, which is the correct firmware for these cards. I do not get this same problem if I boot up Ubuntu or CentOS and access the disks - only Unraid. I have attached the log for your review and comment, and would appreciate any input you have in helping me nail this one. Thanks, Shaun

      syslog.txt.zip
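A quick way to separate "the controller enumerates the disk" from "the disk is actually readable", as in the fdisk symptom above, is a small raw read from each device. A sketch (the /dev/sd? glob is illustrative; on the failing system each dd here would print the input/output error instead of a transfer summary):

```shell
# Try reading the first few MiB of each disk directly. A healthy path
# prints a dd transfer summary; a failing one prints an I/O error.
for d in /dev/sd?; do
  [ -b "$d" ] || continue          # skip if the glob matched nothing
  echo "== $d =="
  dd if="$d" of=/dev/null bs=1M count=4 2>&1 | tail -n 1
done
```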