infidel

Members · 12 posts

  1. Not sure! But the SSD is barely being used, and it's getting tempting to make it the data drive instead.
  2. You say "for example"... I have an almost empty SSD I'm using as a cache, and a USB2 flash drive as the ZFS data disk that's getting hit 10X as much as the SSD. Would it be possible to partition the SSD and use it as part data drive, part cache?
  3. I've come back to this problem after the weekend and it seems to come down to this: according to all the advice I can find (remember, I'm new to Unraid), I've been trying to copy files to /mnt/user/[sharename]. For some reason, these map to /mnt/disk1/[sharename], disk1 being the 32GB USB stick I'm using for a data drive. If you map a network drive to \\unraidserver\sharename\ it maps to /mnt/user/[sharename], so any files you copy fill up the USB drive rather than being copied to the pool. When the USB drive is full, all hell breaks loose (see above). I don't know how to change this, whether it's something I've done, or whether it's a bug that needs reporting.
     EDIT: Just saw the message about setting all shares to cache=only. I don't understand how that would work when I don't have a cache pool at the moment (the SSD is in the post). Add it to the pile of things I'm still figuring out in RC2, such as: if I have a ZFS pool with (presumably) parity striped across all drives in the pool, does changing the Allocation method do anything anymore?
  4. I would do, but Unraid is imploding. I made a couple of videos to demonstrate what I'm having to deal with here:
  5. No, none of my share names have spaces
  6. I've created a number of shares using zfs create (due to the problem reported above). I've noticed that they're listed under /mnt/pool/ and /mnt/user/, although it's not clear if using one location over the other makes any difference. I was able to rsync 10TB into the various folders, but since yesterday I've been getting these errors:
     rsync: [receiver] write failed on "/path/file": No space left on device (28)
     rsync error: error in file IO (code 11) at receiver.c(380) [receiver=3.2.7]
     rsync: [sender] write error: Broken pipe (32)
     mkdir: cannot create directory ‘/mnt/user/share/test-directory’: No space left on device
     If I use mkdir /mnt/pool/share/test-directory, it works. If I use mkdir /mnt/user/share/test-directory, it works for some shares and not for others. I've been investigating, and I've found that the shares/datasets that don't give free-space errors also appear in /mnt/disk1/, and the ones that do give errors don't. None of the mounts appear in fstab, so I can't fix it there. I've tried mkdir /mnt/disk1/share && zfs set mountpoint=/mnt/disk1/share disk1/share. Nope. Tried zfs create disk1/share. Nope again. I don't understand what's going on under the hood there (the commands I've been using to compare the two paths are at the end of this list). Another problem is that it's clear from accessing these shares remotely that SMB maps to /mnt/user, because I also get space errors there, making the shares useless. I don't know if it's related to how I created the datasets or not. BTW, there's plenty of free space and no, it's not an inode problem.
  7. Shares: Can't currently create a share that creates a dataset. It fails silently, with no share appearing in the UI and no dataset in the pool. I don't have a cache pool at the moment so I can't test whether that makes a difference. It works if I use zfs create pool/dataset, at which point I can manage the share in the UI.
     ZFS: The Main page seems to always report an unclean system start after using the Reboot button on that page.
     [Edit] Removed problems caused by having to use a USB data drive in an old and abused datacentre server.
  8. OK, that's got me a bit further! The pool is now up and running. After a bit of playing, creating datasets from the Share page seems to fail silently*, but it works if I use zfs create, at which point I can manage it from Shares. Transfer speed is back up to what I'd expect. Thanks for your help! *running RC2 now
  9. Yeah, I had an array but wanted a pool. OK, did that: I unassigned all 8 drives from the array, created an 8-slot pool, and changed the file system type of the first one to raidz2, 1 group of 8 devices. I don't see anything on that page to init the pool. The Main page now shows 8 missing array drives (I can't select any fewer than 8 slots) and 8 pools (only the first of which is zfs). If I unassign a drive from the extra pools, the first pool shrinks. The array is stopped with "Invalid configuration" and "Too many wrong and/or missing disks!", with no messages about what I can do to fix that. Falls short of intuitive so far! If you're still willing to help, we can move this to Discord if it's easier.
  10. Ah, I thought I HAD created a pool, seeing as "Add pool" apparently creates a cache and not a pool (at least, I don't see any raidz2 options there). Sorry, I've only been using Unraid a couple of days as a potential move from TrueNAS Scale. Native ZFS support is light on documentation, being so new, so: can you create ZFS pools from the UI, or is this CLI-only for now? (If it is CLI-only, there's a rough zpool create sketch at the end of this list.)
  11. Just to be clear (cos I'm just about to try it), can I only take one HDD offline at a time? I'm trying to copy data onto a fresh ZFS array and it's taken 12 hours to copy 200GB, so ~4MB/s (rough maths at the end of this list). I'm copying directly from another NAS that I normally get 100MB/s read speed from, and both servers are LAG bonded. Thinking of trying to disable the 2 x parity drives, but that won't work if I can only take down one.
  12. What's the likelihood of this happening again before release? Just wondering how temporary to make this pool if there's a chance of seeing "If you created any zpools using 6.12.0-rc1 please Erase..." down the line.
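
For anyone following along with items 6 and 7 above: a few standard ZFS/coreutils commands that should show where each dataset is actually mounted and how much free space each path thinks it has. The dataset and share names are just examples from my setup above, so substitute your own:

    zfs list -o name,mountpoint,used,avail        # where ZFS says each dataset is mounted, and its space
    zfs get mountpoint disk1/share                # mountpoint of one specific dataset
    df -h /mnt/pool/share /mnt/user/share         # free space as seen through each path
    df -i /mnt/pool/share                         # inode usage, to rule that out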
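
On item 10: if pool creation really is CLI-only for now, the stock OpenZFS way to build an 8-drive raidz2 pool looks roughly like the sketch below. The pool name and device paths are made-up examples, and I don't know whether Unraid wants pools created through its own UI instead, so treat this as a rough sketch rather than the recommended method:

    # 8-device raidz2 vdev; ashift=12 assumes 4K-sector drives
    zpool create -o ashift=12 tank raidz2 \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd \
        /dev/sde /dev/sdf /dev/sdg /dev/sdh
    zpool status tank                             # confirm the layout
    zfs create tank/share                         # first dataset, same idea as the workaround in item 7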
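
And the back-of-envelope maths behind item 11, for anyone checking it: 12 hours is 43,200 seconds and 200GB is roughly 200,000MB, so 200,000 ÷ 43,200 ≈ 4.6MB/s, which is where the "~4MB/s" figure comes from, a long way short of the ~100MB/s the source NAS normally manages.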