devros

Members
  • Posts: 70
Everything posted by devros

  1. Had to reboot; UD shows the device with the "MOUNT" option greyed out. If I try to run an fsck, it just searches forever for a superblock. Running out of options here. UD has served me very well over the years, but this one has me stumped.
  2. Will there be a 6.7.2-compatible version out soon? Thanks!
  3. Just tried it on an identical 4T drive, also hooked up directly to the motherboard, and this time it did mount with the full 4T capacity.
  4. This disk is connected directly to a SATA port on the motherboard. The motherboard is a current-generation X11 Supermicro.
  5. Here is a strange one. I added a 4T drive to my setup, used UD to format it as XFS, gave it a custom mount point, mounted it, and turned on auto mount. After I started using it, I noticed the OS was only seeing it as a 2T drive. I checked the partition table: there was just one partition and it was 4T, but UD was showing it as 2T in the GUI. After I rebooted, it did not auto mount and the mount button was greyed out just for that drive (destructive mode is on). I tried mounting it manually:
     mount -t xfs /dev/sdd1 /mnt/disks/tmp/
     mount: /mnt/disks/tmp: wrong fs type, bad option, bad superblock on /dev/sdd1, missing codepage or helper program, or other error.
     I removed the lines about it from the .cfg file and rebooted; same issue. Tried running xfs_repair, but no superblock was found. Any ideas would be much appreciated.
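     For reference, these are roughly the checks involved (sdd is just where the drive happens to sit on my system; the xfs_repair dry run makes no changes while it looks for a usable superblock):
     lsblk -b /dev/sdd                # what the kernel thinks the device and partition sizes are, in bytes
     parted /dev/sdd unit GB print    # the partition table as parted sees it
     blkid /dev/sdd1                  # does the partition still identify itself as xfs?
     xfs_repair -n /dev/sdd1          # dry run: searches for a secondary superblock without writing anything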
  6. Having a similar issue since 6.7.0. Connected over ethernet to a Mac mini. 1080p video files that played fine before are stuttering a bit.
  7. Can confirm it's working fine under 6.6.7
  8. Update for 6.7.0 soon? It doesn't look like there has been a ZoL update since 0.7.12 in November.
  9. In the same way that there is a place for SMB Extra settings, it would be nice to have that for NFS. I have a ZFS filesystem that I use for archiving, and a script that runs when the array starts up that adds two lines to the end of /etc/exports and restarts nfsd. The problem is that anytime certain changes are made, /etc/exports gets recreated and nfsd restarted and I lose those shares.
  10. With ZFS you can create ZVOLs, which present themselves as normal block devices (/dev/zd0, for example). You can partition one and create other filesystems on it, just like you would a hard drive. I created a test one and had UD rescan to see if it would pick it up, which it did not. Since it's just a regular block device, would it be possible to have UD recognize it and treat it like a hard drive?
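     For anyone wanting to try the same thing, creating a test ZVOL is just something along these lines (the pool and volume names here are only examples):
     zfs create -V 100G tank/testvol        # appears as /dev/zvol/tank/testvol and as a /dev/zdN device
     mkfs.xfs /dev/zvol/tank/testvol        # it can carry any filesystem, exactly like a real disk
     mount /dev/zvol/tank/testvol /mnt/test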
  11. Asked this question in more detail here, but figured this could be something UD might be able to help with: I noticed that UD has some hooks/scripts in it somewhere that keep UD-shared devices in /etc/exports even if file services get restarted (the file gets blown away and rebuilt), so a simple user script at array startup that echoes those lines in and restarts nfsd won't work. I have a ZFS filesystem I would like to share and was wondering if I could somehow use UD to share it over NFS, even though it is not mounted by UD itself. Thanks, -dev
  12. I noticed on a munin graph that something has recently been going on with my USB boot drive. It appears that every few seconds something is writing to /dev/sda:
      -----------
      ~# iostat -x -k 1 | grep sda
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 239.00 0.00 45.00 0.00 142.00 6.31 1.58 34.71 0.00 34.71 1.56 7.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 238.00 0.00 46.00 0.00 142.00 6.17 1.54 32.80 0.00 32.80 1.46 6.70
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 239.00 0.00 45.00 0.00 142.00 6.31 1.45 31.51 0.00 31.51 1.89 8.50
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
      sda 0.00 238.00 0.00 46.00 0.00 142.00 6.17 1.46 31.15 0.00 31.15 1.39 6.40
      -------
      Something is writing to my boot drive every 6 or so seconds (I confirmed this with Netdata). I did a recursive search on /boot for the most recently modified files and there were no recent ones. A du -sx /boot/* run once, and then again a few hours later, shows that nothing is growing in size. Any thoughts on what could suddenly be causing this?
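      For anyone else chasing something like this, a few things that can help narrow down what is doing the writing (assuming the tools are available on your build):
      iotop -obt                              # batch mode, timestamped, only shows processes actually doing I/O
      inotifywait -m -r /boot                 # from inotify-tools; logs file-level writes under /boot as they happen
      find /boot -newermt "10 minutes ago"    # anything on the flash drive modified recently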
  13. This is a very minor cosmetic problem, but one that has bugged me since I started using Unraid two years ago and that I haven't been able to solve. I have named my server eVault. Usually, after my server boots up, I'll see it listed in a Finder window as eVault, but for some reason, after a while it changes to EVAULT. Just curious if anyone here might have a solution.
  14. I currently have a zpool of a few SSDs with a few ZFS volumes on it. I need to export one of the volumes over NFS. The problem is that, unlike the SMB settings where you can add extra config, there is no such option for NFS. Unassigned Devices won't recognize ZFS volumes, so I can't use that. If I use the zfs set sharenfs=on command, it works, but the share does not survive a reboot or any change to the share settings. My solution was to write a script that runs when the array is first started, echoes the lines of config I need into /etc/exports, and then restarts nfsd. The problem is that whenever certain config changes are made, /etc/exports gets blown away and regenerated and nfsd gets restarted. Any thoughts on how I could get the two extra lines I need added every time /etc/exports gets regenerated? TIA, -dev
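     Roughly what that array-start script does, in case it helps (the paths and export options here are just placeholders for my two ZFS filesystems):
     echo '/mnt/zfs/archive *(rw,sync,no_subtree_check)' >> /etc/exports
     echo '/mnt/zfs/backups *(rw,sync,no_subtree_check)' >> /etc/exports
     /etc/rc.d/rc.nfsd restart     # Unraid's Slackware-style nfsd script; exportfs -ra may be enough instead of a full restart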
  15. I'm not sure if this is still the case, but at one point you had to be a Plex Pass subscriber for that feature
  16. Type "cat /dev/sda > /dev/null" and look for which drive's activity light comes on. There is also a script here that will work.
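      If the lights are hard to catch, something along these lines also works to step through the drives one at a time (the device glob and read size are just examples):
      for d in /dev/sd?; do
        echo "reading $d"
        dd if=$d of=/dev/null bs=1M count=1024   # enough sustained reads to make the activity light obvious
        sleep 5
      done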
  17. That would be great. Did you have any luck?
  18. Thanks, I changed the primary video to iGFX in the BIOS and it works now!
  19. I see you went with the X11SAT-F and the E3-1275v6. I have the same motherboard and the E3-1245v6 chip. Were you ever able to get hardware transcoding to work with Plex? I don't seem to have a /dev/dri/renderD128, and I can't find anything obvious in the BIOS to enable it.
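      For reference, the software side of the usual checklist is along these lines (typically added to /boot/config/go so it survives a reboot; the device paths are the standard Intel ones):
      modprobe i915                  # load the Intel iGPU driver
      chmod -R 777 /dev/dri          # let the Plex container open the render node
      ls /dev/dri                    # should list card0 and renderD128 once the iGPU is active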
  20. Got it working. Didn't know I had to enable MODBUS from the front panel (you can't do it from the management card).
  21. Did anyone ever get USB MODBUS working?
  22. We recently retired a DB box at our data center that was in a Supermicro 70 enclosure with a 90-bay JBOD attached. This beast had 156 drives with a raw capacity of about 3/4 of a petabyte. Previously it was just a bunch of ZFS raidz2 (RAID 6) pools of 4, 6, and 8TB devices, presented as 24 different filesystems. I'd like to turn it into one massive storage bucket with a minio Docker container running on it so it can act as a big S3/Glacier-style object store. Obviously, with that many drives, a dual-parity system is not enough redundancy. What, if any, would be the drawbacks of replacing the HBA cards with MegaRAID cards and doing hardware RAID 6 arrays that present themselves to unRAID as individual drives (assuming that is supported)? I do realize there are many other ways to do this without using unRAID, but I just figured I would check to see if this was an option. Thanks, -dev
  23. I'm an idiot. The drives were spun down when I looked, and for some reason it won't let me delete this post.