TheMantis

Members • 47 posts

Posts posted by TheMantis

  1. Quote

I'm a totally new user (within the 30-day trial version) and wanted to buy the old lifelong license today. As written in the email, I should have had time until the 27th. Now the changes are already live for me - so I guess I won't ever be a user

The changes seemed to occur at 5AM New Zealand daylight saving time today (28th of March); at least, that's when the website maintenance period began. A quick check says that is well into the 27th of March (20 hours' time difference, so 9AM?) in California, USA, where Lime Tech is based.

To be honest, based on the level of self-entitlement shown by many posters regarding the licence changes, I don't know why Lime Tech should even bother trying to make you happy.

     

From my perspective, Lime Tech have bent over backwards in an attempt to be fair and reasonable to both existing and new licence holders. Do people think that software development is free? The entire world is in the midst of massive inflation, and the cost of developing and maintaining software is not immune to it.

     

    Compared to the cost of hardware, the increases to licence costs are tiny.

     

    Some people need to grow up and get with the real world.

I have just upgraded from the previous stable version and now there is no access to the GUI, Unraid Connect, Dockers, or SSH/telnet. The physical monitor on the server indicates a successful boot, and I'm receiving the email notifications that normally occur on boot.

     

In Unraid Connect, the manage server help says this: "These URLs are currently inaccessible from your location. This means they may be offline, not using a myunraid.net certificate, or are inaccessible from your current location. These URLs may still allow access to the server."

     

    Any ideas before I roll back to the previous stable version?

The black brick thing is a RAM battery backup (well, actually it's some capacitors) for when the adapter is used as a hardware RAID card. It just plugs in and is not needed with how the adapter is used in unRAID. There aren't a great number of 16-port adapters out there and, yes, the LSI ones are very expensive.

  5. Sure you can. It just depends on things like:

• How many x8 (or x16) PCIE slots do you have?
• How many PCIE lanes do you have available, and how do you intend to use them?

    I have one of these 16 port adapters (not LSI) and it works with zero issues: https://www.ebay.com/itm/ASR-71605-Adaptec-2274400-R-16-Port-SAS-SATA-6Gbps-1GB-PCI-E-RAID-Controller/182739023209?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2057872.m2749.l2649

     

    And, one of these: https://www.ebay.com/itm/New-in-Sealed-LSI-SAS-9207-8i-6Gb-s-PCI-E-3-0-Adapter-LSI00301-US-SameDayShip/122171565228?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2057872.m2749.l2649

     

Both are PCIE 3.0 x8, but by using the 16-port adapter I have more PCIE lanes left for other things, such as a pass-through video card.

     

If you have the funds and a suitable adapter is available, I'd go for a 16-port adapter - you never know how many drives you're going to end up installing.

  6. 3 minutes ago, jonp said:

Oh, and it's worth noting that fully updating the documentation is on the to-do list. 🙂

    Sent from my Nexus 6 using Tapatalk
     

    Great, I'd imagine that with the constant addition of amazing features, keeping the documentation up to date would be quite an effort. 

  7. 9 minutes ago, jonp said:

    If it's not obvious enough, the technical info behind Unraid is actually documented in the wiki (documentation). I think there are actually multiple links to it on the site for more info.

    Sent from my Nexus 6 using Tapatalk
     

I understand the links to the wiki are there, as you mention. However, for the uninitiated, I wouldn't expect to have to dig through often out-of-date or incomplete documentation just to find out the core features of a product. From my perspective, documentation tells me how to use a product; the product website should sell me the features first.

     

    While the old site was less pretty it was quite comprehensive in explaining in relatively broad detail exactly what the product did.

Is it just me, or has the unRAID website been dumbed down to the point where virtually no detailed information is given about the product? Sure, the website looks great, but if I wanted to get some details of how it works I'd leave disappointed.

  9. Diagnostics as requested

     

    Hi all,

     

Sorry that this app has not received any love over the past year. I submitted an update yesterday to fix a few network connection issues with later OSes and to ensure it works on macOS Sierra. It also supports multiple cache and parity drives.

     

Having connections with multiple servers is not high on the priority list due to the limited number of people requiring this feature. I'd rather focus on other features, like SMART integration, external access, and unRAID notification integration.

     

I can't reproduce the bug when changing the IP address of an existing connection. Could someone please post some diagnostics for me once version 1.3 comes out?

     

    • In terminal, write command and record results: defaults read nz.co.pixeleyes.Margarita

     

{
    appPreferences =    {
        diskSpaceWarningThreshold = "95%";
        diskTemperatureWarningThreshold = "45\\U00ba";
        openAtLogin = YES;
        shouldAutoResizeWindow = YES;
    };
    serverList =    {
        "86B558A2-E32A-4626-BD5C-6D2B5239ECBC" =        {
            HWADDR = "00:25:90:37:4E:6E";
            NAME = Tower;
            NETMASK = "255.255.255.0";
            guuid = "86B558A2-E32A-4626-BD5C-6D2B5239ECBC";
            hostname = "192.168.0.159";
            username = root;
        };
    };
}

     

    • Change IP address in Margarita

     

Can't. The app still shows the same error: "Error with connection"

     

    • In terminal, write command and record results: defaults read nz.co.pixeleyes.Margarita

     

No change from the first run.

     

    Cheers
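For anyone else stuck on the IP-change bug, one possible workaround (an assumption on my part based on the preferences dump above, not a documented fix) is to clear the cached server entry so Margarita re-creates it with the new address on next launch:

```shell
# Quit Margarita first, then dump the stored preferences as a backup
# (same command as above) before deleting anything:
defaults read nz.co.pixeleyes.Margarita > ~/margarita-prefs-backup.txt

# Remove only the cached serverList key (the one holding the stale
# hostname); appPreferences are left untouched. On relaunch the app
# should ask to re-add the server, this time with the new IP.
defaults delete nz.co.pixeleyes.Margarita serverList
```

Obviously you'd have to re-enter the server details by hand afterwards, which is why keeping the backup of the read output first is worthwhile.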

  10. Hi There,

Seems I have a problem: I'm trying to add a new disk to the array, with no luck at all.

     

I'm stuck at "Start will bring the array on-line and start Clearing new data disk(s)."

I click Start and nothing happens.

I have tried to format the disk using mkfs.xfs /dev/sdX, but still no luck; the array will not start with the new drive.

Any ideas?

I'm on RC2.

    Kind Regards

    Dawid

     

    It's a bug. Should be fixed in the next release apparently.

  11. I'm trying to add a new(ish) disk to the array but after I've assigned it to an empty slot (disk 2 slot) and started the array it just loops and goes straight back to "click start to bring array online and pre-clear disk, etc" without the array actually starting. The disk hasn't been used on unRAID before and is freshly cleared in OS X.

     

    I recall an issue like this quite some time ago (perhaps V5 era). Diagnostics attached.

     

    Cheers

    tower-diagnostics-20160716-1035.zip

  12. I have a disk that has been ejected from the array and is now showing as an unassigned, unformatted disk.

     

    After starting the array in maintenance mode I've run an Fsck check with the following outcome:

     

Will read-only check consistency of the filesystem on /dev/sdq1
Will put log info to 'stdout'

Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
###########
reiserfsck --check started at Mon May 23 12:15:51 2016
###########
Replaying journal: Done.
Reiserfs journal '/dev/sdq1' in blocks [18..8211]: 0 transactions replayed
Checking internal tree..  finished
Comparing bitmaps..finished
Checking Semantic tree:
finished
No corruptions found
There are on the filesystem:
        Leaves 740460
        Internal nodes 4405
        Directories 169
        Other files 3080
        Data block pointers 748915833 (514457 of them are zero)
        Safe links 0

     

    Following that I ran fdisk -lu /dev/sdq and got the following output:

     

WARNING: GPT (GUID Partition Table) detected on '/dev/sdq'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdq: 4000.8 GB, 4000787030016 bytes
256 heads, 63 sectors/track, 484501 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

  Device Boot      Start        End      Blocks  Id  System
/dev/sdq1              1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

     

From my extremely limited knowledge, it looks like the data on this disk is good, but something has happened to the partition table, causing unRAID to think the disk is unformatted.

     

    Any help is appreciated.

     

    Diagnostic file is attached

    tower-diagnostics-20160523-1609.zip
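Since fdisk can't read GPT (hence the warning and the bogus "ee" protective-MBR entry above), a GPT-aware tool gives a truer picture of the partition table. A read-only sketch, assuming GNU Parted and/or gptfdisk happen to be installed (/dev/sdq as in my output above):

```shell
# Print the real GPT layout in sectors, read-only,
# using GNU Parted as the fdisk warning itself suggests:
parted /dev/sdq unit s print

# Alternatively, gdisk -l from the gptfdisk package lists the GPT
# and reports any MBR/GPT inconsistencies without modifying the disk:
gdisk -l /dev/sdq
```

Either output would show whether the partition actually starts where unRAID expects it to, which seems more useful here than the fdisk view.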

In my experience, unRAID performance on OS X is one of the few weak links with the product. I don't know where the problem is, but I frequently experience dropped shares (only a Mac reboot will reconnect) and long, long directory loading times. Neither of these things occurs when using Windows machines. Switching to AFP changed nothing, nor does the cache dir plugin improve things. In the seven years I've been using unRAID it's always been this way, with no real noticeable changes.

     

    I just live with it but it can get frustrating at times.

  14. The only Docker I run is Plex, no VM's (unfortunately enabling VM features causes unRAID to randomly crash).

     

    Plex runs daily library scans and scheduled tasks for backup and optimization purposes, these can interfere with your parity check.

     

The schedule I have set for those tasks should exclude them from being a significant factor. During the parity check I was monitoring the storage throughput window in System Stats, and it was rare to see reads go above 800MB/sec. Normally I would see reads well above 1GB/sec, generally around 1.25 - 1.5GB/sec. CPU usage was reasonable at around 20%. Ultimately it's not a problem, just unusual.

Parity checks are glacially slow for me too. My 68TB server just took 30 hours @ 55.4MB/sec. Normally it would take around 19-20 hours @ about 80MB/sec. There was no activity on the server during the parity check. The CPU is an X3470, the HBA's are 2 x SASLP-MV8 and 1 x SAS2LP-MV8, and the motherboard is a Supermicro X8SIL.

     

    The only Docker I run is Plex, no VM's (unfortunately enabling VM features causes unRAID to randomly crash).

  16. I've recently changed the IP address on my server and need to change the settings in Margarita to reflect this. I've tried to enter the new IP address on the "Edit Server" page but the application will not accept the new address once the page is closed. All that happens is an "Error with connection" fault and the new IP address has been overwritten by the old one. I'm also using the iOS version of Margarita and have the same issue on iPad but iPhone works correctly. Any suggestions on what is going wrong?

  17. After upgrade to beta 14 from beta 12 I'm getting a missing disk after every reboot. The same thing happened with beta 13. The syslog seems (it's all Swahili to me) to indicate that when booting the max disk limit is getting reached before the last disk is mounted. If I move the original missing disk to a different SATA port then that disk will be found and a different disk will be missing (disks detected in different sequence/order?). Changing back to beta 12 brings the missing disk back immediately.

     

    Syslog attached.

     

    Hmm, this is an odd one.  I found this line in your logs:

     

    Feb 22 06:38:19 Tower emhttp: too many devices to add ../../devices/pci0000:00/0000:00:1c.0/0000:03:00.0/host7/port-7:4/end_device-7:4/target7:0:4/7:0:4:0/block/sdz
    

     

    And you have 25 total disks in your system, right?  Just wanted to confirm.  Will need to have Tom investigate why this error is occurring.

     

    Yes 25 drives total. Parity + 23 data + cache. That line is about the only thing that stands out for me.

    26 drives.....Parity + 23 data + cache + USB thumbdrive

     

    Quite true. 26 drives but only 25 disks.

  18. After upgrade to beta 14 from beta 12 I'm getting a missing disk after every reboot. The same thing happened with beta 13. The syslog seems (it's all Swahili to me) to indicate that when booting the max disk limit is getting reached before the last disk is mounted. If I move the original missing disk to a different SATA port then that disk will be found and a different disk will be missing (disks detected in different sequence/order?). Changing back to beta 12 brings the missing disk back immediately.

     

    Syslog attached.

     

    Hmm, this is an odd one.  I found this line in your logs:

     

    Feb 22 06:38:19 Tower emhttp: too many devices to add ../../devices/pci0000:00/0000:00:1c.0/0000:03:00.0/host7/port-7:4/end_device-7:4/target7:0:4/7:0:4:0/block/sdz
    

     

    And you have 25 total disks in your system, right?  Just wanted to confirm.  Will need to have Tom investigate why this error is occurring.

     

    Yes 25 drives total. Parity + 23 data + cache. That line is about the only thing that stands out for me.

  19. After upgrade to beta 14 from beta 12 I'm getting a missing disk after every reboot. The same thing happened with beta 13. The syslog seems (it's all Swahili to me) to indicate that when booting the max disk limit is getting reached before the last disk is mounted. If I move the original missing disk to a different SATA port then that disk will be found and a different disk will be missing (disks detected in different sequence/order?). Changing back to beta 12 brings the missing disk back immediately.

     

    Syslog attached.

    How many drives do you have (and what are they being used for)?

     

    I wonder if there is a bug around the maximum number of drives supported, or alternatively beta 12 was not enforcing a limit and beta 14 is?

     

    25 drives total. Parity + 23 data + cache.

  20. After upgrade to beta 14 from beta 12 I'm getting a missing disk after every reboot. The same thing happened with beta 13. The syslog seems (it's all Swahili to me) to indicate that when booting the max disk limit is getting reached before the last disk is mounted. If I move the original missing disk to a different SATA port then that disk will be found and a different disk will be missing (disks detected in different sequence/order?). Changing back to beta 12 brings the missing disk back immediately.

     

    Syslog attached.

    syslog_beta_14.zip

My drives don't seem to be staying spun down. I do see the unRAID log of them spinning down, but something is spinning them back up. I can force them down and they do spin down, but after a while they come back up. I've not had this problem in the past. The only things I've changed since switching to 12 are adding Dynamix System Temperature and installing the Cache_dir Dynamix plugin vs just running it from my go script.

     

     

    For instance, here is my log up until I updated my APC plugin just a few minutes ago.

     

Dec  5 05:32:02 NAS1 logger: mover finished

Dec  5 06:00:38 NAS1 kernel: mdcmd (93): spindown 2
Dec  5 06:02:19 NAS1 kernel: mdcmd (94): spindown 0
Dec  5 06:02:20 NAS1 kernel: mdcmd (95): spindown 3
Dec  5 06:03:51 NAS1 emhttp: shcmd (22813): /usr/sbin/hdparm -y /dev/sdh &> /dev/null
Dec  5 08:58:08 NAS1 kernel: mdcmd (96): spindown 4

     

This would lead me to believe they were spun down; however, they are not - they were all spinning.

     

     

    On SNAP drive

    • 1 KVM windows VM
    • 1 Plex Docker

    Plugins

    • APC UPS
    • Dynamix Cache Directories
    • SNAP
    • Web Virtual Manager
    • Libvirt Support
    • Powerdown Package
    • Dynamix System Temperature
    • Dynamix webgui

I just spun the drives down manually at 2:06pm, and now at 2:16 all but drive 4 are spinning. I'm the only one here and wasn't accessing any files.

     

     

Dec  5 14:06:07 NAS1 kernel: mdcmd (97): spindown 0
Dec  5 14:06:08 NAS1 kernel: mdcmd (98): spindown 1
Dec  5 14:06:08 NAS1 kernel: mdcmd (99): spindown 2
Dec  5 14:06:09 NAS1 kernel: mdcmd (100): spindown 3
Dec  5 14:06:09 NAS1 kernel: mdcmd (101): spindown 4
Dec  5 14:06:10 NAS1 emhttp: shcmd (25688): /usr/sbin/hdparm -y /dev/sdh &> /dev/null

     

The only drive I would expect to be spinning is drive 1, where I have some torrents stored.

     

    Is there a way to track down what might be spinning these drives back up?

     

    This is happening for me too. Usually only a handful of drives but always the same ones. I have no plugins or anything else installed. Weird.
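If anyone wants to chase down what's waking the drives, a rough diagnostic sketch (this assumes inotify-tools and lsof are available, and that the data disks are mounted at /mnt/disk* - adjust to your setup):

```shell
# Log every open/access on the data disks with a timestamp, then
# compare the entries against the spin-up times seen in the syslog:
inotifywait -m -r --timefmt '%F %T' --format '%T %w%f %e' \
    -e open -e access /mnt/disk* >> /tmp/disk_access.log &

# Snapshot which processes currently hold files open on a given disk:
lsof +D /mnt/disk1 2>/dev/null
```

Worth keeping in mind that SMART/temperature polling can spin drives up without any filesystem access at all, so an empty access log wouldn't rule out a plugin like Dynamix System Temperature as the culprit.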