drawz


Posts posted by drawz

  1. It'll be interesting to see the actual power consumption with a Kill-a-Watt.

     

    Note that 10W is the TDP of the CPU => NOT the board + CPU

     

    ... but I'd still expect a total consumption, with drives spun down, of under 20 watts. As a comparison, my D525-based (Atom) system idles right at 20W, with a CPU that's rated at 13W TDP.

    As noted above, the chipset is now integrated in that total 10W TDP. In addition, the latest Bay Trail CPUs support Speedstep, which the old versions did not. There is definitely some potential for reducing the real world power consumption *IF* the rest of the motherboard is designed with power consumption in mind. Even if it isn't, I still bet we see a decent drop.

  2.  

    It backs up from my computer to the cloud and to "Tower", but when I opened a tunnel and logged into the GUI I entered my key, clicked yes to transfer the account, and erased the backup I had there.

    But when starting to back up to the cloud on my unRAID, it only says 185 MB of files instead of 2.4 GB of files, and it says it backed up to "Tower".. so I think I only sent the install files or something to the cloud, and not my backup files from CrashPlan :)

    I'm not sure exactly what you are saying here, but it sounds a little bit like you are trying to use crashplan on unraid to backup to the cloud. And the files on unraid that you want to backup are other crashplan backups from other PCs? If that's the case, I think the crashplan engine specifically ignores crashplan backup files. It is designed to backup "normal" (ie non-backup) files on your PC only. At least that's my understanding. Sorry I can't help more.

     

    Hi, yes, I would like to back up all computers in our household (and external computers) to my unRAID server, and from there to the CrashPlan cloud.. But if the engine ignores backup files, I guess there is no point.. and I misunderstood the capabilities of CrashPlan.

    I think they do that so that you can't just buy a one-PC unlimited subscription and then back up your whole household through this method. With a family plan, you can back up as many as 10 PCs, but that is of course more $.

    nicinabox - I'm not sure what it is about boiler and the other tools you have developed, but they seem intimidating for some reason. Your ideas are great and I was super excited about all of them when you announced them, including the Web GUI and boiler, but for some reason it feels like there is a barrier to entry that most users are not willing to cross. I think if you unified your GUI and package/plugin manager (which should run from the GUI primarily), you'd really make some progress. The lack of plugin availability is an issue too - it's a chicken and egg thing.

     

    Agree that limetech picking and recommending one solution would go a long way. For some reason, Tom seems to want to reinvent the wheel and be the only one developing anything official for unRAID (see the webGui on github, which seems to be a stripped-down version of simplefeatures).

    It backs up from my computer to the cloud and to "Tower", but when I opened a tunnel and logged into the GUI I entered my key, clicked yes to transfer the account, and erased the backup I had there.

    But when starting to back up to the cloud on my unRAID, it only says 185 MB of files instead of 2.4 GB of files, and it says it backed up to "Tower".. so I think I only sent the install files or something to the cloud, and not my backup files from CrashPlan :)

     

    I'm not sure exactly what you are saying here, but it sounds a little bit like you are trying to use crashplan on unraid to backup to the cloud. And the files on unraid that you want to backup are other crashplan backups from other PCs? If that's the case, I think the crashplan engine specifically ignores crashplan backup files. It is designed to backup "normal" (ie non-backup) files on your PC only. At least that's my understanding. Sorry I can't help more.

  5. Great work! Thank you!

     

    Would it also be possible to adopt unmenu's way of hdd temp checking so that temps (WD drives) will be displayed even when the disks are spun down?

     

    Gee, I have both Dynamix and unMenu installed and I don't see the temperatures of any spun-down disks on any of the unMenu screens I looked at. What screen are you looking at that shows temperatures of spun-down drives?

     

    It only works on WD drives (I think) because they don't spin up when SMART data is requested. Your sig says you have Seagates and Hitachis. That's my best guess for why you don't see it in unmenu.
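
     

    For anyone curious how a tool might check this, here's a minimal sketch (not unMenu's actual code, and /dev/sdb is just a placeholder) that reads the spin state with hdparm and the temperature attribute with smartctl; whether the SMART query wakes the drive depends on the model, as discussed above.

    #!/bin/bash
    # Sketch only: check spin state, then read SMART attribute 194 (Temperature_Celsius).
    # On drives that wake for SMART queries, this WILL spin the disk up.
    DISK=/dev/sdb   # placeholder device

    hdparm -C "$DISK"                          # reports "active/idle" or "standby"
    smartctl -A "$DISK" | awk '$1 == 194 {print "Temperature:", $10, "C"}'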

  6. Yeah, I reckon that is probably more common than we think. Unsurprising really.

     

    Whatever webGui is chosen, it all comes down to whether emhttp stays alive or not ...

     

    Isn't it about time that all of the developers of webGUIs set the priority level of emhttp (I probably have the wrong terminology here as I am not a Linux expert) so that it does NOT get killed when resources get tight? I seem to recall seeing that it only takes a simple command that can be run from a script.

    I think that would be a function of emhttp and the OS and therefore up to Tom, not the webGUI developer.
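
     

    For reference, the "simple command" presumably refers to telling the OOM killer to spare emhttp. A sketch that could go in the go file (untested on unRAID; older kernels use /proc/<pid>/oom_adj with -17 instead):

    # protect emhttp from the out-of-memory killer (sketch, not anything official)
    EMHTTP_PID=$(pidof emhttp)
    [ -n "$EMHTTP_PID" ] && echo -1000 > /proc/${EMHTTP_PID}/oom_score_adj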

  7. @WeeboTech - You are probably right...  :P

     

    I'm not sure but maybe the server awakens on power loss and will do a hard shutdown after X amount of time if main power isn't restored by then. I'll have to test this...

     

    Thanks very much for your help!

     

    That's what it's supposed to do. It is usually configurable.

     

    The only other way around this is to disconnect the USB or serial connection.

     

    The issue then becomes, when you come to the point of having to power down the machine because the batteries are close to depletion, there will be no communication.

     

    Not much risk to the server or data on the drives if it's in S3 and power is lost due to UPS depletion. You'll just have an unclean shutdown and possibly lose the state of any apps that are running. Your run time in S3 will be pretty long. Maybe it makes sense to disconnect the USB connection if you can't disable it in the BIOS.
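
     

    As an illustration of "usually configurable": if the UPS happens to be managed by apcupsd (an assumption; other daemons differ), the shutdown trigger lives in /etc/apcupsd/apcupsd.conf, e.g.:

    # /etc/apcupsd/apcupsd.conf (excerpt) -- example values only
    TIMEOUT 0          # 0 = don't shut down based on elapsed time on battery
    BATTERYLEVEL 10    # shut down when charge drops to 10%
    MINUTES 5          # or when estimated runtime drops to 5 minutes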

  8. I agree. Just waiting for RAID 5/6 to be stable in BTRFS.

    Same here. I think this will be the best compromise of features. I'm just afraid that it will take forever to be labeled "stable" and/or there won't be a nice webGUI to administer it. Hoping that Open Media Vault takes care of the latter.

    Decided to see if there was any new info on BTRFS RAID 5/6 and came across the following post:

     

    http://www.phoronix.com/scan.php?page=news_item&px=MTU2NDQ

     

    and straight from the developer's mouth: http://www.mail-archive.com/[email protected]/msg30103.html

  9. I agree. Just waiting for RAID 5/6 to be stable in BTRFS.

    Same here. I think this will be the best compromise of features. I'm just afraid that it will take forever to be labeled "stable" and/or there won't be a nice webGUI to administer it. Hoping that Open Media Vault takes care of the latter.

  10. Anyone ever got lmsensors working with a HP Microserver?

     

    I've attempted to get it working several times, but have always given up. I've Googled lm-sensors and followed various instructions, but always without success.

     

    If anyone has got it working I would be most grateful for some pointers.

    Same here. I think you need IPMI support to read the hardware monitoring data beyond CPU temp and I'm not sure if that works with the Dynamix system temp feature.
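
     

    If the board's BMC does expose the readings over IPMI, something like this might work from the console (a sketch; it assumes ipmitool and the kernel IPMI drivers are available, which they are not on a stock install):

    modprobe ipmi_si
    modprobe ipmi_devintf
    ipmitool sensor                  # dump all BMC sensor readings (temps, fans, voltages)
    ipmitool sdr type Temperature    # just the temperature sensors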

  11. System Profiler/Info gives me the wrong CPU speed on my HP Microserver N40L. It thinks it is running at 1500MHz instead of 800MHz (and thinks the max speed is 2200MHz instead of 1500MHz). The info is correct if I telnet in and cat /proc/cpuinfo. This was a problem with simplefeatures as well, but now that there are fresh eyes on it, maybe we can find the issue.

     

    Obviously, this is essentially just cosmetic, although I'm not sure where else it would get the CPU speed from.

     

    [screenshots: System Profiler CPU info and /proc/cpuinfo output]

  12. Thanks, very helpful. I think the k10temp module is built in and doesn't need to be loaded via the go file. I haven't tried it yet though. This should also work with Dynamix.

     

    Any tips on getting hardware sensors working with simplefeatures 1.0.11?

     

    Anyone have temperature-based fan speed control working?

     

    I'd also be interested in getting this working!

     

    The only module available in the stock kernel will give you CPU temp. To do it:

     

    Put the following in /boot/custom/sensors.conf:

     

    # lm-sensors config
    chip "k10temp-pci-00c3"
    label temp1 "CPU Temp"

     

    Then add the following to the end of your /boot/config/go file:

    cp /boot/custom/sensors.conf /etc/sensors.d
    modprobe k10temp

     

    Reboot and your CPU temp should show.

     

    To get the other sensors we need to get the JC42 module found at http://khali.linux-fr.org/devel/lm-sensors/drivers/jc42/ compiled.
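
     

    After the reboot, a quick sanity check (output will obviously vary by system):

    lsmod | grep k10temp    # confirm the module loaded
    sensors                 # should list "CPU Temp" per the label in sensors.conf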

  13. Is this expected behavior during installation? I just updated to the latest 2.0.7 webGui right before installing this one.

     

    root@Tower3:/boot/plugins# installplg /boot/config/plugins/dynamix.plugin.control-2.0.2-noarch-bergware.plg
    installing plugin: dynamix.plugin.control-2.0.2-noarch-bergware
    
    Warning: simplexml_load_file(): I/O warning : failed to load external entity "/boot/config/plugins/dynamix.plugin.control-2.0.2-noarch-bergware.plg" in /usr/local/sbin/installplg on line 13
    xml parse error
    root@Tower3:/boot/plugins#
    
    root@Tower3:/boot/plugins# ls
    
    dynamix.plugin.control-2.0.2-noarch-bergware.plg*
    dynamix.webGui-2.0.1-noarch-bergware.plg*
    dynamix.webGui-2.0.7-i486-1.txz*
    dynamix.webGui-2.0.7-noarch-bergware.plg*
    
    root@Tower3:/boot/plugins#
    

     

    I think you have the plugin control in the wrong directory. Only the core webgui goes on /boot/plugins. The rest go in /boot/config/plugins.
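
     

    If so, something along these lines should sort it (adjust the file name to whatever you actually downloaded):

    mv /boot/plugins/dynamix.plugin.control-2.0.2-noarch-bergware.plg /boot/config/plugins/
    installplg /boot/config/plugins/dynamix.plugin.control-2.0.2-noarch-bergware.plg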

  14. Thanks bonienl! Dynamix is a great addition to unRAID and, IMO, the best GUI for it yet. The plugin manager with integrated updating & changelogs is fantastic. I was also pleasantly surprised to see preclear status displayed for the preclear that was already running via screen.

     

    It simply checks the presence of the progress file created by preclear in /tmp and displays its content.

    Ahhh good idea. Took a look in /tmp and saw a bunch of other available info. Would be cool if you could pull some more of it in, like the SMART status after each phase.
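
     

    For anyone wanting to poke at that info themselves, a rough sketch (the preclear_stat_* naming is my assumption from the /tmp listing; check what your preclear version actually writes):

    for f in /tmp/preclear_stat_*; do
        echo "== $f =="
        cat "$f"
    done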

  15. Thanks bonienl! Dynamix is a great addition to unRAID and, IMO, the best GUI for it yet. The plugin manager with integrated updating & changelogs is fantastic. I was also pleasantly surprised to see preclear status displayed for the preclear that was already running via screen.

    [screenshot: Dynamix preclear status display]

  16. Interesting, Slackware 13.1 has 2 PHP versions:

     

    1	slackware/n	php-5.2.13-i486-2.txz	3316K	php (HTML-embedded scripting language)
    2	patches/packages	php-5.3.27-i486-1_slack13.1.txz	4196K	php (HTML-embedded scripting language)
    

     

    I suspect for this very reason.

     

    I don't think unRAID ever picks up anything from patches though? I am a bit unclear on whether we should.

    Can't think of any reason not to update. Or at least try the update! Beyond my abilities to provide any useful testing though.

  17. This article from arstechnica reminds us that security holes are everywhere and this is starting to be exploited on embedded devices, which we could consider unRAID to be in a sense. I haven't bothered to figure out which version of PHP unRAID uses, but I hope that 6.x brings us up to the latest Slackware releases and the updated packages that go with it. There are lots of good reasons to do this, including security and better plugin compatibility.

     

    http://arstechnica.com/security/2013/11/new-linux-worm-targets-routers-cameras-internet-of-things-devices/

  18.  

    I do something with rsync for a lil extra backup.

    I use rsync with the --link-dest="${BACKUPDIR}/${LAST_BACKUP_DATE}" option.

     

    I back up my folders to dated directories (YYYYMMDD) and use a script to find the most recent backup folder.

    Then use the --link-dest= option with that dated folder name.

     

    It links the next backup folder (today) to the prior one before doing the rsync.

     

    Then the rsync from the source directories occurs, unlinking and overwriting the newer changed files.

    This gives you a running directory of a full backup + the changes for that date.

    What's cool is you use 1x the source directory size, then your backup only grows by the incremental changes over the course of the backup period.

     

    You can then delete older directories and still keep your most important dated backups in place.

     

     

    Because du takes the links into account.

    A du down the tree shows the first backup as a full backup and only reports the changed files that are not linked. This example shows I update almost 100M a day.

     

    <cut>

     

    See my rsync_linked_backup script on its Google Code page for ideas on how to do this.

     

    This sounds awesome, but I'm not quite confident in my abilities to implement it (and don't have the time right now). Would make a great plugin with a nice webgui! :)
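
     

    For anyone else wanting to try it before a plugin shows up, here's a minimal sketch of the approach described above (paths and names are my assumptions, not WeeboTech's actual script; see his rsync_linked_backup script for the real implementation):

    #!/bin/bash
    # Linked incremental backup sketch: each day gets its own dated directory,
    # with unchanged files hard-linked back to the previous day's tree.
    SOURCE="/mnt/user/documents/"            # hypothetical source share
    BACKUPDIR="/mnt/user/backups/documents"  # hypothetical backup root
    TODAY=$(date +%Y%m%d)

    # most recent YYYYMMDD directory, if any
    LAST_BACKUP_DATE=$(ls -1 "$BACKUPDIR" 2>/dev/null | grep -E '^[0-9]{8}$' | sort | tail -n 1)

    if [ -n "$LAST_BACKUP_DATE" ]; then
        rsync -a --link-dest="${BACKUPDIR}/${LAST_BACKUP_DATE}" "$SOURCE" "${BACKUPDIR}/${TODAY}/"
    else
        rsync -a "$SOURCE" "${BACKUPDIR}/${TODAY}/"   # first run: plain full copy
    fi

    Older dated directories can then be deleted without touching the newer ones, since the hard links keep shared files alive.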