Everything posted by WeeboTech
-
It's already offline; you can pull the drive and check it on another machine. I would do as suggested and try moving it from a USB 3 port to a USB 2 port, if that is how it's plugged in. When you do come back online, do a quick rsync of /boot to some folder on your array; you'll be glad you did.
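As a rough example (the destination folder is just an assumption, pick any array path you like):

# Copy the flash drive contents to a dated backup folder on the array
rsync -a /boot/ /mnt/disk1/flash_backup/$(date +%Y%m%d)/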
-
Try to enable a swap file.
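For anyone unsure how to do that, here's a minimal sketch; the size and location are only examples, and the swap file needs to live on a real disk mount, not a fuse/user share:

dd if=/dev/zero of=/mnt/disk1/swapfile bs=1M count=2048   # create a 2 GB file
chmod 600 /mnt/disk1/swapfile
mkswap /mnt/disk1/swapfile
swapon /mnt/disk1/swapfile
free -m   # confirm the swap is active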
-
To be fair, October is over. In addition, nothing is quick and easy. For a single admin on a couple of servers, perhaps, but when you are dealing with an OS that goes out to many people with all kinds of hardware and custom configurations, core NAS changes have to be really solid. Should these fundamental core NAS features be implemented? Heck yeah. As a paying customer, reach out and let your voice be heard. Respectful communication in the Roadmap forum is key. http://lime-technology.com/forum/index.php?board=63.0 If it's something that is really important to you, I might suggest reaching out directly to limetech. I have a number of times.
-
Knowing Jon, we'll surely see some kool VM/Docker functionality. I can tell you guys that I've been lobbying hard for visible SMART data in the webGui. This, along with syslog data, is crucial to the first level of support when hard drive or potential corruption issues show up. With rsyslog, we can have some form of automated message filters and events; however, given the schedule, I'm not sure about automated alerts yet. I don't have any more information than you think, other than that I've been pushing hard to get SMART data visible for the common folk and for us to support them. First level is visibility. I have not pushed hard on attribute monitoring... YET! LOL. We may be surprised!
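To illustrate the kind of message filtering rsyslog makes possible (this is only a sketch, not anything shipping in unRAID; the file names are assumptions):

# hypothetical /etc/rsyslog.d/smart.conf: route smartd messages to their own log
:programname, isequal, "smartd"    /var/log/smartd_events.log
& stop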
-
SMART errors on SSD cache disk
WeeboTech replied to homejones's topic in General Support (V5 and Older)
Do a power cycle on your server, then smartctl -a on the drive. Post the syslog; let's see if that resets something internally. I used to have an OCZ Turbo model that would go offline like that intermittently. In addition, smartd would constantly report sectors going offline, which sure made me nervous, as that was my VMware partition, which had an XP instance on it. -
SMART errors on SSD cache disk
WeeboTech replied to homejones's topic in General Support (V5 and Older)
I might look further in the log to see if there were other ATA errors; maybe there are SATA interface errors. If you can still access the cache drive, I would back it up and/or run the mover. After that I would probably reboot the server. If the drive got into a bad state, a power cycle may reset it, or you may lose it 100%, which is why you should back it up while it's still accessible.
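If you want to grab a copy before the reboot, something along these lines would do it (the destination path is only an example):

# Copy everything on the cache drive to a backup folder on the array
rsync -av /mnt/cache/ /mnt/disk1/cache_backup/
-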
This topic has been moved to General Support. http://lime-technology.com/forum/index.php?topic=36025.0
-
SMART errors on SSD cache disk
WeeboTech replied to homejones's topic in General Support (V5 and Older)
Do smartctl -a on the drive device (/dev/sdj from what I can see below) and post it here.
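For example (device name taken from your post; if the SSD sits behind a USB bridge you may need a -d option):

smartctl -a /dev/sdj
# behind a USB-to-SATA bridge, this form sometimes works better:
smartctl -a -d sat /dev/sdj
-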
is it possible to sync unraid to a synology DS414?
WeeboTech replied to Lacehim's topic in General Support (V5 and Older)
Here is my rsyncd.conf file. I store it on the flash in /boot/local/etc/rsyncd.conf and rsync it to /etc/rsyncd.conf in the go script.

/boot/local/etc/rsyncd.conf:

uid = root
gid = root
use chroot = no
max connections = 4
timeout = 600
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log
socket options = SO_SNDBUF=524288,SO_RCVBUF=524288

[mnt]
path = /mnt
comment = /mnt files
read only = FALSE
list = yes

[boot]
path = /boot
comment = /boot files
read only = FALSE
list = yes

[home]
path = /home
comment = HOME USERDIR Files
read only = FALSE
list = yes

[backup]
path = /mnt/disk1/backup
comment = /mnt muthu files
read only = FALSE
list = yes

[Video]
path = /mnt/user/Video
comment = Video Files
read only = FALSE
list = yes

[Music]
path = /mnt/user/Music
comment = Music Files
read only = FALSE
list = yes

[images]
path = /mnt/user/Images
comment = Image Files
read only = FALSE
list = yes

Here is my rc.rsyncd script. I put it in /boot/local/etc/rc.d/rc.rsyncd and call it from the go script. Actually, I rsync it to /etc/rc.d, then call it as /etc/rc.d/rc.rsyncd start.

/boot/local/etc/rc.d/rc.rsyncd:

#!/bin/sh
# Start/stop/restart the rsync daemon

# Start rsync:
rsync_start() {
  # refresh the live config from the flash copy
  rsync /boot/local/etc/rsyncd.conf /etc/rsyncd.conf
  if [ -x /usr/sbin/rsync ]; then
    echo "Starting rsync daemon: /usr/sbin/rsync"
    /usr/sbin/rsync --daemon
    return
  fi
  # otherwise fall back to running rsync from inetd
  if ! grep ^rsync /etc/inetd.conf > /dev/null ; then
    cat <<-EOF >> /etc/inetd.conf
rsync   stream  tcp     nowait  root    /usr/sbin/tcpd  /usr/bin/rsync --daemon
EOF
    kill -1 $(</var/run/inetd.pid)
  fi
}

# Stop rsync:
rsync_stop() {
  killall rsync
  # save the (possibly edited) config back to the flash
  rsync /etc/rsyncd.conf /boot/local/etc/rsyncd.conf
}

# Restart rsync:
rsync_restart() {
  rsync_stop
  sleep 1
  rsync_start
}

case "$1" in
  'start')
    rsync_start
    ;;
  'stop')
    rsync_stop
    ;;
  'restart')
    rsync_restart
    ;;
  *)
    echo "usage $0 start|stop|restart"
esac

Here's a snippet of my go script. Keep in mind I have other things I do and start in my go script that you will need to comment out; I left them here as an example. Anything that needs to remain persistent across a shutdown I copy from /etc to /boot/local/etc, so it gets rsynced back on start-up.

rsync -a /boot/local/etc /etc

while read SCRIPT
do
  # skip blank lines and commented-out entries
  [ -z "${SCRIPT%\#*}" ] && continue
  # run the entry if its first word is an executable
  [ -x "${SCRIPT%% *}" ] && ${SCRIPT}
done <<-EOF
/boot/local/etc/rc.d/init.d/rc.installpkg
# /boot/local/etc/rc.d/rc.parity start
/boot/local/etc/rc.d/rc.rsyncd start
/etc/rc.d/rc.sshd start
EOF
-
is it possible to sync unraid to a synology DS414?
WeeboTech replied to Lacehim's topic in General Support (V5 and Older)
From the looks of it you can go both ways, as long as you do not encompass/overlap the same disk areas. You'll have to be careful with this one, since it's manual from the get-go. That being said, I can give you an example of a passwordless way to set up an rsync server. -
is it possible to sync unraid to a synology DS414?
WeeboTech replied to Lacehim's topic in General Support (V5 and Older)
There's no GUI for rsync on unRAID, but I can help with configurations and scripting ideas. Check the Synology for a GUI; once you give me an idea of what's available, it might be easier. It's pretty easy to set up the rsync server on unRAID with a few config files and rsync commands in the /boot/config/go script. The Synology can then pull the files if there is a client GUI with some kind of scheduler. -
Another dead USB - setting up a new one
WeeboTech replied to galberras's topic in General Support (V5 and Older)
If it were me, I would assign all data drives first, without parity. Start the array (without parity; note, this will invalidate the current parity). Double-check that all my data is available on the proper mount points. Then assign parity as the last step and build new parity. The parity drive would be the last thing I configured, since that is a write step. Yes, this would take some time, but it would also ensure that I checked my data first and would not be overwriting a data drive with parity information. However, if you've already assigned all your drives and validated that your data exists on the expected mount points, then you're probably OK. It all depends on how many data drives you have; if you know exactly which drive goes in which spot, then you can pretty much re-assign everything and go. I've had to do this quite a number of times since I've been playing with ESX and a new HP Gen8 MicroServer. I know exactly which drives go where, yet I've done this both ways with success. -
Spinning up disks from a desktop shortcut?
WeeboTech replied to GadgetFreak's topic in User Customizations
Joe L. has a script that can ping the HTPC and, when the machine answers, spin up the drive automatically. Do a search on the forum for it.
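Just to illustrate the idea (this is not Joe L.'s script; the IP address and device are made-up placeholders):

#!/bin/bash
# Poll the HTPC; when it answers, issue a small read so the drive spins up.
HTPC_IP=192.168.1.50      # hypothetical HTPC address
DRIVE=/dev/sdb            # hypothetical drive to spin up
while true; do
    if ping -c 1 -W 1 "$HTPC_IP" > /dev/null 2>&1; then
        # read a random sector so the request isn't served from cache
        dd if="$DRIVE" of=/dev/null bs=512 count=1 skip=$RANDOM > /dev/null 2>&1
        sleep 300         # drive should be up now; check again later
    else
        sleep 30
    fi
done
-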
is it possible to sync unraid to a synology DS414?
WeeboTech replied to Lacehim's topic in General Support (V5 and Older)
unRAID has the rsync client. You can also enable unRAID as an rsync server. If the Synology supports rsync as a client or server, then you can use whichever machine you want to either pull or push the data. If not, then find out which machine makes it easiest to create cron jobs; whichever one works for you, mount the other machine via SMB or NFS to copy the data. I don't know enough about the Synology to help you more than that. I know my ReadyNAS had the ability to schedule rsync client jobs from a webGui. However, if it's easier to configure the Synology to be the rsync server, a simple command line in a crontab can be set up on the unRAID client, as in the sketch below.
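Something like this (the host, module name, and schedule are just assumptions for illustration):

# crontab entry on unRAID: push the Video share to a Synology rsync module every night at 3 AM
0 3 * * * rsync -av /mnt/user/Video/ rsync://192.168.1.20/Video/ >> /var/log/rsync_video.log 2>&1
-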
Is a Sandy Bridge CPU a requirement for having passthrough work correctly on this motherboard?
-
I'm just about ready to cave in myself, thanks for the update! What CPU did you choose?
-
OMG that picture was so funny. By all rights I shouldn't let this thread go in the direction it is, but I just had to reach out and share the laugh. Eventually we'll have to prune the thread of the tangent comments. Please don't be upset when the off-topic posts start getting pruned. At least it's been fairly humorous and not disparaging, as it has been in the past. Guys, let's leave this thread for the limetech guys to post their news.
-
Have you tried the card in various modes? Look in the card's BIOS to see if you can configure JBOD mode, or see if you can set AHCI mode. Sometimes you can configure each drive as its own RAID0 array. In unRAID 5.0.6 I was pleasantly surprised that my HP B120i RAID controller was detected in both RAID JBOD and AHCI mode. Then there is unRAID 6 Beta 10, which may have more advanced HP drivers. Try each one and see if the drives show up. EDIT: from my search it says it doesn't work either; however, I would still try to set a JBOD or AHCI mode and use the latest versions to see what happens. If inclined, you can try the HP variant of ESXi: load that, see if it can see the drives as JBOD drives, then load unRAID 5 under it and connect the disks as RDM-managed disks. This method is a bit complex and not for the faint of heart. If you are not familiar with ESX, then acquiring a used, cost-effective SATA controller would be the best bet.
-
The modern, larger 7200 RPM hard drives are now capable of 190 MB/s to almost 220 MB/s on the outer tracks. As the drives get larger and the caches get larger, we need to consider these speeds. From my benchmarks, the larger 5900 RPM drives can get about 160 MB/s on the outer tracks.
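If you want to spot-check your own drives, a quick sequential-read test is enough (the device name is only an example):

hdparm -t /dev/sdb                                          # buffered sequential read test
dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct   # ~4 GB read bypassing the page cache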
-
I'm only interested in the old hash when I fear bitrot, that is, when the timestamp of the file is unaltered. I'll probably have up to thousands of file changes on a monthly basis, and would likely never be interested in the old hashes except when fearing bitrot. If I fear bitrot, I would restore from CrashPlan or an out-of-home backup and check against the old hash, I would think. What I was planning is: check mtime/size; if they have not changed, no hash is calculated unless you do a --verify on purpose. Even then, the hash is not updated if it differs. The only time the hash is updated is when mtime/size has changed, or a forced --verify-update is used, or possibly a --force switch. If you had to force an update, there is a --delete command to the squpdatedb which will delete the record, allowing the next invocation to reinsert the data.
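To make the short-circuit concrete, here is a rough shell sketch of the idea (not the actual tool; the state-file format and location are invented for illustration):

#!/bin/bash
# Re-hash a file only when its mtime or size has changed.
# State file format (one line per file): <mtime> <size> <md5> <path>
state=/boot/hash_state.txt        # hypothetical location
f=$1
read -r mtime size <<< "$(stat -c '%Y %s' "$f")"
old=$(grep -F " $f" "$state" 2>/dev/null | tail -n 1)
if [ -n "$old" ] && [ "$(echo "$old" | awk '{print $1, $2}')" = "$mtime $size" ]; then
    echo "unchanged: keeping stored hash"
else
    md5=$(md5sum "$f" | awk '{print $1}')
    echo "$mtime $size $md5 $f" >> "$state"
fi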
-
The folder.hash / folder.par2 generator would be a separate program, i.e. I have a separate utility that walks down the tree; if the directory is newer than the folder.par2 or folder.hash, both of them are re-created. It will not be attached to the other suite of tools; it's a separate tool. I suppose we could talk about how to set up the ignore function, i.e. when I walk a tree, if there is a .prognameIgnore file then I skip that directory. I'm open to suggestions on the filename. In thinking about this tool: if the file is 0 bytes, ignore the whole directory (i.e. assume .*); if it's > 0, read it as a list of expressions to ignore, e.g. *.o, *~, etc. In any case, it's a separate suite of tools: one program to walk a tree, comparing mtimes against folder.hash and folder.par2, and another program to read the files in the directory and make the respective files. This can probably be done with find/xargs and a shell; it will work, but all of the external processes add up to a lot of overhead, and then you have the issues with filenames and quoting. By doing it in C, I can create a 'safer' array and fork/exec the programs without concern about being interpreted by the shell. So while all of this can be done in bash with tools, I'm paranoid enough about filenames and quoting to do it all in C.
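For anyone who wants the quick-and-dirty shell version mentioned above, this is roughly what it would look like (filenames with odd characters are handled via -print0/-0; folder.hash and the ignore-file name are just examples, and the ignore-list parsing is left out):

#!/bin/bash
# Walk a tree; regenerate folder.hash in any directory newer than its hash file.
root=${1:-/mnt/user/Media}        # hypothetical starting point
find "$root" -type d -print0 | while IFS= read -r -d '' dir; do
    [ -e "$dir/.prognameIgnore" ] && continue          # honor the ignore marker
    hashfile=$dir/folder.hash
    if [ ! -e "$hashfile" ] || [ "$dir" -nt "$hashfile" ]; then
        find "$dir" -maxdepth 1 -type f \
             ! -name folder.hash ! -name folder.par2 -print0 \
            | xargs -0 -r md5sum > "$hashfile"
    fi
done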
-
FWIW, I have 3 versions of hash storage I've been working with.
1. The SQLite updatedb/locate command.
2. A suite of tools to do this with .gdbm files. .gdbm files are very fast for key/value pairing, and the result is a lot smaller than the SQLite variant. I have been testing this for my own version of the cache_dirs program. The goal was a C-based caching initiator that would also catalog the files into the .gdbm file. When files change, they are updated in the .gdbm and written to stdout, which can then be used as a seed to update the md5sums immediately or at some scheduled time. Then you can re-import the smaller subset of md5sums back into the .gdbm.
3. A suite of tools to manage hash values in the extended attributes of a file. It's very similar to the bitrot shell script, only I do it in C for speed. The export creates a file that can be used by md5sum; the import reads the md5sum file into the extended attributes. The hashfattr command works like find, md5sum and setfattr all in one. The delay on its release has been in using external hashers. I've been playing with functionality to allow use of an external hasher like this: --hash-exec '/bin/b2sum -a {}', so every file found calls this program, which is piped back into the hash tool and then stored.
It's all very similar and easy to do in bash. In my case, the overhead of spawning all these extra helper processes on a million files really makes the process longer. Plus it's an exercise for me in programming. I really like the extended attributes method. If you use rsync -X to move files to another server, it preserves the attributes, so you can jump onto another server and verify the hash. However, if the file is re-created (as when copied/moved by Windows), the extended attribute is lost, which is the reason to cache this data elsewhere as an exportable md5sums file, .gdbm or SQLite table.
I'm making headway; however, I'm now in the process of updating some hardware, because the HP MicroServer N40L just takes way too long to test these programs on large datasets. It takes days on end just to md5deep one drive of over 300,000 files. Hence you can see why it's important for me to create functionality to 1. seed the data, 2. only do updates on files that actually change by mtime/size, and 3. verify files on demand based on some manageable set of rules. Just traversing a drive with 300,000 files takes 30 minutes if the data is not cached, which is why I started working on my own cache_dirs, which tracks which files change by mtime/size.
And finally... I'm going to experiment with the newer Seagate hybrid drives to see if they have any impact on caching the directory structures in the MLC/SLC cache. I figure, at the very least, if I continually drop the cache and rescan the drive a few times, it should, in theory, cache the directory LBAs, thus lessening the time it takes to walk a directory tree. So while it's been quiet here, I've been hard at work.
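As a concrete illustration of the extended-attribute approach (plain setfattr/getfattr here, not the hashfattr tool itself; the attribute name and filenames are just examples):

# Store an md5 in an extended attribute, then verify it later
setfattr -n user.md5 -v "$(md5sum movie.mkv | awk '{print $1}')" movie.mkv
stored=$(getfattr --only-values -n user.md5 movie.mkv)
current=$(md5sum movie.mkv | awk '{print $1}')
[ "$stored" = "$current" ] && echo "hash OK" || echo "hash MISMATCH"

# rsync -X preserves the attribute when copying to another server
rsync -aX movie.mkv root@otherserver:/mnt/disk1/Video/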