
MortenSchmidt

Everything posted by MortenSchmidt

  1. No. Not for beta 15. Also, just a quick update. We found a bug as we were going through final release testing that is holding this up right now. I'll spare all the details, but in short, it has to do with the fact that the btrfs progs for 3.19.1 are not 100% compatible with the Linux 4.0 kernel. We are looking to roll back to the previous kernel, but to do so, we would need to manually apply a patch that is slated for inclusion in the 3.19.5 kernel (which is not yet released). We're looking into all this and testing thoroughly. You guys are joking, right? If you keep changing kernels like we change underwear, there will never be a stable release. Think long and hard about whether the bugs really indicate that kernel updates are important enough to effectively postpone the whole project to the end of time. Bet you the 4.0 or even 3.19.5 kernel will introduce yet another little gotcha, and you will be back where you started. Get a stable release out already, even if there are going to be a couple of known issues. We don't upgrade the kernel without just cause. Causes you have not disclosed, since you don't have a bug tracker. But it seems a lot hinges on BTRFS and features of that filesystem that were not stable when you started V6 development, and, it seems, still are not. I know you have promised a cache-pool as a feature, but V6 would be a fine product without that feature. Stabilize it, then take the next step when BTRFS is ready for prime time. It benefits no one for us to post issues with internal releases on a public forum. When a public release is issued, bugs are tracked in the defect reports forum as they always have been. With respect to btrfs, OpenSUSE and some others have switched to BTRFS as the default for their platforms, which is a pretty good indication of stable code. Pool operations have been a bit daunting here and there, but that's how it goes with development. There is no way we would pull the cache pool feature at this point in development. Plenty of hard work has gone into substantially improving pool operations for 6.0, and there is only one primary bug holding up the release, for which there is a known fix on the way. I wonder if this has been the case all the other times the kernel was updated as well.
  2. No. Not for beta 15. Also, just a quick update. We found a bug as we were going through final release testing that is holding this up right now. I'll spare all the details, but in short, it has to do with the fact that the btrfs progs for 3.19.1 are not 100% compatible with the Linux 4.0 kernel. We are looking to roll back to the previous kernel, but to do so, we would need to manually apply a patch that is slated for inclusion in the 3.19.5 kernel (which is not yet released). We're looking into all this and testing thoroughly. You guys are joking, right? If you keep changing kernels like we change underwear, there will never be a stable release. Think long and hard about whether the bugs really indicate that kernel updates are important enough to effectively postpone the whole project to the end of time. Bet you the 4.0 or even 3.19.5 kernel will introduce yet another little gotcha, and you will be back where you started. Get a stable release out already, even if there are going to be a couple of known issues. We don't upgrade the kernel without just cause. Causes you have not disclosed, since you don't have a bug tracker. But it seems a lot hinges on BTRFS and features of that filesystem that were not stable when you started V6 development, and, it seems, still are not. I know you have promised a cache-pool as a feature, but V6 would be a fine product without that feature. Stabilize it, then take the next step when BTRFS is ready for prime time.
  3. No. Not for beta 15. Also, just a quick update. We found a bug as we were going through final release testing that is holding this up right now. I'll spare all the details, but in short, it has to do with the fact that the btrfs progs for 3.19.1 are not 100% compatible with the Linux 4.0 kernel. We are looking to roll back to the previous kernel, but to do so, we would need to manually apply a patch that is slated for inclusion in the 3.19.5 kernel (which is not yet released). We're looking into all this and testing thoroughly. You guys are joking, right? If you keep changing kernels like we change underwear, there will never be a stable release. Think long and hard about whether the bugs really indicate that kernel updates are important enough to effectively postpone the whole project to the end of time. Bet you the 4.0 or even 3.19.5 kernel will introduce yet another little gotcha, and you will be back where you started. Get a stable release out already, even if there are going to be a couple of known issues.
  4. I don't know nearly as much as you guys, but the statement above concerns me. If I'm reading it right, you want a more recent version with the free inode tree feature and the checksum feature, but a drive formatted previously would not be read/write compatible with one formatted with these features turned on? Does this mean a very similar process to having to convert a drive from ReiserFS to XFS? Or does it just mean you can go forward (and it will be a straightforward, easy process), but you cannot go back once you've turned those features on? Unraid 6 betas have kept up with Linux kernel releases like there's no tomorrow (and like a stable release is never meant to arrive):
     - Beta 6: Linux 3.15.0
     - Beta 7/8: Linux 3.16.0
     - Beta 9: Linux 3.16.2
     - Beta 10: Linux 3.16.3
     - Beta 12: Linux 3.16.5
     - Beta 13/14: Linux 3.18.5
     See in the 3rd post above that the format has been considered stable since kernel 3.15. Thus, the new format is already supported (and considered stable according to the XFS authors) in unraid versions starting with v6 beta6. Needless to say, the XFS filesystem that comes with these kernels is backward compatible, otherwise XFS wouldn't work for us! As long as you don't plan on reverting to a beta prior to beta6, you shouldn't have a problem. The only thing being requested here is that the tool used for formatting drives (which is not part of the kernel) be updated to a recent version that includes the new switch (a sketch of the invocation follows below). IMHO it's pretty lame to be updating kernels like we change our underwear, but stick to an ancient version of the format tool.
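     A hedged sketch of the kind of invocation being asked for here - the -m crc/finobt switches assume a reasonably recent xfsprogs, and /dev/sdX1 is just a placeholder for the array partition:
     # Sketch only - format with metadata checksums and the free inode btree enabled.
     mkfs.xfs -m crc=1,finobt=1 /dev/sdX1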
  5. If the limitation is in the 4 links of 6 Gbps each, then it would be 150 MB/s rather than 120 MB/s, wouldn't it?
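     Back-of-envelope arithmetic behind that figure, assuming the four lanes are shared by a 16-drive array (the drive count is my assumption, not stated above):
     # 6 Gbit/s per lane with 8b/10b encoding is roughly 600 MB/s usable per lane.
     echo $(( 4 * 600 / 16 ))   # four lanes shared by 16 drives -> 150 MB/s per drive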
  6. I'm interested in a similar system - how did it turn out? Also, out of curiosity, is it possible to run a Xen VM with a Linux/Windows desktop at all? I realize there won't be any graphics passthrough, just wondering if we can have something similar to the old "Windows XP mode" style VMs for basic desktop apps?
  7. I got done with the remove-drive-without-losing-parity procedure and would like to update JarDo's detailed procedure. This procedure worked for me with unraid 6.0 beta 12. Like JarDo, I cannot guarantee results for others.
     1) Start a screen session or work on the physical console of the server, as this may take more than a day
     2) With the array started, empty the drive of all contents (move all files to another drive)
     3) Stop Samba: "/root/samba stop"
     4) Unmount the drive to be removed: "umount /dev/md?" (where ? is the unraid disk number)
     4.1) If the command fails, issue "lsof /dev/md?" to see which processes have a hold of the drive. Stop these processes
     4.2) If AFP is stubborn, consider: "killall afpd"
     4.3) Try "umount /dev/md?" again
     5) Restart Samba: "/root/samba start"
     6) At this point the drive should show 0 (zero) Free Space in the web GUI. If it does, move on to step 7
     6.1) If, instead of showing zero free, it shows an incorrect size, you may experience very slow writing speeds
     6.2) In this case, clear enough of the partition that no filesystem is recognized and stop/restart the array
     6.3) To make the filesystem unrecognizable: "dd if=/dev/zero of=/dev/md? bs=1G count=1"
     6.4) Stop the array
     6.5) Restart the array
     6.6) Confirm the drive is now listed as unformatted (and is therefore not mounted)
     7) Write zeros to the drive with the dd command: "dd if=/dev/zero bs=2048k | pv -s 4000000000000 | dd of=/dev/md? bs=2048k"
     7.1) The pv pipe acts as a progress indicator
     7.2) Replace "4000000000000" with the size of your drive in bytes (note that so-called 4TB drives are 4 trillion bytes, not 4TiB)
     7.3) Wait for a very long time until the process is finished
     7.4) If writing to the drive is very slow, cancel and go back to step 6.1
     8) Stop the array
     9) Make a screenshot of your drive assignments
     10) From the 'Tools' menu choose the 'New Config' option in the 'UnRAID OS' section
     10.1) This is equivalent to issuing an 'initconfig' command in the console
     10.2) This will rename super.dat to super.bak, effectively clearing the array configuration
     10.3) All drives will be unassigned
     11) Reassign all drives to their original slots, referring to your screenshot. Leave the drive to be removed unassigned
     11.1) At this point all drives will be BLUE and unraid is waiting for you to press the Start button
     11.2) Assuming all is good, check the "Parity is valid" box and press the 'Start' button
     12) At this point, provided all is OK, all drives should be GREEN and a parity check will start
     12.1) Note this is a parity check, not a rebuild. If everything went well, it should find 0 parity errors
     13) If everything does appear to be OK (0 sync errors) and you want to remove the drive straight away:
     13.1) Cancel the parity check
     13.2) Stop the array
     13.3) Shut down the server
     13.4) Remove the drive
     13.5) Power up the server
     13.6) The array should start on its own
     13.7) It may be a good idea to complete a parity check afterwards
     Notes: The reason this works is that you are operating on the md? device, which is the parity-protected partition of the data disk you want to remove. Fill that partition with zeros and parity is unaffected by its presence or absence, the same way as when you add a pre-cleared drive, only in reverse (a toy illustration follows below).
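     A toy illustration of that note (my own sketch, not part of the procedure): with XOR-based parity an all-zero member contributes nothing, so dropping it leaves parity valid.
     # Three single-byte "drives"; disk3 has been zeroed.
     printf 'parity with zeroed disk3 present: 0x%02X\n' $(( 0xA5 ^ 0x3C ^ 0x00 ))
     printf 'parity with disk3 removed:        0x%02X\n' $(( 0xA5 ^ 0x3C ))
     # Both lines print 0x99 - the zeroed member makes no difference to parity.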
  8. Epic. Thanks for posting, and thanks for coming back to update it. I added it to the Wiki lime-technology.com/wiki/index.php/Make_unRAID_Trust_the_Parity_Drive,_Avoid_Rebuilding_Parity_Unnecessarily
  9. You are right, mv should be working, and as it turns out in most cases it does work for me. I might have been mistaken - haven't run into it since. Sorry to cry wolf. Moving with MC works too (unless you are merging files into existing directories). Thank you for your elaborate note. However, while rsync'ing files (converting disks from ReiserFS to XFS), I ran into a problem with rsync - I apparently had some files with invalid extended attributes, and the way rsync handles that is to... not copy the files at all. So count your directory sizes before deleting anything from old disks!! Here's what I got when trying to re-transfer a folder that turned out smaller on the destination disk:
     root@FileServer:~# rsync -avX /mnt/disk1/Common/* /mnt/disk16/temp/Common/
     sending incremental file list
     rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/MVI_1423.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
     rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/MVI_1433.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
     rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/MVI_7397.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
     rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/Rasmus film/Mindblowing transformation (1)/MVI_1433.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
     rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/Rasmus film/Mindblowing transformation (1)/Jyllingeskole/913_1622_02.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
     rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/Rasmus film/Mindblowing transformation (1)/Pilegaardsskolen/913_1620_01.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
     rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/Rasmus film/Mindblowing transformation (1)/Pilegaardsskolen/913_1618_02.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
     rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/Rasmus film/Mindblowing transformation (1)/Pilegaardsskolen/913_1621_01.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
     rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/Rasmus film/Mindblowing transformation (1)/ekstra/MVI_1423.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
     sent 2,017,059 bytes  received 6,978 bytes  15,161.33 bytes/sec
     total size is 118,985,987,854  speedup is 58,786.47
     rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1165) [sender=3.1.0]
     I can copy those same files with MC no problem. Perhaps your way of doing things would end up with the source disk having everything else deleted and only the problem files left. I dunno. But your method doesn't store the checksum with the files for checking for bitrot later on - except if you run bunker as a separate step. I still think having checksums stored in a file in each directory would be simpler and more robust overall, and it would solve this issue as well.
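     A hedged sketch of the kind of sanity check meant by "count your directory sizes", using the paths from the rsync run above - compare bytes and file counts before deleting anything from the source:
     du -sb /mnt/disk1/Common /mnt/disk16/temp/Common
     find /mnt/disk1/Common -type f | wc -l
     find /mnt/disk16/temp/Common -type f | wc -l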
  10. Hmmm, on my system, even when source and dest are on the same physical disk, the above results in reading and re-writing all the files, instead of simply renaming the top-level dirs. Takes a long time and defeats the purpose of not deleting the source files until after the destination files have been verified. Am I missing something?
  11. With the import and check options only a file reference is given, no folder (this information is already in the file to be imported/checked). So the syntax simply becomes: bunker -i -b2 -f /mnt/disk4/disk4blake2.txt Thank you. Makes sense. A problem I have noticed with using this tool is that if I rename a folder, all files within that folder lose their extended attribute and thus their hash. This is a problem if you want to rsync your files to a new drive in a temp location and then later rename folders after the files have been verified (this avoids having duplicate files while the transfer & verification is ongoing). There are a couple of workarounds, one of which is:
     1) Generate hashes on the old (ReiserFS) drive (bunker -a /mnt/disk1)
     2) Copy to the new (XFS) disk (rsync -avX /mnt/disk1 /mnt/disk2/temp)
     3) Verify files (bunker -v /mnt/disk2/temp)
     4) Export hashes from the temp location (bunker -e -f /mnt/cache/disk2temp.txt /mnt/disk2)
     5) Manually edit the hash file to replace '/mnt/disk2/temp/' with '/mnt/disk2/' (see the sed sketch below)
     6) Move files from temp to the final location (mv /mnt/disk2/temp/* /mnt/disk2, or something along those lines)
     7) Re-import hashes (bunker -i /mnt/cache/disk2temp.txt)
     However, it is also a problem when you want to reorganize your media library and rename/move many folders around. I'd love to hear of a solution that is more elegant than exporting hashes to one big file, manually finding and replacing paths, and then manually re-importing. I would also love it if bunker had the capability to store hash files per directory instead of the whole ext-attrib thing. This would elegantly avoid the above problem. It would also make it possible to generate hashes on the server, but from time to time verify a file over the network (with, say, the corz tool). Is this a feature request that can be considered? Thanks again!
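     For step 5, a hedged one-liner along these lines should do the path rewrite, assuming the exported hash file stores plain absolute paths (work on a copy first):
     cp /mnt/cache/disk2temp.txt /mnt/cache/disk2temp.txt.bak
     sed -i 's|/mnt/disk2/temp/|/mnt/disk2/|g' /mnt/cache/disk2temp.txt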
  12. I've been using Bunker for a while now - thank you gentlemen! But the -i (import) option isn't working for me, I just get an "Invalid Parameter Specified" error. I have tried bunker -i -b2 -f /mnt/disk4/disk4blake2.txt /mnt/disk4 Same story with -c (check). But -a (adding), -v (verify), -u (update) and -e (export) all worked fine. Are the import and check features simply not implemented yet?
  13. Now, I may be wrong, but I glanced over the xfs.org site, and while the last status update posted is from 2013, that status described the metadata checksum as an experimental feature. There does not seem to be anything saying it ever left the experimental stage, so that may be why it's not the default even in the new xfsprogs. Latest status update: http://xfs.org/index.php/XFS_status_update_for_2013
  14. I checked the flash drive, and windows did want to scan it, but found no errors, and there's no lost+found folder. Squid: Will update my sig - but yes, I went straight from 5.05 to 6b12. Never had the old dockerman. I elected to start over, but even that was not straightforward. I stopped the docker system, chose a larger image size and renamed the old volume 'docker-old'. But here's the strange part: even after I had both stopped docker altogether and renamed the image, it was still mounted in /var/lib/docker (!). This also caused my cache drive to be busy when I tried to shut down the array (fuser found LOTS of btrfs references). All data disks unmounted as they should, but unraid kept trying to unmount the busy cache drive before stopping the array. The webgui stopped responding after I tried to reload it. Perhaps I should submit a defect report for that? /root/mdcmd stop worked, and I was able to do a clean shutdown manually. Starting the docker adventure all over in a new image volume now. Will make a backup of /boot/config/plugins/dockerMan when I'm done.
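     For reference, something along these lines will show what is still holding the cache drive busy (a hedged sketch; the mount points are the usual unraid ones):
     fuser -vm /mnt/cache                    # processes with open files under the cache mount
     lsof +D /mnt/cache 2>/dev/null | head
     umount /var/lib/docker                  # if the docker loopback image is still mounted there, release it first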
  15. Thanks. I have only 3 empty subfolders in that dir. How do you think I can best recreate them?
  16. Following a bad crash, I am now unable to update my dockers, and all of them say "NA" in the status column. They start up correctly, and the panel does list all the port and path mappings correctly, and they do work. But when I click edit, I get a blank panel without any info (name, repository, ports, paths, etc. all blank). I was looking for the settings files on the boot usb, but not finding any. Are they in the docker volume (that 10GB BTRFS image), and can it be mounted somehow so one can have a look at what went wrong? Or should I assume the volume is corrupted and start over?
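     For what it's worth, a hedged sketch of poking at that image; the /mnt/cache/docker.img path is my assumption about where the loopback image lives on your system:
     mkdir -p /tmp/docker-img
     mount -o loop,ro /mnt/cache/docker.img /tmp/docker-img   # mount the btrfs image read-only
     ls /tmp/docker-img
     umount /tmp/docker-img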
  17. I find it disturbing that there's a (long) list of features yet to be implemented in 6.0, yet there doesn't seem to be any proper bug tracking or list of known and confirmed issues. Feature freeze this sucker already and focus on getting it stable -> RC -> Release. Look at it this way - the sooner you guys can get 6.0 out the door, the sooner you can get to work on 6.1. The 64-bit kernel, Docker and XFS are huge improvements that users will hugely appreciate - surely the rest of the planned features are nice-to-have and can wait till 6.1. I will highlight but a few:
     - The turbo-writes feature in particular worries me - too risky, please leave it for the next release.
     - UPS support and safe powerdown? Please - we've survived this far by adding that via plugins or unmenu packages.
     - Mouseover for SMART status - do you have any concept of how minor a feature that is in relation to 64-bit and Docker?
  18. I have had that problem a lot. Seems like ReiserFS locks the whole system (kernel level?) when trying to allocate space for a file on a nearly full drive. Seems I can fill a newly formatted reiserfs drive to within a few GB with no issues, but on a drive where files have been moved on and off for a while, I need to leave ~100GB free space to avoid the issue (more on a drive with many small files that has been used a long time). Surprised more people aren't writing about it here. Not so sure the "linux doesn't need defragmenting" saying holds true for reiserfs. I know there isn't a defrag tool, but maybe it would have helped? Did you convert to XFS, and if so, did behavior with full disks improve?
  19. I've got entries like these in my syslog, 7 times this past week:
     Dec 31 13:51:43 FileServer kernel: UDP: bad checksum. From 186.89.150.42:21296 to 192.168.2.3:1109 ulen 297 (Minor Issues)
     Jan 1 04:46:21 FileServer kernel: UDP: bad checksum. From 112.238.145.197:1051 to 192.168.2.3:1113 ulen 319 (Minor Issues)
     Jan 2 04:42:30 FileServer kernel: UDP: bad checksum. From 112.238.145.197:1051 to 192.168.2.3:1114 ulen 319 (Minor Issues)
     Jan 3 15:52:15 FileServer kernel: UDP: bad checksum. From 5.138.108.134:6881 to 192.168.2.3:1115 ulen 298 (Minor Issues)
     Jan 5 08:53:17 FileServer kernel: UDP: bad checksum. From 5.138.115.188:6881 to 192.168.2.3:1115 ulen 319 (Minor Issues)
     Jan 5 12:27:51 FileServer kernel: UDP: bad checksum. From 5.138.102.239:20621 to 192.168.2.3:1115 ulen 310 (Minor Issues)
     Jan 5 16:44:25 FileServer kernel: UDP: bad checksum. From 5.138.115.188:6881 to 192.168.2.3:51413 ulen 320 (Minor Issues)
     I have port 51413 forwarded to my server for the Transmission BT client, but ports 1109 to 1115 are not open. I have confirmed those to be 'stealth' with GRC's Shields Up. I also have 4 of these this past week:
     Jan 2 15:07:07 FileServer kernel: python[15327]: segfault at 58 ip 000000000052c8d8 sp 00002ab640800140 error 4 in python2.7[400000+2bd000] (Errors)
     Jan 4 00:21:52 FileServer kernel: python[818]: segfault at 58 ip 000000000052c8d8 sp 00002b781d546b60 error 4 in python2.7[400000+2bd000] (Errors)
     Jan 4 03:03:11 FileServer kernel: python[27730]: segfault at 58 ip 000000000052f1cb sp 00002b17b1166920 error 4 in python2.7[400000+2bd000] (Errors)
     Jan 5 22:58:01 FileServer kernel: python[31691]: segfault at 58 ip 000000000052c8d8 sp 00002af8c0b23220 error 4 in python2.7[400000+2bd000] (Errors)
     I'm running dockers for sickbeard and couchpotato, both of which are python apps - but the above does not give any hint as to which one crashed. They seem to be running fine every time I access them. I also understand the phusion baseimage uses python for some init tasks. Googling seems to suggest there is a 'UDP short packet attack' that can crash apps on a server under attack. Should I be worried? And why am I seeing stuff on port 1115, which isn't exposed to the internet?
  20. Do you happen to know how long the warranty is?
  21. 2GB. Was a good amount 3 years ago, and it is hard to see that changing just because more is cheaply available (to people with DDR3 systems...)
  22. In the case I brought up, where unraid botched up while rebuilding a disk, there was a far better action to take. Reboot with a clean go script (no add-ons), and rebuild the disk again. Had I run a correcting parity check, I would not have had that option.
  23. Yeah, the theory is nice and all, but just a couple of releases back (4.6 and 4.7) there was a substantial bug in unraid that would cause a drive being rebuilt to have errors in the very first part of it (the superblock, I believe). This occurred for me several times; it is provoked by having addons running and accessing disks (changing the superblock) while the rebuild process starts. See my old topic on this: http://lime-technology.com/forum/index.php?topic=12884.msg122870#msg122870 Now, if you had that happen, then the next time you run a correcting parity check, those errors will become permanent corruptions to the drive you had rebuilt. I am very grateful to Joe for advising all of us to run NON-CORRECTING monthly parity checks; thanks to this my unraid server maintains a perfect record of never losing or corrupting any data (I was able to successfully re-rebuild the disk in question by doing it without my addons running). Sure, the bug was eventually (after far, far, FAR too freaking long) corrected in unraid 5, but I say better safe than sorry. Non-correcting monthly parity checks are safest, and I would STILL like to see an option to automatically perform a non-correcting parity check after upgrading / rebuilding a disk.
  24. The other day, I was sorting some media, and somehow ended up with 5 duplicate files. This resulted in the logger crashing and restarting less than a day after I had 'organized' those files. Looks to be because cache_dirs keeps scanning the duplicated files, and each one is reported to syslog each time cache_dirs runs. Would have liked to attach a syslog, but... :-) Is it likely the cause is as I have described, and is there any way to avoid this (other than avoiding creating duplicate files)? If not, can we talk about an extension to unraid_notify that would send an email when syslog gets over a certain size (or the RAM filesystem is short on space)? A rough sketch of what I have in mind is below.
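     A rough, hedged sketch of that kind of check - it assumes a working mail command on the box and is not part of unraid_notify:
     # Cron-able check: warn when the log filesystem passes 90% used (threshold is arbitrary).
     use=$(df /var/log | awk 'NR==2 {gsub(/%/,""); print $5}')
     if [ "$use" -gt 90 ]; then
         echo "/var/log is ${use}% full" | mail -s "syslog filling up" root
     fi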
  25. It hangs because unmenu is waiting for cache_dirs to return some output... which it will not because of the way it runs in the background. The post below yours, or above mine as the case may be, will work to start cache_dirs. I don't think that's accurate. When cache_dirs starts, it does output a string to the console. Looks like this on mine: "cache_dirs process ID 5317 started, To terminate it, type: cache_dirs -q" Further, I've now tried to invoke a script of mine that starts cache_dirs with my favorite arguments (that way it will always start with the same arguments both when called from the go script and from unmenu). I've added an echo command - the script looks like this:
     cache_dirs_args='-w -s -d 5 -e "Backup" -e "Games" -e "MP3BACKUP"'
     /boot/custom/cache_dirs/cache_dirs $cache_dirs_args
     echo "cache_dirs started in background with arguments" $cache_dirs_args
     Still the same problem - unmenu hangs when I invoke my script (which I'm positive does output text). I can use the AT workaround (thanks sacretagent), just curious as to why this happens with the way I had done it. Trying to learn here.
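     For reference, the AT workaround mentioned above boils down to something like this (a hedged sketch; the script path is hypothetical, standing in for whatever starts cache_dirs with your arguments):
     # Launching via 'at' runs the command outside unmenu's console session, which sidesteps the hang.
     echo "/boot/custom/cache_dirs/start_cache_dirs.sh" | at now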