Everything posted by MortenSchmidt

  1. I had the same problem. Guy left a message on Docker Hub saying he had to pull the 2.2.0 release. Not sure he is aware it breaks the :latest tag. A message here would have been nice.
  2. In case anyone is wondering: I got a long-winded error message concluding with "sqlite3.OperationalError: database or disk is full", and it turned out to be the docker image running out of space. 2GB free was apparently not enough; after increasing the max docker image size to 50GB it was able to complete the db upgrade. Edit: Also, after upgrading you have to stop, delete (or keep as a backup) the v1 database, and start again. Don't put this off, as it does not automatically switch over to the v2 database, and it will need to catch the v2 database up from the time the upgrade process started (mine took more than 24 hours, and I was more than 8000 blocks behind when I switched over to the v2 database).
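     For anyone checking up front, this is roughly what the space check and the upgrade look like (assuming the stock chia CLI and the default ~/.chia path; inside the Machinaris container the paths may differ):
        df -h ~/.chia/mainnet/db        # free space on the drive holding the blockchain db
        chia db upgrade                 # writes the v2 .sqlite alongside the v1 file, so leave room for both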
  3. No worries. My problem was corruption in the wallet files; it first occurred when the SSD had run out of space (during a db upgrade attempt). Deleting the following and starting up helped: blockchain_wallet_v2_mainnet_xxxxxxxxxx.sqlite-shm and blockchain_wallet_v2_mainnet_xxxxxxxxxx.sqlite-wal. I left the main wallet file (.sqlite) in place, and after that the wallet quickly synced up and "chia wallet show" returns the expected output. Now, to figure out how much space is needed to do the db upgrade: I believe I had around 84GB free before starting the process, and yet it just failed on me again.
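     For reference, clearing those files looked something like this for me (assuming the default chia wallet db path; back everything up before deleting anything):
        cd ~/.chia/mainnet/wallet/db
        cp blockchain_wallet_v2_mainnet_*.sqlite* /mnt/backup/        # /mnt/backup is just an example location
        rm blockchain_wallet_v2_mainnet_*.sqlite-shm blockchain_wallet_v2_mainnet_*.sqlite-wal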
  4. I am having a bit of wallet trouble. Running "chia wallet show" returns "No keys loaded. Run 'chia keys generate' or import a key". I am running 0.7.0. "chia keys show" does show that keys are loaded (Master, Farmer and Pool public keys are displayed, along with a first wallet address). I am also earning payouts on XCHpool according to the XCHpool explorer tool. Help?
  5. (I think??) I'm running the test stream (ghcr.io/guydavis/machinaris-hddcoin:test), and running "docker exec -it machinaris-hddcoin hddcoin version" returns "1.2.10.dev121". What I did was simply add :test to the repo in the unraid "Update Container" dialog and hit apply, and it looked to me like it pulled the new image. I see you have more elaborate instructions in the wiki, so please help me understand if and why all of that is needed, and whether running those commands will cause all of my running dockers to stop, wipe and re-pull? I've used docker commands a bit but never encountered the docker-compose command nor a need for it. PS: Also, running "docker exec -it machinaris-hddcoin hodl -h" (or without the -h) returns: OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "hodl": executable file not found in $PATH: unknown. I do see your note in the changelog for the :test stream about v6.9 updating to v1.2.11, but according to the hddcoin github, v2.0 is needed for hodl.
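     For anyone wanting to reproduce the check outside the unraid dialog, this is roughly equivalent to what I did (the container name comes from my unraid template):
        docker pull ghcr.io/guydavis/machinaris-hddcoin:test
        docker exec -it machinaris-hddcoin hddcoin version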
  6. Check if the database drive ran out of space? You might have a different issue, but I had that happen; I don't recall what errors I got, but the only way out was a database resync. I saw some say you could export it from sqlite as text, remove the first and last lines and re-import, but that's surprisingly difficult to do with a 30+GB file when you're not an expert with sed, and I guess there's no guarantee it would work even if you could. That would be welcomed, but checking whether the drive that holds the database is running out of space would also be beneficial. Yes, unraid does have low-space warnings, but it's not very granular, and it'd be nice to have within machinaris. On another note, have you tested HDDcoin v2.0 at all? I'm sort of interested in their HODL program, which requires the new version.
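     For the curious, the text-dump idea I've seen mentioned boils down to something like this (untested by me on a full-size db; the filenames are placeholders):
        sqlite3 blockchain_v1_mainnet.sqlite ".dump" > dump.sql     # export the whole db as SQL text
        sed -i '1d;$d' dump.sql                                     # drop the first and last lines, as suggested
        sqlite3 repaired.sqlite < dump.sql                          # re-import into a fresh database file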
  7. Quick note to anyone having the problem of the farmer and wallet not starting up after updating the docker which currently runs "1.2.1-dev0": try forwarding port 8555 in your router; after I did this everything starts normally again. The Chia v1.2.0 release notes say something about an RPC system being added that uses this port. EDIT: Another quick note: to pool, you need to generate new plots with the -c switch. Read tjb_altf4's guide thoroughly or check out the official documentation; it is not enough to upgrade to 1.2 and create new plots. Unless plots are created with the -c switch they will be legacy solo-farming plots. That said, I am currently stuck trying to join a pool. I have followed tjb_altf4's guide above to successfully create the NFT (well, it shows up with plotnft show, and I now have two wallets, so I think that has worked), but when I try to join a pool with "chia plotnft join -i 2 -u https://europe.ecochia.io" I get the somewhat puzzling error message "Incorrect version: 1, should be 1". Anyone know what's up with that? EDIT: Must have been a problem specific to ecochia; no problem with pool.xchpool.org.
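     To illustrate what I mean by the -c switch (the contract address below is a placeholder; get your real one from "chia plotnft show", and check "chia plots create -h" on your version):
        # portable (poolable) plot: pass the pool contract address with -c instead of the old -p pool public key
        chia plots create -k 32 -c xch1yourpoolcontractaddressxxxxxxxxxxxx -t /mnt/plotting -d /mnt/plots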
  8. For the sake of people searching the forum, the card featured in that video is the Lenovo FRU 03X3834. The TheArtofServer guy says it is a newer card than the IBM FRU 46M0997, and that the cards he received did not have any of the firmware bugs the other cards used to come with. I had a lot of trouble digging up any mention of this card, so thank you for linking this video.
  9. Thank you to all who have contributed to this mammoth information dump. I tried to follow the instructions in the wiki to convert my recently acquired IBM M1015 card to an LSI SAS9211-8i, but the established process failed with "No LSI SAS adapters found!" already in step 1. Maybe the information on how to overcome this is already in one of the prior 67 pages of the thread, but I did not sit down to read through it all. I did find a workable solution here: https://www.truenas.com/community/threads/ibm-serveraid-m1015-and-no-lsi-sas-adapters-found.27445/ I have used this successfully and have taken the liberty of updating the unraid wiki with this information, hoping it might help someone. This is what I added:
     Note on converting newer IBM M1015 cards to a plain SAS2008 (LSI SAS9211-8i): if you encounter "No LSI SAS adapters found!" in step 1 and when launching the LSI SAS2FLSH tool (either the DOS or EFI version) manually, it may be because newer versions of the IBM M1015 firmware prevent the card from being recognized by the LSI tool. In this case you will need to:
     - Obtain a copy of "sbrempty.bin", for example from https://www.mediafire.com/folder/5ix2j4jd9n3fi77,x491f4v3ns5i40p,1vcq9f93os76u3o,yc0fsify6eajly0,xkchwsha0yopqmz/shared
     - Manually read the SAS address from the sticker on the back side of the card, as you aren't able to read it out with the sas2flsh.exe tool. It has the format "500605B x-xxxx-xxxx"; ignore spaces and dashes and note down the SAS address in the format "500605Bxxxxxxxxx".
     - Still read all the instructions and precautions in the guide (have only one controller card in the machine, preferably have the machine on UPS power, etc.).
     - Execute "MEGAREC -writesbr 0 sbrempty.bin" (this wipes out the vendor ID; after this command SAS2FLSH can see the card but refuses to read out the SAS address or erase the card).
     - Execute "MEGAREC -cleanflash 0" (this erases the card, including the SAS address).
     - Reboot the machine.
     - From here you can follow the guide in the P20 package from step 3 onwards.
     - Run the "5ITP20.BAT" batch file in the 5_LSI_P20\ folder, then 6.bat in the root folder that you modified with your SAS address beforehand.
     - For ease of reference, and because not much is left of the guide at this point, the actual commands remaining are "SAS2FLSH -o -f 2118it.bin" to flash the P20 IT-mode firmware, and "SAS2FLSH -o -sasadd 500605Bxxxxxxxxx" to set the SAS address.
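     For quick reference, the bare command sequence from the note above (DOS prompt, controller 0, SAS address taken from the sticker):
        MEGAREC -writesbr 0 sbrempty.bin        (wipes the vendor ID so SAS2FLSH can see the card)
        MEGAREC -cleanflash 0                   (erases the flash, including the SAS address)
        <reboot>
        SAS2FLSH -o -f 2118it.bin               (flashes the P20 IT-mode firmware)
        SAS2FLSH -o -sasadd 500605Bxxxxxxxxx    (sets the SAS address noted from the sticker)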
  10. I too get this; it has probably happened 3 or 4 times in total. Today it happened on 6.1.9 while building a docker image I'm (trying to) develop. In my case, syslogd is the sender, and I noticed my tmpfs (/var/log) is full. Next time you guys get this, check df -h and look for:
        Filesystem      Size  Used Avail Use% Mounted on
        tmpfs           128M  128M     0 100% /var/log
     In my case /var/log/docker.log.1 was about 127MB in size (mostly gibberish). Last time this happened docker didn't like it much either: already-running dockers worked fine, but I was unable to start/stop new ones (the docker daemon seems to crash, and it's impossible to check syslog since that stops working too). Any good ideas on how to prevent docker logs from ballooning like they seem to do for me?
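     The quick checks I ran, in case anyone wants to compare (paths are from my unraid 6.1.x box):
        df -h /var/log                  # is the log tmpfs full?
        du -sh /var/log/*               # which file is eating the space?
        ls -lh /var/log/docker.log*     # in my case this was the culprit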
  11. Thanks for responding. I still think it would be worthwhile to list it as a known defect in the first post. EDIT: Or call it a known issue, whatever. Just please, let's not fill up this thread with pointless chatter. Whether this is an issue with the NZBget code or a docker issue, listing that sucker right up front would keep the thread more reader-friendly IMHO. Just a friendly suggestion.
  12. I am having an issue with scheduled tasks not running at local time; I need to enter scheduled times in GMT. I bet I'll have to go and edit them again once we switch to daylight saving time, too. Not a big deal, but I didn't have that problem with needo's docker back when I used it. If this is an acknowledged/known issue, please consider listing it in the top post as a known defect. And if you fix it, please make an announcement or update the top post. Thanks!
  13. Yes, Blake2 support in Bunker. Much less CPU intensive; a must if you want to run it on several disks at the same time, or if you want Plex transcoding while it's running.
  14. You are right, mv should be working, and as it turns out in most cases it does work for me. I might have been mistaken; I haven't run into it since. Sorry to cry wolf. Moving with MC works too (unless you are merging files into existing directories). Thank you for your elaborate note. However, while rsync'ing files (converting disks from ReiserFS to XFS), I ran into a problem with rsync: I apparently had some files with invalid extended attributes, and the way rsync handles that is to... not copy the files at all. So count your directory sizes before deleting anything from the old disks!! Here's what I got when trying to re-transfer a folder that turned out smaller on the destination disk:
        root@FileServer:~# rsync -avX /mnt/disk1/Common/* /mnt/disk16/temp/Common/
        sending incremental file list
        rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/MVI_1423.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
        rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/MVI_1433.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
        rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/MVI_7397.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
        rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/Rasmus film/Mindblowing transformation (1)/MVI_1433.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
        rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/Rasmus film/Mindblowing transformation (1)/Jyllingeskole/913_1622_02.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
        rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/Rasmus film/Mindblowing transformation (1)/Pilegaardsskolen/913_1620_01.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
        rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/Rasmus film/Mindblowing transformation (1)/Pilegaardsskolen/913_1618_02.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
        rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/Rasmus film/Mindblowing transformation (1)/Pilegaardsskolen/913_1621_01.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
        rsync: get_xattr_data: lgetxattr("/mnt/disk1/Common/MindBlowing/Rasmus film/Mindblowing transformation (1)/ekstra/MVI_1423.MOV","user.com.dropbox.attributes",159) failed: Input/output error (5)
        sent 2,017,059 bytes  received 6,978 bytes  15,161.33 bytes/sec
        total size is 118,985,987,854  speedup is 58,786.47
        rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1165) [sender=3.1.0]
     I can copy those same files with MC no problem. Perhaps your way of doing things would end up with the source disk having everything else deleted and only the problem files left; I don't know. But your method doesn't store the checksum with the files for later bitrot checking, unless you run bunker as a separate step. I still think having checksums stored in a file in each directory would be simpler and more robust overall, and it would solve this issue as well.
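     The size check I mean is as simple as comparing source and destination before touching the source (paths from the example above; the totals should match, with small differences possible from directory overhead on different filesystems):
        du -sb /mnt/disk1/Common /mnt/disk16/temp/Common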
  15. Hmmm, on my system, even when source and destination are on the same physical disk, the above results in reading and re-writing all the files instead of simply renaming the top-level dirs. It takes a long time and defeats the purpose of not deleting the source files until after the destination files have been verified. Am I missing something?
  16. "With the import and check options only a file reference is given, no folder (this information is already in the file to be imported/checked). So the syntax simply becomes: bunker -i -b2 -f /mnt/disk4/disk4blake2.txt" Thank you, that makes sense. A problem I have noticed with using this tool: if I rename a folder, all files within that folder lose their extended attribute and thus their hash. This is a problem if you want to rsync your files to a new drive in a temp location and then rename folders later, after the files have been verified (this avoids having duplicate files while the transfer and verification are ongoing). There are a couple of workarounds, one of which is:
     1) Generate hashes on the old (ReiserFS) drive (bunker -a /mnt/disk1)
     2) Copy to the new (XFS) disk (rsync -avX /mnt/disk1 /mnt/disk2/temp)
     3) Verify the files (bunker -v /mnt/disk2/temp)
     4) Export hashes from the temp location (bunker -e -f /mnt/cache/disk2temp.txt /mnt/disk2)
     5) Manually edit the hash file to replace '/mnt/disk2/temp/' with '/mnt/disk2/' (see the sed one-liner below)
     6) Move files from temp to their final location (mv /mnt/disk2/temp/* /mnt/disk2, or something along those lines)
     7) Re-import the hashes (bunker -i /mnt/cache/disk2temp.txt)
     However, it is also a problem when you want to reorganize your media library and rename/move many folders around. I'd love to hear of a solution that is more elegant than exporting hashes to one big file, manually find-and-replacing the paths, and then manually re-importing. I would also love it if bunker had the capability to store hash files per directory instead of the whole extended-attribute thing; this would elegantly avoid the above problem. It would also make it possible to generate hashes on the server but verify a file over the network from time to time (with, say, the corz tool). Is this a feature request that can be considered? Thanks again!
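     For step 5, the edit can be done with a one-liner instead of editing by hand (back up the hash file first; paths are from the example above):
        sed -i 's|/mnt/disk2/temp/|/mnt/disk2/|g' /mnt/cache/disk2temp.txt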
  17. I've been using Bunker for a while now - thank you, gentlemen! But the -i (import) option isn't working for me; I just get an "Invalid Parameter Specified" error. I have tried:
        bunker -i -b2 -f /mnt/disk4/disk4blake2.txt /mnt/disk4
     Same story with -c (check), but -a (add), -v (verify), -u (update) and -e (export) all worked fine. Are the import and check features simply not implemented yet?
  18. 2GB. It was a good amount 3 years ago, and it's hard to see that changing just because more is cheaply available (to people with DDR3 systems...).
  19. In the case I brought up, where unraid botched a disk rebuild, there was a far better action to take: reboot with a clean go script (no add-ons) and rebuild the disk again. Had I run a correcting parity check, I would not have had that option.
  20. Yeah, the theory is nice and all, but just a couple of releases back (4.6 and 4.7) there was a substantial bug in unraid that would cause a drive being rebuilt to have errors in the very first part of it (the superblock, I believe). This occurred for me several times; it is provoked by having add-ons running and accessing disks (changing the superblock) while the rebuild process starts. See my old topic on this: http://lime-technology.com/forum/index.php?topic=12884.msg122870#msg122870 Now, if that happens to you, then the next time you run a correcting parity check those errors will become permanent corruptions on the drive you had rebuilt. I am very grateful to Joe for advising all of us to run NON-CORRECTING monthly parity checks; thanks to this my unraid server maintains a perfect record of never losing or corrupting any data (I was able to successfully re-rebuild the disk in question by doing it without my add-ons running). Sure, the bug was eventually (after far, far, FAR too freaking long) corrected in unraid 5, but I say better safe than sorry. Non-correcting monthly parity checks are safest, and I would STILL like to see an option to automatically perform a non-correcting parity check after upgrading/rebuilding a disk.
  21. The other day I was sorting some media and somehow ended up with 5 duplicate files. This resulted in the logger crashing and restarting less than a day after I had 'organized' those files. It looks to be because cache_dirs keeps scanning the duplicated files, and each one is reported to syslog every time cache_dirs runs. I would have liked to attach a syslog, but... :-) Is it likely the cause is as I have described, and is there any way to avoid this (other than avoiding creating duplicate files)? If not, can we talk about an extension to unraid_notify that would send an email when syslog gets over a certain size (or the RAM filesystem is short on space)?
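     To be concrete, the kind of check I'm imagining is roughly this (the 10MB threshold and path are just examples; the actual notification would be unraid_notify's job):
        [ $(stat -c%s /var/log/syslog) -gt 10485760 ] && echo "syslog is over 10MB"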
  22. "It hangs because unmenu is waiting for cache_dirs to return some output... which it will not because of the way it runs in the background. The post below yours, or above mine as the case may be, will work to start cache_dirs." I don't think that's accurate. When cache_dirs starts, it does output a string to the console. It looks like this on mine:
        cache_dirs process ID 5317 started, To terminate it, type: cache_dirs -q
     Further, I've now tried to invoke a script of mine that starts cache_dirs with my favorite arguments (that way it always starts with the same arguments, both when called from the go script and from unmenu). I've added an echo command; the script looks like this:
        cache_dirs_args='-w -s -d 5 -e "Backup" -e "Games" -e "MP3BACKUP"'
        /boot/custom/cache_dirs/cache_dirs $cache_dirs_args
        echo "cache_dirs started in background with arguments" $cache_dirs_args
     Still the same problem: unmenu hangs when I invoke my script (which I'm positive does output text). I can use the AT workaround (thanks sacretagent), I'm just curious as to why this happens with the way I had done it. Trying to learn here.
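     For completeness, the AT workaround amounts to something like this (at hands the command to atd, so unmenu isn't left waiting on its output):
        echo "/boot/custom/cache_dirs/cache_dirs -w -s -d 5" | at now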
  23. You make it look so easy, Joe. So, do you have a specially customized Joe-version of the script for your own use? I have had cache_dirs on my to-do list for so long and only got around to installing it this week. I should have done it earlier; it's brilliant! Love the stop-array detection, you are truly an artist at this. About the stop detection: I have been thinking about adding some lines to kill other apps... Not sure I want to keep waiting for 5.0 final.
  24. Thanks guys! While I'm at it, is there a way to use cache_dirs to cache the folder.jpg files, to prevent my MediaPortal HTPC from spinning up drives when browsing folders? Or am I looking at adding a "cp folder.jpg /dev/null" or something to that effect somewhere in the script?
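     Something along these lines is what I have in mind (the share path is just an example; this pulls the images into the page cache rather than touching cache_dirs itself):
        find /mnt/user/Movies -name folder.jpg -exec cat {} + > /dev/null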
  25. How do you restart cache_dirs from unmenu? I created a file "93-unmenu_user_script_start_cache_dirs" in the unmenu dir on the flash with the content below (with linux-style line endings), and I get a button in the user scripts page:
        #define USER_SCRIPT_LABEL Start cache_dirs
        #define USER_SCRIPT_DESCR Start cache_dirs script
        /boot/custom/cache_dirs/cache_dirs -w -d 5
     But when I hit that user script button, unmenu hangs on me until I invoke cache_dirs -q in a terminal session and also manually kill one last remaining instance of cache_dirs that shows up in a "ps -e | grep dir" listing. Why does unmenu hang like that? I've tried nohup'ing the command, but no luck for me there. Please help!