heffe2001

Members
  • Posts: 411
  • Joined
Everything posted by heffe2001

  1. The supplied JSON file seems to include most of what I'm looking for (at least I haven't missed anything because of it yet).
  2. I got a subscription for Newznab Plus to get a newznab ID and entered it on install of this docker, and it makes a huge difference over the public regex feed that's included built in. Anybody having issues may want to give it a go. Just go to Newznab.com and purchase Newznab Plus; they will give you an ID number. Then change the docker variable regex_url to: http://www.newznab.com/getregex.php?newznabID=<YOURIDNUM> I also set my backfill to 90 days, though I'm not sure if that helps or not. I'm able to actually search the indexer now and get results, and I'm using it with CouchPotato, Sonarr (NZBDrone), and several other downloaders. You do still get quite a few failures on the decoding, but you're actually able to download what you do scrape...
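As a rough sketch of what that change looks like when recreating the container (the image name, container name, and port mapping here are placeholders and not from this thread; only the regex_url variable and the getregex.php URL are as described above, and the backfill variable name is a guess):

```shell
# Hypothetical re-creation of the container with the paid regex feed.
# "newznab" and "yourrepo/newznab" are placeholder names; <YOURIDNUM>
# stays as your own Newznab Plus ID.
docker stop newznab && docker rm newznab
docker run -d --name newznab \
  -e regex_url="http://www.newznab.com/getregex.php?newznabID=<YOURIDNUM>" \
  -e backfill_days="90" \
  -p 8080:8080 \
  yourrepo/newznab
```

Since docker variables are baked in at container creation, changing regex_url generally means removing and re-running the container rather than editing it in place.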
  3. Same on my system, antivirus (but I'm using NOD32). The only way to see the logs on my work system is to disable web protection. I've tried adding both the machine name and the IP to the whitelist, but it still won't show. Finally found the culprit: it's my antivirus (F-Secure). No option other than deactivating it completely to allow the log windows to work. Thanks all for the help, and sorry, as it seems to be unrelated to RC4. Hope this can help others anyway.
  4. Got it up and running on my system, and according to the stats it's finding stuff, but as for decoding the names, how would one add one of the better regex addons? Also, if you set it up initially with 0 in the backfill variable, how would you go about changing it to, say, 30 days? I'm guessing I'd need to wipe the install and re-install with that in the settings?
  5. I'm trying it from my work machine; I'll try my home machine and see if that changes anything. It did pull up logs here prior to RC4, though, and I was able to pull up one docker log earlier (I went straight to the docker page after I started the array and brought up a log for duckdns, which showed, then tried pysab, but neither that nor any other logs show now). Even running the command that the web log viewer uses to populate the screen hangs indefinitely on a command line. *EDIT* My home machine will open the logs fine. I never had any problem prior to RC4 on my work machine, though. Yes, all Dockers (4), VMs (1), and the unRAID log fail to show. The logs are working; just the webgui fails to show them. Have you tried a different machine? I have had (since back in the Dynamix days on 5.x) one machine that refuses to show the system or docker logs in the pop-up window (it just shows a blank white screen). It doesn't matter what browser I use; on other machines, it displays fine in all browsers. Perhaps this is related?
  6. Didn't help me; I even tried under IE and Firefox, with the same result: a white window.
  7. Hadn't noticed the main unRAID logs showing the same thing (white/blank window with "waiting for server" at the bottom), but I can verify now that it's doing just that on any attempt to view logs from the webui. I will say I was able to open ONE log file when I rebooted my system, but any subsequent attempt shows the white window.
     I'm experiencing the same issue here since the upgrade from RC3 to RC4.
     1. Dashboard view: opening the logs of a VM or Docker opens an empty window that never gets filled
     2. Dashboard view: clicking on the unRAID log opens an empty window as well
     3. Docker and VMs views: opening logs has the same result, a white window with no info
     The command line does work:
     /usr/bin/tail -n 42 -f /var/log/syslog 2>&1
     /usr/bin/docker logs --tail=350 -f PlexMediaServer 2>&1
     ...
     Tested with Firefox 38 and IE 11
  8. Has anyone else lost the ability to see Docker logs? It opens the popup window but never receives any data. I tried running the command-line version and didn't get any output either (I've left the popup window up for an hour+ and no data filled in).
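For anyone wanting to rule out the webgui itself, the commands its popup wraps can be run straight from a shell (these are the same commands quoted elsewhere in this thread; the container name is just an example):

```shell
# Tail the unRAID syslog directly, bypassing the webgui popup window.
tail -n 42 -f /var/log/syslog

# Tail a specific container's log; replace PlexMediaServer with your
# own container name (see `docker ps` for the list).
docker logs --tail=350 -f PlexMediaServer
```

If these produce output on the console but the popup stays blank, the problem is in the browser/webgui path (antivirus web protection was the culprit for some posters above), not in the logging itself.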
  9. Just an FYI, it seems to be working much better with my Headphones installation. I wonder if it was still working on its setup when I was trying yesterday.. Should I re-install, or leave it as-is? I'm guessing it won't matter if I leave it as it is now, since they don't update their stuff very often anyway..
  10. As far as indexing, I can remember having to do it with the VM they supplied, which with a full database and index was larger than the supplied VM's disk size (I had to expand it to 60GB just to fit everything). If it generates the index files while doing the import, that would most likely be sufficient. Mainly I just need the index that works with the Headphones searches, nothing else really. Same goes for replication: if it's on, that's all I need to know (I hated having to remember to log in and turn on replication on the VM version). My Headphones settings:
      Musicbrainz Mirror: Custom
      Host: 192.168.0.5
      Port: 5000
      Requires Authentication: (unticked)
      Sleep Interval: 1
      Everything default aside from the host. Sometimes you can click an artist name and it comes up with the album info for them, sometimes not. I had this problem frequently with the real MusicBrainz lookups set, very infrequently with the Headphones mirror, and very infrequently with the VM (if the VM was doing something intensive, it'd happen more often). It could be that I'm just doing too much on my unRAID box, causing timeouts. I might look at the sleep interval and see if upping it a bit would help. I wish the 'musicbrainz lookup' buttons in Headphones actually used the mirror site you have set up, instead of going to the main live site...
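A quick way to sanity-check that a mirror with those settings is answering (a sketch: the host and port come from the settings above, the artist query is an arbitrary example, and this assumes the mirror exposes the standard MusicBrainz /ws/2/ web service that Headphones talks to):

```shell
# Query the local MusicBrainz mirror for an artist search.
# 192.168.0.5:5000 are the Host/Port values from the Headphones settings.
curl -s "http://192.168.0.5:5000/ws/2/artist/?query=artist:Nirvana&fmt=json"
```

If this returns JSON with an artist list, the mirror is up and the "error pulling data" messages are more likely rate limiting or a gap in the imported data than a dead server.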
  11. Attempting it again now; hopefully it'll finish its run this time. It's installed, and I can bring up the webui, but I'm not sure if it's working correctly with Headphones or not yet. I've gotten a few MusicBrainz errors logged (can't find artist in this category, or something to that effect; I'll keep an eye out and see if I can get an exact message). The bad thing about using MusicBrainz with Headphones is that there's not much in the way of error logging. Does the system automatically generate the indices, or do we need to force that? Same goes for replication: is it automatic, or do we need to start it? **EDIT** Found a mention of the MusicBrainz error in the Headphones logs; not much info here, lol:
      21-May-2015 15:41:38 - INFO :: Thread-36 : [skid Row] There was either an error pulling data from MusicBrainz or there might not be any releases for this category
  12. It's giving me an error on first startup. I verified that I had all the data downloaded from the MusicBrainz server, but it looks like the import routines are not finding some of the files that are there (verified twice, downloaded twice). A log of the offending errors is attached.. musicbrainz.txt
  13. It's in beta, but it allows more private trackers in Sonarr via the Torznab protocol. The latest version is supposed to run on Linux (I'm guessing from a source build? It uses Mono on Linux). It'd definitely be nice to have alongside Sonarr to pull down my stuff from the private trackers that Sonarr doesn't yet support. https://www.reddit.com/r/trackers/comments/333wax/jackett_app_additional_trackers_for_sonarr/ https://github.com/zone117x/Jackett
  14. My Windows 10 box doesn't see the unRAID box by name, but it works fine by IP address. It's not a VM though, it's on bare metal. Do you have the link for the .103 virtio drivers? All I can find is .100 and .96. The .100 worked previously, but perhaps that was just luck. I'm not passing any hardware either. https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/ Use the Windows 8.1 drivers from the .103 driver package. Load the net, disk, and balloon drivers before you load Windows 10.
  15. I'm eager to see this up and running. Wonder how hard it'd be to just move my database from my VM to the docker?
  16. Try using the supplied VM file and doing a full update including artwork; the small image size they used won't hold the latest database with all the data. I'd definitely rather have a docker of this than the VM I currently use (I rolled my own from source, which was a pain in the butt, but it's current as of a week ago and seems to run fine; it just uses more resources to run a whole VM rather than a docker). My current data directory is just over 50GB, if that helps you out..
  17. It's strange. What I found about 2267 is something about /etc/hosts. Not sure what that can be. I'm using /etc as read-only, and specifically /etc/ssmtp/ssmtp.conf, so that the mail process (ssmtp) can read its configuration file (defined via unRAID). I had the same problem, but didn't have notifications turned on in unRAID. Set that up, and now unBalance works.. *EDIT* Upon further inspection, it's still crashing out (I saw that it had the green running ball, but didn't refresh to see that it was still crashing).
      *** Killing all processes...
      Traceback (most recent call last):
        File "/sbin/my_init", line 338, in <module>
          main(args)
        File "/sbin/my_init", line 257, in main
          export_envvars()
        File "/sbin/my_init", line 84, in export_envvars
          with open("/etc/container_environment/" + name, "w") as f:
      FileNotFoundError: [Errno 2] No such file or directory: '/etc/container_environment/LC_ALL'
      I've tried it with /etc read-only (the default) and RW; same error both times.
  18. The pipe going up front goes to the engine compartment, giving it access. Don't think I'd want that near the gas tank myself, lol.
  19. http://www.ebay.com/itm/areca-ARC-1200-2-Port-PCI-Express-x1-SATA-Controller-Card-/151649601937? Asking $130 buy-it-now, or bid at $99.99. Still brand new in box, never installed or powered on. Comes with all the accessories that were included with it (both profile brackets, both SATA cables, books and disk, plus the box). Bought it to make a RAID0 parity setup, but ended up going with a 1231ML and replacing one of my MV8 cards in that machine (I'm out of x1 ports on my board, so I had to go that route). I'm just out of the return window with Newegg, so my loss is your gain. For completeness' sake, I'm located in the US.
  20. Can't seem to get unRAID to use the RAID0 array as parity. It will let me select it, and it doesn't give a size error, but when I click the checkbox to accept the configuration and hit start, it never gets to the part where it mounts the file systems. It looks to me like the relevant bits of the attached syslog are:
      Apr 13 15:32:10 media01 emhttp: shcmd (195): sgdisk -Z /dev/sdb &> /dev/null
      Apr 13 15:32:11 media01 kernel: sd 6:0:0:0: [sdb] 1953509120 4096-byte logical blocks: (8.00 TB/7.27 TiB)
      Apr 13 15:32:11 media01 kernel: sdb: sdb1
      Apr 13 15:32:11 media01 kernel: sdb: p1 size 34359738360 extends beyond EOD, truncated
      Apr 13 15:32:11 media01 emhttp: shcmd (196): sgdisk -o -a 64 -n 1:64:0 /dev/sdb |& logger
      Apr 13 15:32:12 media01 logger: Creating new GPT entries.
      Apr 13 15:32:12 media01 logger: The operation has completed successfully.
      Apr 13 15:32:12 media01 emhttp: shcmd (197): udevadm settle
      Apr 13 15:32:12 media01 kernel: sd 6:0:0:0: [sdb] 1953509120 4096-byte logical blocks: (8.00 TB/7.27 TiB)
      Apr 13 15:32:12 media01 kernel: sdb: sdb1
      Apr 13 15:32:12 media01 emhttp: invalid partition(s)
      Apr 13 15:32:12 media01 emhttp: shcmd (198): :>/etc/samba/smb-shares.conf
      Apr 13 15:32:12 media01 avahi-daemon[20909]: Files changed, reloading.
      Apr 13 15:32:12 media01 emhttp: Restart SMB...
      Wonder if it's because I used a 4K block size instead of 64-bit LBA on the Areca volume? Going to try changing it and see how it goes. **EDIT** That was it; I changed to 64-bit LBA, and it allowed me to add it as the parity drive. Generating parity at just shy of 100MB/s.. syslog.txt
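For reference, the partitioning commands emhttp runs (taken from the syslog above) can be reproduced by hand. These are destructive, so the device node here is deliberately a placeholder rather than a real disk:

```shell
# WARNING: destructive. /dev/sdX is a placeholder for the Areca volume.
# Zap any existing MBR/GPT structures on the device.
sgdisk -Z /dev/sdX

# Create a fresh GPT with a single partition starting at sector 64,
# aligned to 64 sectors, spanning the rest of the disk (unRAID's layout).
sgdisk -o -a 64 -n 1:64:0 /dev/sdX
```

The "p1 size ... extends beyond EOD, truncated" kernel message is the giveaway here: with the controller exporting 4K logical blocks, the old partition table described more sectors than the device reported, so emhttp rejected the partitioning until the volume was re-exported with 512-byte sectors (the 64-bit LBA setting).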
  21. Running a preclear on my 2 4TB Reds in an Areca RAID0 array, using BJP's modded preclear, and getting right around 305MB/s... As soon as that process has finished, I'll assign it as my parity drive and see how long it takes to generate parity for my currently unprotected array of 9 disks (+ parity drive). With the Areca naming fixes applied, the array is identified by more than just a number (same for drives assigned to that controller). The drives I moved to it with data on them are actually able to be seen and read as well; I was semi-worried that those drives would need to be reformatted coming from a SM MV8 card, but I just assigned them as pass-through in the Areca setup, assigned them in unRAID, and they worked without issue. Name assigned by unRAID to the Areca volume: ARC-1231-VOL#00_0000002794297501 Now if Limetech could fix the temp display of drives attached to the Areca controller in Dynamix, I'd be a happy camper..
  22. The case is just slightly taller than the width of the backplate, and it's mounted vertically in the case, so there's no room for an additional 1.7" for the board. If the manufacturer would modify the back plate like the poster above did, it'd be simple to put that board in that case; it's even got the 8087 ports on board that would handle all the slots in the case already.. You can see the back plate in this image:
  23. There's a thread on here (http://lime-technology.com/forum/index.php?topic=32704.0) where the OP ends up modifying the case back plate to house the extended Mini-ITX board; I'd rather just put something in it that works without much 'work', lol. Buying memory wouldn't be a huge deal, but 16GB modules aren't exactly cheap, and it also looks like most Mini-ITX boards only 'officially' support 16GB. As of right now, my system seems to be running pretty well (preclear was using about 20-25%), although I'd love to move over to that case; it's much smaller than my Chenbro SR107..
  24. Finally done, lol.
      ==========================================================================1.15
      = unRAID server Pre-Clear disk /dev/sdl
      = cycle 1 of 1, partition start on sector 1
      = Disk Pre-Clear-Read completed                                  DONE
      = Step 1 of 10 - Copying zeros to first 2048k bytes              DONE
      = Step 2 of 10 - Copying zeros to remainder of disk to clear it  DONE
      = Step 3 of 10 - Disk is now cleared from MBR onward.            DONE
      = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4        DONE
      = Step 5 of 10 - Clearing MBR code area                          DONE
      = Step 6 of 10 - Setting MBR signature bytes                     DONE
      = Step 7 of 10 - Setting partition 1 to precleared state         DONE
      = Step 8 of 10 - Notifying kernel we changed the partitioning    DONE
      = Step 9 of 10 - Creating the /dev/disk/by* entries              DONE
      = Step 10 of 10 - Verifying if the MBR is cleared.               DONE
      = Disk Post-Clear-Read completed                                 DONE
      Disk Temperature: 26C, Elapsed Time: 77:31:41
      ==========================================================================1.15
      == ST8000AS0002-1NA17Z   Z8402DA1
      == Disk /dev/sdl has been successfully precleared
      == with a starting sector of 1
      ============================================================================
      ** Changed attributes in files: /tmp/smart_start_sdl /tmp/smart_finish_sdl
      ATTRIBUTE                NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate        116     100        6            ok          104894064
      Seek_Error_Rate             73     100       30            ok           21296222
      Spin_Retry_Count           100     100       97            near_thresh  0
      End-to-End_Error           100     100       99            near_thresh  0
      Airflow_Temperature_Cel     74      75       45            ok           26
      Temperature_Celsius         26      25        0            ok           26
      Hardware_ECC_Recovered     116     100        0            ok          104894064
      No SMART attributes are FAILING_NOW
      0 sectors were pending re-allocation before the start of the preclear.
      0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
      0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
      0 sectors are pending re-allocation at the end of the preclear;
        the number of sectors pending re-allocation did not change.
      0 sectors had been re-allocated before the start of the preclear.
      0 sectors are re-allocated at the end of the preclear;
        the number of sectors re-allocated did not change.
  25. The standard old 1.15, I think, not the one he updated. It's at 95% at the moment, 74:55:47 elapsed, and speed is around 91MB/s.