


Community Reputation

3 Neutral

About Necrotic

  • Rank
    Advanced Member


  • Gender
  • Location
    East Coast, USA


  1. Necrotic

    New Emby Docker

    For those of you having issues with crashes, I highly advise you to seek support at the Emby forums. They don't really monitor this thread much from what I have seen, but you can get excellent support at theirs. Here is the thread specific to the Docker container: https://emby.media/community/index.php?/topic/9754-docker/
  2. Necrotic

    unRAID OS version 6.5.2 available

    I followed the instructions and upgraded from 6.3.5 to 6.5.2 without any issues so far. Good job guys!
  3. Necrotic

    High Memory Usage Alert

    This seems like something that could be done through a plugin.
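    For what it's worth, a check like this could live in a small script fired by a plugin or a scheduled user script; a minimal sketch only (the 90% threshold and the plain echo notification are placeholders, not part of any existing plugin):

    ```shell
    #!/bin/sh
    # Sketch: warn when memory usage crosses a threshold.
    THRESHOLD=90  # illustrative cutoff

    # Percent of memory in use, computed from free(1)'s Mem: line.
    USED_PCT=$(free | awk '/^Mem:/ { printf "%d", $3 / $2 * 100 }')

    if [ "$USED_PCT" -ge "$THRESHOLD" ]; then
        echo "High memory usage: ${USED_PCT}%"
    fi
    ```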
  4. Necrotic

    Preclear plugin

    No, I mean I get that spam without running preclear. It's just constantly adding to my log.
  5. Necrotic

    Preclear plugin

    Hi everyone, for some reason preclear is using up most of my log space. Has anyone experienced this? PS. I'm running version 6.3.5; I haven't updated yet since I wasn't so confident about stability and such.

    root@unRAID:~# df -h /var/log
    Filesystem      Size  Used Avail Use% Mounted on
    tmpfs           384M  356M   29M  93% /var/log

    root@unRAID:~# du -sm /var/log/*
    1    /var/log/PhAzE-Logs
    1    /var/log/apcupsd.events
    1    /var/log/apcupsd.events.1
    1    /var/log/apcupsd.events.2
    1    /var/log/apcupsd.events.3
    1    /var/log/apcupsd.events.4
    0    /var/log/btmp
    0    /var/log/btmp.1
    0    /var/log/cron
    0    /var/log/debug
    1    /var/log/dmesg
    2    /var/log/docker.log
    1    /var/log/faillog
    1    /var/log/lastlog
    0    /var/log/libvirt
    0    /var/log/maillog
    0    /var/log/messages
    0    /var/log/nfsd
    3    /var/log/packages
    0    /var/log/plugins
    351  /var/log/preclear.disk.log
    1    /var/log/removed_packages
    1    /var/log/removed_scripts
    0    /var/log/samba
    1    /var/log/scripts
    0    /var/log/secure
    0    /var/log/setup
    0    /var/log/spooler
    1    /var/log/syslog
    2    /var/log/syslog.1
    1    /var/log/wtmp

    This is what I can see when I do a tail; it's adding entries every 10 seconds or so:

    Thu Apr 5 17:51:32 EDT 2018: get_content Finished: 0
    Thu Apr 5 17:51:43 EDT 2018: Starting get_content: 0
    Thu Apr 5 17:51:43 EDT 2018: Disks:
    + /dev/sdd => /dev/disk/by-id/ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T2574503
    + /dev/sdc => /dev/disk/by-id/ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T2277054
    + /dev/sdb => /dev/disk/by-id/ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T2737524
    + /dev/sdf => /dev/disk/by-id/ata-WDC_WD60EFRX-68L0BN1_WD-WX11DC57SFX7
    + /dev/sdg => /dev/disk/by-id/ata-WDC_WD50EFRX-68L0BN1_WD-WXB1HB4KUF1J
    + /dev/sde => /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N0303087
    + /dev/sdh => /dev/disk/by-id/ata-Samsung_SSD_850_EVO_250GB_S21NNXBGA75793K
    + /dev/sda => /dev/disk/by-id/usb-Kingston_DT_Micro_1C6F654E4910BD30C95403FF-0:0
    Thu Apr 5 17:51:43 EDT 2018: unRAID Serials:
    + 0951-168A-4910-BD30C95403FF
    + WDC_WD60EFRX-68L0BN1_WD-WX11DC57SFX7
    + WDC_WD30EFRX-68AX9N0_WD-WMC1T2277054
    + WDC_WD30EFRX-68AX9N0_WD-WMC1T2574503
    + WDC_WD30EFRX-68EUZN0_WD-WCC4N0303087
    + WDC_WD30EFRX-68AX9N0_WD-WMC1T2737524
    + WDC_WD50EFRX-68L0BN1_WD-WXB1HB4KUF1J
    + Samsung_SSD_850_EVO_250GB_S21NNXBGA75793K
    + Kingston_DT_Micro_1C6F654E4910BD30C95403FF-0:0
    Thu Apr 5 17:51:43 EDT 2018: unRAID Disks:
    + /dev/disk/by-id/ata-WDC_WD60EFRX-68L0BN1_WD-WX11DC57SFX7
    + /dev/disk/by-id/ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T2277054
    + /dev/disk/by-id/ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T2574503
    + /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N0303087
    + /dev/disk/by-id/ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T2737524
    + /dev/disk/by-id/ata-WDC_WD50EFRX-68L0BN1_WD-WXB1HB4KUF1J
    + /dev/disk/by-id/ata-Samsung_SSD_850_EVO_250GB_S21NNXBGA75793K
    + /dev/disk/by-id/usb-Kingston_DT_Micro_1C6F654E4910BD30C95403FF-0:0
    Thu Apr 5 17:51:43 EDT 2018: benchmark: get_unasigned_disks() took 0.004354s.
    Thu Apr 5 17:51:43 EDT 2018: benchmark: get_all_disks_info() took 0.004442s.
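    For anyone hitting the same wall, the offending file can be confirmed and emptied without a reboot; a hedged sketch (the preclear log path matches the du output above, and /var/log is a tmpfs, so the space comes straight out of RAM):

    ```shell
    # List the largest items under /var/log to confirm the culprit.
    du -sm /var/log/* | sort -rn | head -n 5

    # Truncate the runaway log in place. Truncation (rather than rm) is
    # safer here: a process holding the file open keeps writing to the
    # same inode, so deleting it would not actually free the space.
    : > /var/log/preclear.disk.log

    # Verify the space was reclaimed.
    df -h /var/log
    ```

    This only buys the space back; stopping whatever is polling every ~10 seconds needs a fix in the plugin itself.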
  6. Necrotic

    [PhAzE] Plugins for Unraid 5/6

    No idea. I remember the first time it started it looked OK; I had done the same thing as you, I think. But as soon as it tried to rescan, it basically wiped everything from the database, and I think it doubled up my appdata folder. I just didn't want it re-downloading everything.
  7. Necrotic

    [PhAzE] Plugins for Unraid 5/6

    Make sure you restart and look at it again. When I first moved it over it seemed fine, but when I did a refresh it went haywire and wiped my database; that's why I had to go through the whole process of editing the database...
  8. Did anyone else have issues with cachedirs going nuts and pegging one CPU at 100% forever? It happens rarely, but over the past year or so it has happened twice. I went into settings, disabled it, re-enabled it, and that fixed it.
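    When a single process pins a core like that, it can usually be identified before resorting to toggling the plugin; a generic sketch (not specific to any one plugin):

    ```shell
    # Show the five heaviest CPU consumers with their elapsed run time --
    # a stuck process shows high %CPU combined with a long etime.
    ps -eo pid,pcpu,etime,comm --sort=-pcpu | head -n 6
    ```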
  9. Did you get a server to work? Which Docker container did you use, and what was your experience? Thanks! Edit: Never mind; it seems the reason it doesn't work is that steamcmd is 32-bit and Unraid is running a 64-bit kernel without 32-bit emulation enabled.
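    Whether a binary is 32-bit, and whether the running kernel can execute it, can both be checked from the shell; a sketch (the steamcmd path is illustrative, and /proc/config.gz is only present when the kernel exposes its build config):

    ```shell
    # Report the binary's architecture; SteamCMD's Linux build prints
    # "ELF 32-bit LSB executable" here.
    file ~/steamcmd/linux32/steamcmd

    # Check whether the kernel was built with 32-bit (IA32) emulation.
    zgrep CONFIG_IA32_EMULATION /proc/config.gz
    ```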
  10. Necrotic

    [Support] Linuxserver.io - SABnzbd

    Thanks. I got the following:

    Done, had to relocate 11 out of 235 chunks

    The cache now says:

    Data, single: total=220.01GiB, used=68.10GiB
    System, single: total=4.00MiB, used=48.00KiB
    Metadata, single: total=2.01GiB, used=619.44MiB
    GlobalReserve, single: total=285.98MiB, used=0.00B

    Now the big question is: how do I get it back to xfs without messing everything up?
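    For what it's worth, there is no in-place conversion from btrfs back to xfs; the usual route is back up, reformat, restore. A rough sketch of the idea only (the paths and device name are illustrative placeholders, and on Unraid the reformat step is normally done through the GUI with the array stopped, not with mkfs by hand):

    ```shell
    # 1. Copy everything off the cache, preserving permissions and times.
    rsync -avh /mnt/cache/ /mnt/disk1/cache-backup/

    # 2. Reformat the partition as xfs -- THIS DESTROYS ALL DATA on it.
    #    /dev/sdX1 is a placeholder; double-check the device first.
    mkfs.xfs -f /dev/sdX1

    # 3. After remounting, copy the data back.
    rsync -avh /mnt/disk1/cache-backup/ /mnt/cache/
    ```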
  11. Necrotic

    [Support] Linuxserver.io - SABnzbd

    Well shoot, I think this is what it did automatically. I am in the wrong place for this post, then... Do you think I should repost, and if so, where? I did manage to go in and wipe the recycle bin; that cleared some space and seemed to give me some breathing room.
  12. Necrotic

    [Support] Linuxserver.io - SABnzbd

    Edited my post. Seems like my entire system just went on the fritz....
  13. Necrotic

    [Support] Linuxserver.io - SABnzbd

    Yes, which diagnostics in particular? The SMART report? If so, here it is: Samsung_SSD_850_EVO_250GB_S21NNXBGA75793K-20171216-0155.txt Edit: Just in case, here is the Unraid one: unraid-diagnostics-20171216-0158.zip
  14. Necrotic

    [Support] Linuxserver.io - SABnzbd

    I am having some problems with this Docker container. All of a sudden I am getting the following error on the dashboard: "Too little diskspace forcing PAUSE". Both the incomplete and complete folders are bound to my cache drive, which has 100GB+ of free space, yet it's happening all the time now. I was able to force it to finish a download by pausing and unpausing multiple times, but it keeps happening. Below is the section of the log where it pauses:

    2017-12-15 19:56:17,177::INFO::[directunpacker:265] DirectUnpacked volume 33 for aBtwz76srBc3AATP
    2017-12-15 19:56:23,205::WARNING::[assembler:77] Too little diskspace forcing PAUSE
    2017-12-15 19:56:23,205::INFO::[downloader:277] Pausing
    2017-12-15 19:56:23,205::INFO::[directunpacker:445] Aborting all DirectUnpackers
    2017-12-15 19:56:23,205::INFO::[directunpacker:372] Aborting DirectUnpack for aBtwz76srBc3AATP

    Does anyone have suggestions about what is going on? sabnzbd (2).log
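    One thing worth checking: SABnzbd measures free space on the paths as seen from inside the container, so if a folder mapping accidentally points into the Docker image instead of the cache, the container can run out of space while the host still shows 100GB+ free. A sketch (the container name and internal paths are illustrative):

    ```shell
    # Free space as the container sees it -- this is what SABnzbd's
    # disk check actually measures.
    docker exec sabnzbd df -h /incomplete /complete

    # Compare with the host's view of the cache drive.
    df -h /mnt/cache
    ```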
  15. Necrotic

    [Request] Docker not dissapear on segfault

    Ok, thanks! Still learning Docker.