I often log in to some vanilla unix servers running on a public IP and see the following welcome:
Last login: Wed Mar 29 14:42:22 on ttys000
Stevens-MacBook-Pro:~ steven$ ssh root@somedomain.com
Last failed login: Wed Mar 29 16:31:36 EDT 2017 from 61.177.172.46 on ssh:notty
There were 13904 failed login attempts since the last successful login.
Last login: Tue Mar 28 09:41:19 2017 from 67-8-248-179.res.bhn.net
As an old school system administrator this is a scary welcome. I do not want ANY bots, scripts, or other IPs attempting logins on my servers. There are many ways to block, firewall, redirect, and be proactive to lessen these types of login attempts. What kinda fun is that? That also doesn't change the fact that they are still trying to "hack" anything online with an IP address. I do not like hackers or login bots, and I have a lot of IP addresses.
Today I had an idea to trap the login bots' IP addresses, log them to a public website, and automate an abuse complaint back to the originating ISP. My goal is a setup I can install on many systems to collect thousands of IPs. Maybe even a unix flavor repo I can yum install as tests on the various systems I currently administrate. Some of my other goals here are to use basic command line commands and basic procedures. Eventually I would like a system of reporting the IPs off the server via an API rather than a local database, but some transport method will be a prerequisite for that anyway, so for now I do not want to focus on it.
First thing I do is create a database `failed_logins` with a table `ip`, and then find some sweet command lines that will process my /var/log/secure and the logrotated /var/log/secure-[date] files for "Failed", count, and create a file of IPs and counts. For now I am only concerned with the IP and the number of times it's trying to login. I could get timestamps for all failures, ports, usernames, but again, moving fast, pass. The command I settled on was:
awk '/Failed/ {x[$(NF-3)]++} END {for (i in x){printf "%3d\t%s\n", x[i], i}}' /var/log/secure | sort -nr > ips.data
Which gave me this:
23196   61.177.172.46
17304   118.212.135.3
14671   61.177.172.32
 8247   116.31.116.5
 4946   222.59.162.10
 4022   61.177.172.53
 3420   223.99.60.46
 3407   116.31.116.27
 2334   218.65.30.124
 1942   116.31.116.20
 1632   116.31.116.23
 1482   218.87.109.152
 1239   61.177.172.59
...
There were 289 total IPs with login failures in the current log (7 days of logrotate). This is about normal for a public server these days.
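As a sanity check on why the awk above grabs `$(NF-3)`, here is a quick test against a made-up (but typical) secure-log failure line — such lines end "... from <ip> port <port> ssh2", so the IP sits 3 fields from the end:

```shell
# A typical /var/log/secure failure line (sample data, not from a real host).
line='Apr  1 12:00:00 host sshd[123]: Failed password for root from 61.177.172.46 port 22 ssh2'

# $(NF-3) counts back from the last field: ssh2, 22, port, <ip>.
ip=$(printf '%s\n' "$line" | awk '{print $(NF-3)}')
echo "$ip"
```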
Second, I need to insert this data into mysql via some SQL (automate.sql). I am not going to do any transformation of the data, so this is a straight insert only.
LOAD DATA LOCAL INFILE 'ips.data'
INTO TABLE `failed_logins`.`ip`
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(count, ip);
So then I add this to my command line:
mysql failed_logins < automate.sql
Now I can see the data in SQL:
Third, I need a way to process this data toward the final goals: available for the website, and a notification back to the ISP. Since data will be processed weekly, stored indefinitely, and it is possible for an IP's login attempts to continue through log rotations, I will need counts to add up over time (not just insert). To do this at the SQL level I will be using some very clever queries and moving the final IP and count data into the table `failed_logins`.`fail2notify`, using the current table as a staging platform for the inserts. Once the new data is ready, the following SQL commands will process it into its permanent home:
-- get any counts for existing ips, and increment them
UPDATE `failed_logins`.`fail2notify` a
LEFT JOIN `failed_logins`.`ip` b ON b.`ip` = a.`ip`
SET a.`count` = a.`count` + ( SELECT SUM( `count` ) FROM `failed_logins`.`ip` WHERE `ip` = a.`ip` );

-- delete those ips affected above
DELETE b FROM `failed_logins`.`fail2notify` a
LEFT JOIN `failed_logins`.`ip` b ON b.`ip` = a.`ip`;

-- insert the remaining new ips
INSERT INTO `failed_logins`.`fail2notify` ( `ip`, `count` )
SELECT `ip`, `count` FROM `failed_logins`.`ip`;

-- delete the inserted data remaining in staging
DELETE b FROM `failed_logins`.`fail2notify` a
LEFT JOIN `failed_logins`.`ip` b ON b.`ip` = a.`ip`;
After processing the first round of data, I have moved 289 IPs into `failed_logins`.`fail2notify` and emptied the staging table `failed_logins`.`ip`. Notice that I have now included a timestamp and an ID, which will be necessary for some programming later:
If I run this again, it will process the 289 again, seeing them as new data and incrementing the counts, exactly as I would expect when running a new log file. This time I run the SQL all as one and verify the counts are doubled:
The fourth part starts with evaluating what we need to do for programming. I need to automate a lookup of each IP address in order to find the IP's registered abuse address. This information will be used to formulate the automated message containing the IP, number of attempts, and timestamp. I will need to record that I have already notified the ISP this week, so that will require another table, `failed_logins`.`notifications`.
To send an abuse notification I need to do a whois on the IP and get the abuse email addresses. This is going to require some programming, and learning which whois server to use for different international IPs. For USA addresses "whois -h whois.arin.net 61.177.172.46" works. However, for our #1 example we need this command to see all the emails:
whois -h whois.apnic.net 61.177.172.46
inetnum: 61.177.0.0 - 61.177.255.255
netname: CHINANET-JS
descr: CHINANET jiangsu province network
descr: China Telecom
descr: A12,Xin-Jie-Kou-Wai Street
descr: Beijing 100088
country: CN
admin-c: CH93-AP
tech-c: CJ186-AP
mnt-by: MAINT-CHINANET
mnt-lower: MAINT-CHINANET-JS
mnt-routes: maint-chinanet-js
changed: hostmaster@ns.chinanet.cn.net 20020209
changed: hostmaster@ns.chinanet.cn.net 20030306
status: ALLOCATED non-PORTABLE
source: APNIC
role: CHINANET JIANGSU
address: 260 Zhongyang Road,Nanjing 210037
country: CN
phone: +86-25-86588231
phone: +86-25-86588745
fax-no: +86-25-86588104
e-mail: ip@jsinfo.net
remarks: send anti-spam reports to spam@jsinfo.net
remarks: send abuse reports to abuse@jsinfo.net
remarks: times in GMT+8
admin-c: CH360-AP
tech-c: CS306-AP
tech-c: CN142-AP
nic-hdl: CJ186-AP
remarks: www.jsinfo.net
notify: ip@jsinfo.net
mnt-by: MAINT-CHINANET-JS
changed: dns@jsinfo.net 20090831
changed: ip@jsinfo.net 20090831
changed: hm-changed@apnic.net 20090901
source: APNIC
changed: hm-changed@apnic.net 20111114
person: Chinanet Hostmaster
nic-hdl: CH93-AP
e-mail: anti-spam@ns.chinanet.cn.net
address: No.31 ,jingrong street,beijing
address: 100032
phone: +86-10-58501724
fax-no: +86-10-58501724
country: CN
changed: dingsy@cndata.com 20070416
changed: zhengzm@gsta.com 20140227
mnt-by: MAINT-CHINANET
source: APNIC
% Information related to '61.177.0.0/16AS23650'
route: 61.177.0.0/16
descr: CHINANET jiangsu province network
country: CN
origin: AS23650
mnt-by: MAINT-CHINANET-JS
changed: ip@jsinfo.net 20030414
source: APNIC
% This query was served by the APNIC Whois Service version 1.69.1-APNICv1r0 (UNDEFINED)
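Since all I really need out of a whois dump is the abuse address, a rough first pass is to grep any line mentioning "abuse" for an email. A minimal sketch against a snippet of the output above (the real script would feed it live whois output instead of this pasted sample):

```shell
# A few lines lifted from the APNIC whois output above, as sample input.
whois_output='remarks: send anti-spam reports to spam@jsinfo.net
remarks: send abuse reports to abuse@jsinfo.net
e-mail: ip@jsinfo.net'

# Keep only lines mentioning "abuse", then pull the first email address.
abuse_email=$(printf '%s\n' "$whois_output" \
  | grep -i 'abuse' \
  | grep -oE '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+' \
  | head -n 1)

echo "$abuse_email"
```

This is brittle on purpose (whois formats vary wildly per registry), which is exactly why the later days end up parsing registry-specific JSON instead.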
Now we could proceed with creating a mail message to:
abuse@jsinfo.net
I am going to use local server mail to deliver, but in some situations it would be wiser to use some kind of API or managed delivery email address. For this purpose, a fast pass at the command line:
echo "The following IP address has attempted 23,317 login attempts on our networks as of 2017-03-31 12:29:10. Please take the necessary actions to prevent these malicious logins. Thank you, fail2notify system http://fail2notify.com" | mail -s "Abuse Notification for IP 61.177.172.46" abuse@jsinfo.net
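To reuse that one-liner per IP, the message can be templated. A sketch with hypothetical helper names (`notice_subject` and `notice_body` are mine, not part of any script above):

```shell
# Hypothetical helpers wrapping the mail one-liner above.
notice_subject() {  # args: ip
  printf 'Abuse Notification for IP %s\n' "$1"
}

notice_body() {  # args: ip count timestamp
  printf 'The following IP address (%s) has attempted %s login attempts on our networks as of %s. Please take the necessary actions to prevent these malicious logins. Thank you, fail2notify system http://fail2notify.com\n' "$1" "$2" "$3"
}

# Sending then becomes (disabled here, needs a working mail setup):
# notice_body 61.177.172.46 23317 '2017-03-31 12:29:10' \
#   | mail -s "$(notice_subject 61.177.172.46)" abuse@jsinfo.net
notice_subject 61.177.172.46
```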
Fifth and final for the day on this topic, I need a website. I decided to go with fail2notify.com as the name. If you research this failed login topic, a popular solution is fail2ban. That is a great solution to ban an IP after a certain # of failed logins, and I will probably use it. However, my idea is that once an IP is in the fail2notify trap, it will not be able to access any system running fail2notify. Ever.
My quick goal for the website is an easy one page application that looks like an old school green screen program. I start with a bootstrap core and quickly add some basic black and green css adjustments. Once I have the framework looking nice I want to bring in the data to display in a table. Seems like a pretty good time for DataTables. This will allow me to load the data externally from the user interface, and include search and pagination. I quickly get datatables working with some sample json data (data.txt) and now I have a website:
The next part is getting prepared for programming automation. I need a program to create a real json data.txt file that contains the full 289 results in the database table. At this time the application has zero programming, just the index, datatables, and the sample data.txt I made. To end the day I use phpmyadmin to create some json and quickly start wasting time trying to get the formatting correct for datatables. Once I had that sorted, and made some adjustments to the jquery: Voila!!
Day 2: 04/01/2017
Today I want to work on a few things: processing all rotated logs for March, creating data.txt to show live data in datatables, and working on the process design for the notification to the ISP.
I can easily import data for all the existing logs by running my commands on each existing log file. I empty the database, then create the ips.data files one at a time. Next I import and process each log to get all of the IPs and counts into `failed_logins`.`fail2notify`. I had just one minor adjustment to make in the SQL commands:
UPDATE `failed_logins`.`fail2notify` a
LEFT JOIN `failed_logins`.`ip` b ON b.`ip` = a.`ip`
SET a.`count` = a.`count` + ( SELECT SUM( `count` )
FROM `failed_logins`.`ip`
WHERE `ip` = a.`ip` )
WHERE b.ip IS NOT NULL;
Without that WHERE clause, any IP spanning more than one file got a count of 0 on the second execution. With this completed I now have 1451 total IPs, ranging from 1 failed login to 40,540 failed logins. This data only covers the end of February and the first 3 weeks of March. The last week of March will not process until it is rotated (4/3/17). After that rotation happens I will test the automation again and prepare it to run automatically on all future rotations (4/7/17 and on).
Next I want to focus on getting this data available on the website. Last night I left the site connected to a manually created data.txt file. Now I want that data to be delivered from a live call into the database. I was planning to use ruby on rails for this task, with the API as the application level, but that is another server setup that is hours away. I researched ways of going from mysql to json, tried a few, even some advanced queries, but was not able to wrap the object array data like I needed into a local file without a programming language.
I need to move faster, so I quickly create a local php script to run the SQL and save the JSON to data.txt. This only needs to be done when automation finishes, again one time per week. It's almost funny how much time was wasted testing/researching other ways when it is as simple as 9 lines of php:
<?php
// Pull all rows, wrap them in the {"data": [...]} shape DataTables expects,
// and write the JSON out for the website to load.
$sql = "SELECT `count`,`ip`,`timestamp` FROM `fail2notify` ORDER BY `count` DESC";
$db = new PDO("mysql:dbname=failed_logins;host=localhost", 'username', 'password');
$stmt = $db->prepare($sql);
$stmt->execute();
$result = $stmt->fetchAll();
$output = array();
$output['data'] = $result;
file_put_contents("/home/fail2notify/public_html/data.jsn", json_encode($output));
?>
I then adjusted index.html to use data.jsn. While I am in here I also adjust the sorting to descending on counts, as datatables seems to ignore the data order in the query above. Now I have the website showing all 1451 results with the highest count on top:

In order to view sample whois lookups I am going to make some processes that will get each IP's geo-data and link to an IP whois report viewable on screen. Being able to see what that data looks like will help me in programming the notification system, and it makes content for the website too. Providing a link to the IP geo-data and a link to the whois report will give the website (number of IPs x 2) pages of content.
I found an API for ip data in json format:
http://ip-api.com/json/223.99.60.46
It should be possible, during the processing of my data, to fetch the output of the above and store it as a JSON data type in SQL. Now let's find a whois API that has JSON output:
http://adam.kahtava.com/services/whois.json?query=223.99.60.46
I will need to append the output of each of these to the ips.data file and then modify `failed_logins`.`ip` and `failed_logins`.`fail2notify` to include 2 new JSON fields. Then this data will flow through to the application. Including this all the way out to data.jsn will make the file loading into the user interface very large. This poses some serious questions:
When will it be too large to load in a reasonable time?
Why not just fetch the data from the source links above directly in the U/I?
What happens when the stored information changes, etc?
For the website I will keep the data light with the current 3 fields, and only make the deeper data available from buttons in the U/I upon request. These do not need to be realtime lookups. It is okay to store just a snapshot of this data from the time the IP is notified out. I am not trying to maintain that information historically, just show what was captured while having text/content on the website. The only functional purpose of my lookups is to fetch the abuse email address.
Day 3: 04/03/2017
Today I start out by combining my working commands into a single line that processes the latest log rotation, loads that data into SQL, and makes a new public data.jsn file. This is most of the automation, to run about 4am Sunday morning:
awk '/Failed/ {x[$(NF-3)]++} END {for (i in x){printf "%d\t%s\n", x[i], i}}' /var/log/secure-20170403 | sort -nr > /home/fail2notify/CronJobs/ips.data && /usr/bin/mysql failed_logins < /home/fail2notify/CronJobs/automate.sql && /usr/bin/php /home/fail2notify/CronJobs/makeData.php
After executing this command I now have a total of 1807 IPs with failed logins:
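To run that pipeline unattended on Sunday mornings, something like this crontab entry would do. This is a sketch: `process.sh` is a hypothetical wrapper holding the awk | mysql | php one-liner above, and `0 4 * * 0` means 4:00am on Sundays:

```shell
# Hypothetical crontab entry (crontab -e); process.sh wraps the one-liner above.
0 4 * * 0 /home/fail2notify/CronJobs/process.sh >> /home/fail2notify/CronJobs/cron.log 2>&1
```

Logging stdout/stderr to a file is worth keeping, since cron otherwise mails (or silently drops) any output.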
Next I want to roll through all of these IPs and store the output of the first API call (ip_data):
http://ip-api.com/json/116.31.116.5
This was a bigger battle at the server level to get mysql working with the JSON data type. There were also quite a few issues between phpmyadmin and the JSON data type. This turned into issues with mysql data formats along the way to upgrading to PHP 5.6 and the current version of phpmyadmin. Once I got everything sorted, my script is now loading all the ip_data JSON into mysql.
Later I will add the country to the datatables and begin to build a method to view these full JSON objects inside the application:
{"as":"AS134764 CHINANET Guangdong province network","city":"Shenzhen","country":"China","countryCode":"CN","isp":"China Telecom Guangdong","lat":22.5333,"lon":114.1333,"org":"China Telecom Guangdong","query":"116.31.116.5","region":"44","regionName":"Guangdong","status":"success","timezone":"Asia/Shanghai","zip":""}
Next I am going to add the ip_whois and store that information in mysql. I already noticed that the sample above does not have the right abuse email for Chinese addresses. I am going to run them all through, see how many are China, and then try to find a China whois api for the second API call (ip_whois):
http://adam.kahtava.com/services/whois.json?query=116.31.116.5
This process for the whois data was easy to create using the first one as an example. Later, for automation, I will likely combine them. For now they are running separately and processing data in the background. Both are taking a good deal of time to complete. I am adding 2 second sleep calls in each loop so that the scripts do not hit the source urls too many times per second.
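The loop shape I am using is roughly this: build the lookup url per IP, fetch, sleep, repeat. A sketch only — the helper names are mine, and the real fetch line is commented out here so nothing actually hits the APIs:

```shell
# Helpers to build the two lookup URLs used above (names are my own).
ip_data_url()  { printf 'http://ip-api.com/json/%s' "$1"; }
ip_whois_url() { printf 'http://adam.kahtava.com/services/whois.json?query=%s' "$1"; }

# Throttled loop: sleep 2 between requests so we are polite to the source APIs.
for ip in 223.99.60.46 116.31.116.5; do
  echo "would fetch: $(ip_data_url "$ip")"
  # curl -s "$(ip_data_url "$ip")" > "ipdata/$ip.json"   # real fetch (disabled)
  sleep 2
done
```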
With the scripts running I switch over to the application and start adding Country to the table view. I need to get that value from the json data type and I can do that with some very kewl new SQL:
SELECT ip, JSON_UNQUOTE( JSON_EXTRACT(ip_data, '$.country') ) AS `country` FROM `fail2notify`
I can now adjust my makeData script to send the country and datatables to show it:
Now that I have some whois data rolling in, I want to search it and see if I can find the abuse keyword in the output. Here are a few lookups:
SELECT * FROM `fail2notify` WHERE ip_whois LIKE '%abuse@%'
SELECT * FROM `fail2notify` WHERE ip_whois LIKE '%abuse%'
SELECT * FROM `fail2notify` WHERE ip_whois LIKE '%AbuseContact%' (504)
SELECT DISTINCT JSON_UNQUOTE( JSON_EXTRACT(ip_whois, '$.RegistryData.AbuseContact.Email') ) AS `abuse` FROM `fail2notify` WHERE ip_whois LIKE '%AbuseContact%' ORDER BY `abuse` DESC
wildblueabuse@viasat.com
whois-contact@lacnic.net
spam@nyp.org
search-apnic-not-arin@apnic.net
noc@psychz.net
intl-abuse@list.alibaba-inc.com
google-cloud-compliance@google.com
awmap@avidbill.com
abusepoc@afrinic.net
abuse@vultr.com
abuse@telus.com
abuse@rr.com
abuse@ripe.net
abuse@pldi.net
abuse@microsoft.com
abuse@interserver.net
abuse@digitalocean.com
abuse@amazonaws.com
abuse@alibaba-inc.com
abuse-mail@verizonbusiness.com
SELECT id,ip,count, JSON_UNQUOTE( JSON_EXTRACT(ip_data, '$.country') ) AS `country`,ip_data,ip_whois, JSON_UNQUOTE( JSON_EXTRACT(ip_whois, '$.RegistryData.AbuseContact.Email') ) AS `abuse` FROM `fail2notify` WHERE ip_whois LIKE '%abuse@ripe.net%'
whois -h whois.ripe.net 151.235.169.142
% This is the RIPE Database query service.
% The objects are in RPSL format.
%
% The RIPE Database is subject to Terms and Conditions.
% See http://www.ripe.net/db/support/db-terms-conditions.pdf
% Note: this output has been filtered.
% To receive output for a database update, use the "-B" flag.
% Information related to '151.235.64.0 - 151.235.255.255'
% Abuse contact for '151.235.64.0 - 151.235.255.255' is 'abuse@tcf.ir'
inetnum: 151.235.64.0 - 151.235.255.255
descr: Telecommunication Company of Tehran
netname: ORG-TCOT2-RIPE
country: IR
admin-c: MS29582-RIPE
tech-c: MS29582-RIPE
admin-c: RK9057-RIPE
tech-c: RK9057-RIPE
status: ASSIGNED PA
mnt-by: MNT-TCF
created: 2016-06-14T07:08:35Z
last-modified: 2016-06-14T07:08:35Z
source: RIPE
person: Mehdi Siahi
address: Ghasrodasht 7183893995 Shiraz IR
phone: +987116112145
nic-hdl: MS29582-RIPE
mnt-by: MNT-TCF
created: 2012-07-25T12:50:36Z
last-modified: 2013-04-09T05:03:23Z
source: RIPE
person: reza khalili
address: telecommunication company of Tehran
phone: +982188294266
nic-hdl: RK9057-RIPE
mnt-by: MNT-TCF
created: 2016-02-06T07:45:46Z
last-modified: 2016-02-06T07:45:46Z
source: RIPE
% Information related to '151.235.128.0/18AS12880'
route: 151.235.128.0/18
descr: TIC
origin: AS12880
mnt-by: AS12880-MNT
created: 2016-02-09T09:05:55Z
last-modified: 2016-02-09T09:05:55Z
source: RIPE
% Information related to '151.235.128.0/18AS59587'
route: 151.235.128.0/18
descr: Telecommunication Company of Tehran
origin: AS59587
mnt-routes: AS12880-MNT
mnt-by: MNT-TCF
created: 2016-02-06T08:05:32Z
last-modified: 2016-02-06T08:05:32Z
source: RIPE
% This query was served by the RIPE Database Query Service version 1.88 (HEREFORD)
SELECT id,ip,count, JSON_UNQUOTE( JSON_EXTRACT(ip_data, '$.country') ) AS `country`,ip_data,ip_whois, JSON_UNQUOTE( JSON_EXTRACT(ip_whois, '$.RegistryData.AbuseContact.Email') ) AS `abuse` FROM `fail2notify` WHERE ip_whois LIKE '%abusepoc@afrinic.net%'
whois -h whois.afrinic.net 41.99.16.23
% This is the AfriNIC Whois server.
% Note: this output has been filtered.
% To receive output for a database update, use the "-B" flag.
% Information related to '41.99.0.0 - 41.99.255.255'
% No abuse contact registered for 41.99.0.0 - 41.99.255.255
inetnum: 41.99.0.0 - 41.99.255.255
netname: Fawri-Oran12
descr: Fawri pour Oran 1 et 2
country: DZ
admin-c: SD6-AFRINIC
tech-c: SD6-AFRINIC
status: ASSIGNED PA
mnt-by: DJAWEB-MNT
source: AFRINIC # Filtered
parent: 41.96.0.0 - 41.111.255.255
person: Security Departement
address: Alger
phone: +21321911224
fax-no: +21321911208
nic-hdl: SD6-AFRINIC
source: AFRINIC # Filtered
% Information related to '41.96.0.0/12AS36947'
route: 41.96.0.0/12
descr: Algerie Telecom
origin: AS36947
mnt-by: DJAWEB-MNT
source: AFRINIC # Filtered
SELECT id,ip,count, JSON_UNQUOTE( JSON_EXTRACT(ip_data, '$.country') ) AS `country`,ip_data,ip_whois, JSON_UNQUOTE( JSON_EXTRACT(ip_whois, '$.RegistryData.AbuseContact.Email') ) AS `abuse` FROM `fail2notify` WHERE ip_whois LIKE '%whois-contact@lacnic.net%'
whois -h whois.lacnic.net 138.185.94.11
% Joint Whois - whois.lacnic.net
% This server accepts single ASN, IPv4 or IPv6 queries
% Brazilian resource: whois.registro.br
% Copyright (c) Nic.br
% The use of the data below is only permitted as described in
% full by the terms of use at https://registro.br/termo/en.html ,
% being prohibited its distribution, commercialization or
% reproduction, in particular, to use it for advertising or
% any similar purpose.
% 2017-04-03 13:41:05 (BRT -03:00)
inetnum: 138.185.92.0/22
aut-num: AS264346
abuse-c: SIL207
owner: SOFTWAY INFORMATICA S/C LTDA
ownerid: 01.283.515/0001-09
responsible: LEON DENIZ BOLOGNESE
owner-c: SIL207
tech-c: SIL207
created: 20150703
changed: 20150703
nic-hdl-br: SIL207
person: Softway Informatica S/C Ltda
created: 20000816
changed: 20150507
% Security and mail abuse issues should also be addressed to
% cert.br, http://www.cert.br/ , respectivelly to cert@cert.br
% and mail-abuse@cert.br
%
% whois.registro.br accepts only direct match queries. Types
% of queries are: domain (.br), registrant (tax ID), ticket,
% provider, contact handle (ID), CIDR block, IP and ASN.
http://wq.apnic.net/whois-search/query?searchtext=116.31.116.5
awk -v HOSTNAME=$(hostname -I) '/Failed/ {x[$(NF-3)]++} END {for (i in x){printf "%d\t%s\t%s\n", x[i], i, HOSTNAME }}' /var/log/secure-20170403 | sort -nr > /home/fail2notify/CronJobs/ips.data && /usr/bin/mysql failed_logins < /home/fail2notify/CronJobs/automate.sql && /usr/bin/php /home/fail2notify/CronJobs/makeData.php
http://www.fail2notify.com/ip/116.31.116.40/
http://www.fail2notify.com/whois/116.31.116.40/
Day 4: 04/04/2017



Day 5: 04/17/2017
Stevens-MacBook-Pro:~ steven$ ssh root@somedomain.com
Last failed login: Mon Apr 17 08:19:47 EDT 2017 from 116.31.116.33 on ssh:notty
There were 246663 failed login attempts since the last successful login.
Last login: Tue Apr 4 10:56:17 2017 from 67-8-248-179.res.bhn.net
awk -v HOSTNAME=$(hostname -I) '/Failed/ {x[$(NF-3)]++} END {for (i in x){printf "%d\t%s\t%s\n", x[i], i, HOSTNAME }}' /var/log/secure-20170409 | sort -nr > /home/fail2notify/CronJobs/ips.data && /usr/bin/mysql failed_logins < /home/fail2notify/CronJobs/automate.sql && /usr/bin/php /home/fail2notify/CronJobs/makeData.php
awk -v HOSTNAME=$(hostname -I) '/Failed/ {x[$(NF-3)]++} END {for (i in x){printf "%d\t%s\t%s\n", x[i], i, HOSTNAME }}' /var/log/secure-20170416 | sort -nr > /home/fail2notify/CronJobs/ips.data && /usr/bin/mysql failed_logins < /home/fail2notify/CronJobs/automate.sql && /usr/bin/php /home/fail2notify/CronJobs/makeData.php
awk -v HOSTNAME=$(hostname -I) '/Failed/ {x[$(NF-3)]++} END {for (i in x){printf "%d\t%s\t%s\n", x[i], i, HOSTNAME }}' /var/log/secure-20170409 | sort -nr > ~/ips.data && /usr/bin/mysql -u logger -p'FSkTX%VClXJD' -h somedomain.com failed_logins < ~/automate.sql
awk -v HOSTNAME=$(hostname -I) '/Failed/ {x[$(NF-3)]++} END {for (i in x){printf "%d\t%s\t%s\n", x[i], i, HOSTNAME }}' /var/log/secure-20170416 | sort -nr > ~/ips.data && /usr/bin/mysql -u logger -p'FSkTX%VClXJD' -h somedomain.com failed_logins < ~/automate.sql
At this point I am pretty confident in my commands. Today they executed without any issues on both platforms. At the end I now have 4,221 IPs at the application level:
The next step is to process the IP JSON data with the getIpData and getIpWhois scripts. Since the 2nd script uses the country from the first, the first one is required to run before the 2nd. These scripts now run properly on their own, so I set the first one running in screen and focus on some other tasks while it finishes.
First I want to add an Extreme Tracker to the html source code. I have no stats and no visibility into whether anyone other than myself is accessing the website. I have used Extreme Tracker on many sites, so this is a quick task. In a few days I will check back and see if anything interesting is happening on the tracker page.
Second I want to start the programming for sending the notifications back to the IP owner. I am going to write the program to initially run against all IPs where country = USA. I know that these are getting correct abuse emails in the IP whois data. I very quickly get a script setup (sendNotifications) to send a test message to myself for the first sample IP. However, before I can start sending to real abuse emails I need to record that the message was sent. I also need to use this data to NOT send the same complaint over and over again. Going further, when a new complaint comes in after the sent date, we would want to be able to resend the complaint.
After adjusting the script's main query with a left join to check for an existing notification (or one with an older count), I add the insert into the notification table and test a few loops sending to myself. As I complete a notification, it is properly excluded from the next execution. I then empty the notification table and prepare my script to run and send emails to the real abuse addresses. I had 2 issues:
- Notification Bounces
said: 554 Sending address not accepted due to spam filter (in reply to MAIL FROM command)
Once I had sent a few real messages I monitored my local mailbox for bounce messages. I am going to need to do some work on the IP to make sure it can deliver messages without bouncing back.
I did a delisting at Barracuda, SORBS, and INPS. It appears that sometime back in 2015 the IP was used to send spam.
I set rDNS at the host for my IP and hostname.
- No Email Address
sendmail: fatal: root(0): No recipient addresses found in message header
I noticed while sending messages that there are some rows of data with the ip_whois field = "null". These are not being ignored by the IS NOT NULL in the original pickup query for sendNotifications, presumably because the field holds the JSON literal null rather than SQL NULL. It is also not possible to query these with a plain " where ip_whois = 'null' " statement.
I will pick back up again once I get further traction in delisting and I am ready to send messages again.
Day 6: 04/18/2017
This morning I start out by fixing data issues. I have some reported_by fields still NULL (automate.sql was not updated on the remote server to pass the new field), and I have some ip_whois fields equal to the string "null" (an issue with the original database structure default). I manually fixed all of these and ran the getIpWhois process again. When I am done I have about 50 IPs without any ip_whois data, and 11 IPs without any ip_data. It appears that the lookup url does not respond with any data:
http://adam.kahtava.com/services/whois.json?query=73.221.181.176 (no json output)
With the data ready, I start sendNotifications for the United States again and monitor the local mailbox for bounces. Out of 93 sent notifications there were fewer than 10 bounces. Some non-existent emails and still some spam listing issues:
<whois-contact@lacnic.net>: host MAIL.lacnic.net[200.3.14.11] said: 550 5.1.1 <whois-contact@lacnic.net>: Recipient address rejected: User unknown in local recipient table (in reply to RCPT TO command)

<navhaji@uscolo.com>: host a.mx.uscolo.com[204.9.200.40] said: 550 5.1.1 <navhaji@uscolo.com>: Recipient address rejected: User unknown in virtual mailbox table (in reply to RCPT TO command)

<keith.beal@wcenet.net>: Host or domain name not found. Name service error for name=wcenet.net type=AAAA: Host not found

<abuse@att.net>: host ff-ip4-mx-vip1.prodigy.net[144.160.159.21] said: 553 5.3.0 flpd567 DNSBL:ATTRBL 521< IP >_is_blocked.For assistance forward this email to abuse_rbl@abuse-att.net (in reply to MAIL FROM command)
Next I adjust sendNotifications to send everything but China. China will need to parse the ip_whois JSON structure a different way to get the abuse email address. The first two results are:
1 - 103.41.46.145 - search-apnic-not-arin@apnic.net
2 - 103.58.145.47 - search-apnic-not-arin@apnic.net
These are countries other than China whose whois reports the same server. After reviewing the data I adjust sendNotifications to run only if the email is not one of search-apnic-not-arin@apnic.net, whois-contact@lacnic.net, or abuse@ripe.net. I will handle those later. After 114 notifications I have just 3 bounces. This puts the total notifications up to 289. That leaves a giant number (3,000-4,000) in the realm of apnic, lacnic, and ripe:
756 APNIC (other than China)
802 LACNIC
837 RIPE
Next I adjust sendNotifications for China by adding the following code to parse the JSON and find the China whois abuse-mailbox:
foreach ($ip_whois as $array) {
    if (isset($array->attributes)) {
        foreach ($array->attributes as $attributes) {
            if ($attributes->name == 'abuse-mailbox') {
                $china_name  = $attributes->name;
                $china_email = $attributes->values[0];
            }
        }
    }
}
This worked quite well, and I again monitor the local mailbox for bounces. There were 100+ bounce errors, mostly 550s and 554s, but over 1000 notification messages were sent.
<awmap@avidbill.com>: host mx1.zmailcloud.com[104.154.38.107] said: 552 [6BDE0145-EDA8-4750-83AE-4EB188278B07.1] no MX record for domain fail2notify.com (in reply to end of DATA command)

<abuse@chinamobile.com>: host mx.chinamobile.com[221.176.66.77] said: 550 2ef058f61fb4d18-10ee1 Mail rejected (in reply to end of DATA command)

<antispam_gdnoc@189.cn>: host mta-189.21cn.com[183.61.185.69] said: 554 IP in blacklist. (in reply to MAIL FROM command)
Day 7: 04/19/2017
I start today off by moving fail2notify.com from its current IP (spam listed from a previous owner in 2015) to a clean IP address on its own cloud server. The cloud host I was using, hostkey.com, has had another IP of mine offline for over 7 days with very spotty support regarding the issue. I will likely be moving all my assets off hostkey.com as a result. I create an Amsterdam based digital ocean node with centos7 and get php56 and mysql57 ready with just these commands:
yum update
yum install nano screen mailx
wget https://dev.mysql.com/get/mysql57-community-release-el7-9.noarch.rpm
sudo rpm -ivh mysql57-community-release-el7-9.noarch.rpm
sudo yum install mysql-server mysql
chkconfig mysqld on
service mysqld start
grep 'temporary password' /var/log/mysqld.log
mysql_secure_installation
mysql
create database failed_logins;
CREATE USER 'fail2notify'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON failed_logins.* TO 'fail2notify'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;
sudo yum install epel-release
wget http://rpms.famillecollet.com/enterprise/remi-release-7.rpm
rpm -Uvh remi-release-7*.rpm
cd /etc/yum.repos.d
nano remi.repo
sudo yum install php php-gd php-mysql php-mcrypt
yum install httpd
service httpd start
chkconfig httpd on
cd /etc/httpd/conf.d
nano fail2notify.conf
service httpd restart
cd /home/fail2notify/
mysql failed_logins < failed_logins.sql
hostname fail2notify.com
exit
This now gives me 3 datacenters around the world reporting failed logins. I am certain that by Sunday this new IP will have quite a few failed logins as well.
Now that I do not have to deal with my notifications bouncing, I want to focus on the results of sending the notifications so far. Yesterday a total of 1526 notifications were sent. Out of that, about 10% actually bounced, which I see as pretty good. What I also noticed is that some emails get A LOT of notifications, from different IPs. If fail2notify sends an email for every IP, they may see this as SPAMMING. Fail2notify will need to deliver a single message with all of their IPs, each IP's counts, and timestamps.
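One way to sketch that grouping, assuming the pickup query can return rows of (abuse_email, ip, count): build one digest body per address instead of one email per IP. The addresses and numbers here are made up:

```shell
# Sample rows as the pickup query might return them: abuse_email, ip, count.
rows='abuse@example.net 1.2.3.4 120
abuse@example.net 5.6.7.8 77
abuse@other.org 9.9.9.9 4'

# Group the IP lines per abuse address so each contact gets ONE message.
digest=$(printf '%s\n' "$rows" | awk '
  { body[$1] = body[$1] "  " $2 " (" $3 " attempts)\n" }
  END { for (e in body) printf "To: %s\n%s", e, body[e] }')

echo "$digest"
```

Each "To:" block would then become a single mail call, which keeps the per-recipient volume down to one message per run.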
Picking up from yesterday with APNIC, LACNIC, and RIPE, I need to adjust getIpWhois the same way I did for Country = China. For RIPE, the countries are:
Showing rows 0 - 24 (52 total, Query took 0.1777 seconds.)
SELECT DISTINCT JSON_UNQUOTE( JSON_EXTRACT(ip_data, '$.country') ) AS `country` FROM `fail2notify` WHERE ip_whois LIKE '%abuse@ripe.net%'
country: Japan, Iran, Russia, Switzerland, Sweden, Slovenia, Ukraine, Latvia, Belarus, France, Netherlands, Germany, United Kingdom, Hungary, Iraq, Bulgaria, Czechia, Belgium, Italy, Turkey, Czech Republic, Saudi Arabia, Poland, Serbia, Denmark
I then go to whois.ripe.net, search the IP 139.162.122.110, and find their RESTful JSON API:
http://rest.db.ripe.net/search.json?query-string=139.162.122.110&flags=no-filtering
The only updates I need in my getIpWhois script are creating the country settings for RIPE and APNIC, then comparing those during processing to get the correct whois URL and JSON data into MySQL. In validating the data below, it appears that some IPs with certain countries (US, Russia, France, Netherlands) have results across multiple whois sources; those need to be investigated.
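Whatever getIpWhois itself looks like, the branching amounts to matching the stored whois text against the RIR contact strings seen in the data; a minimal shell sketch of that dispatch (the function name is mine, not from the project):

```shell
# Decide which RIR a stored whois blob belongs to, keyed off the abuse
# contact strings already present in the fail2notify data.
rir_for_whois() {
  case "$1" in
    *abuse@ripe.net*)                  echo ripe   ;;
    *search-apnic-not-arin@apnic.net*) echo apnic  ;;
    *whois-contact@lacnic.net*)        echo lacnic ;;
    *)                                 echo arin   ;;  # default: the already-handled case
  esac
}

rir_for_whois "remarks: please send abuse reports to abuse@ripe.net"
```

Each branch can then pick the right whois URL (e.g. the RIPE REST endpoint found above) before fetching and storing the JSON.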
Getting the list of APNIC countries:
Showing rows 0 - 21 (22 total, Query took 1.0253 seconds.)
SELECT DISTINCT JSON_UNQUOTE( JSON_EXTRACT(ip_data, '$.country') ) AS `country` FROM `fail2notify` WHERE ip_whois LIKE '%search-apnic-not-arin@apnic.net%'
country: India, Nepal, Australia, Republic of Korea, Philippines, Taiwan, Japan, Indonesia, Malaysia, Hong Kong, Vietnam, China, Singapore, Netherlands, France, Thailand, Pakistan, Maldives, Bangladesh, Sri Lanka, Laos, Cook Islands
Getting the list of LACNIC countries:
Showing rows 0 - 16 (17 total, Query took 0.7223 seconds.)
SELECT DISTINCT JSON_UNQUOTE( JSON_EXTRACT(ip_data, '$.country') ) AS `country` FROM `fail2notify` WHERE ip_whois LIKE '%whois-contact@lacnic.net'
country: Brazil, Colombia, Argentina, Honduras, Ecuador, Panama, Mexico, Peru, Bolivia, United States, Venezuela, Switzerland, Chile, Russia, Uruguay, Paraguay, El Salvador
I then go to whois.lacnic.net and search a few IPs. Some have emails, some do not. There is no API URL or JSON, so I will need to fetch the page, scrape the data out of the <PRE> tag, and search the text for emails.
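Since LACNIC returns plain text only, the email scraping can be little more than a grep over the fetched response; a sketch (the function name and sample snippet are illustrative, not the actual getIpWhois code):

```shell
# Pull anything email-shaped out of raw whois text and de-duplicate it.
extract_emails() {
  grep -Eo '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' | sort -u
}

# Demo on a whois-shaped snippet; a real run would pipe in the scraped
# <PRE> contents or e.g. `whois -h whois.lacnic.net <ip>` output:
printf 'inetnum: 200.0.0.0/16\ne-mail: abuse@example.net.br\ne-mail: noc@example.net.br\n' | extract_emails
```

IPs whose whois text yields no match would then fall into the "no abuse contact" bucket rather than getting a notification.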
Moving forward with APNIC and RIPE, I want to re-process these rows. I write a query to NULL the ip_whois column when the current whois contains abuse@ripe.net or search-apnic-not-arin@apnic.net:
1593 rows affected. (Query took 1.5779 seconds.)
UPDATE `fail2notify` SET ip_whois = NULL WHERE ip_whois LIKE '%abuse@ripe.net%' OR ip_whois LIKE '%search-apnic-not-arin@apnic.net%'
Day 8: 04/24/2017
This morning is a log processing morning. I need to process logs on the original server, the second server, and now the new server. I had to create a new remote-host MySQL user:
CREATE USER 'logger'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON failed_logins.* TO 'logger'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
Server #1:
awk -v HOSTNAME=$(hostname -I) '/Failed/ {x[$(NF-3)]++} END {for (i in x){printf "%d\t%s\t%s\n", x[i], i, HOSTNAME }}' /var/log/secure-20170424 | sort -nr > ~/ips.data && /usr/bin/mysql -u logger -p'password' -h [IP ADDRESS] failed_logins < ~/automate.sql
Server #2:
awk -v HOSTNAME=$(hostname -I) '/Failed/ {x[$(NF-3)]++} END {for (i in x){printf "%d\t%s\t%s\n", x[i], i, HOSTNAME }}' /var/log/secure-20170423 | sort -nr > ~/ips.data && /usr/bin/mysql -u logger -p'password' -h [IP ADDRESS] failed_logins < ~/automate.sql
Server #3:
awk -v HOSTNAME=$(hostname -I) '/Failed/ {x[$(NF-3)]++} END {for (i in x){printf "%d\t%s\t%s\n", x[i], i, HOSTNAME }}' /var/log/secure-20170423 | sort -nr > ~/ips.data && /usr/bin/mysql -u logger -p'password' -h [IP ADDRESS] failed_logins < ~/automate.sql
On #3, I could not get the command to work at all:
awk: fatal: cannot open file `/Failed/ {x[$(NF-3)]++} END {for (i in x){printf "%d\t%s\t%s\n", x[i], i, HOSTNAME }}' for reading (No such file or directory)
It turns out there were no matches for "Failed"; in this log the abuse entries mostly look like:
Invalid user tcpdump from 123.57.84.196
I then adjusted my command to work off that "Invalid user" string:
awk -v HOSTNAME=$(hostname -I) '/Invalid/ {x[$(NF-0)]++} END {for (i in x){printf "%d\t%s\t%s\n", x[i], i, HOSTNAME}}' /var/log/secure-20170423 | sort -nr > ips.data
then get a final command to operate as:
awk -v HOSTNAME=$(hostname -I) '/Invalid/ {x[$(NF-0)]++} END {for (i in x){printf "%d\t%s\t%s\n", x[i], i, HOSTNAME}}' /var/log/secure-20170423 | sort -nr > ips.data && /usr/bin/mysql -u logger -p'password' -h [IP ADDRESS] failed_logins < ~/automate.sql
I then run my getIpData and getIpWhois processes. After processing: 762 new IPs.
At this point I believe the main commands should match both failed logins and invalid users. This will likely increase the counts and the number of IPs greatly per log cycle. At the start of this task the focus was on failing user logins, but any abusive action from an IP can count toward reporting. With "Invalid user" entries, an IP trying many different usernames never actually reaches a failed password, so it would not appear in the "Failed" matches at all. These are likely very low-level login bots, generating more requests against public IP addresses than root or existing-user login attempts do.
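A combined pass could look like the sketch below; note the IP sits in a different field for each line type, so each pattern needs its own offset (sample log lines, not the final production command):

```shell
# Count both "Failed password" and "Invalid user" entries per source IP
# in a single pass. The lowercase "invalid user" that appears inside some
# Failed-password lines does not match the case-sensitive /Invalid user/
# pattern, so those lines are not double counted.
count_bad_ips() {
  awk '/Failed password/ {x[$(NF-3)]++}
       /Invalid user/    {x[$NF]++}
       END {for (i in x) printf "%d\t%s\n", x[i], i}' "$@" | sort -nr
}

# Demo on lines shaped like the entries quoted above:
count_bad_ips <<'EOF'
Apr 23 03:14:11 host sshd[111]: Failed password for root from 58.218.198.149 port 22 ssh2
Apr 23 03:14:12 host sshd[112]: Invalid user tcpdump from 123.57.84.196
Apr 23 03:14:13 host sshd[113]: Failed password for root from 58.218.198.149 port 22 ssh2
EOF
```

With no arguments the function reads stdin; pass a log filename (e.g. `/var/log/secure-20170423`) to run it against a real rotate file.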
Day 9: 04/25/2017
Today I want to spend some time back on the sendNotification process. This main process needs to run for every distinct email whose IPs qualify for notification, and each notification should summarize all of that contact's IP/count pairs in a single message. To do this I build a distinct array of emails while querying for the IP data array. With both arrays in hand, I loop through the emails, and for each email I collect its matching IPs. Inside that loop the message is built and sent, then the IPs are looped through again, inserting each one into `failed_logins`.`notifications`.
After execution:
133 notifications sent (today)
Total IPs: 4,983
Total Sent Notifications: 2,772
Looking at the delivery counts, some of the abuse emails had 50+ IPs in the message body. This round of sending went very well compared to the last tests. The new IP not being spam-listed is a better start, but not delivering 50+ separate messages at the same time is the bigger improvement.
At this point in the fail2notify project I have a fairly consistent system to generate data, process data, show data in the application, and send notifications. The next parts of this project will be making the shell commands run automatically, creating a more public method to send/generate data (an API) from sources without MySQL credentials, using the current data to start blocking IP ranges, and working out how to bundle it all for distribution.
To quickly start blocking IP addresses, I build a text file of IPs:
SELECT ip FROM fail2notify INTO OUTFILE '/var/lib/mysql-files/blocked.ips'
Then the following shell script:
#!/bin/bash
# Simple iptables IP/subnet block script
# -------------------------------------------------------------------------
# Copyright (c) 2004 nixCraft project <http://www.cyberciti.biz/fb/>
# This script is licensed under GNU GPL version 2.0 or above
# -------------------------------------------------------------------------
# This script is part of nixCraft shell script collection (NSSC)
# Visit http://bash.cyberciti.biz/ for more information.
# ----------------------------------------------------------------------
IPT=/sbin/iptables
SPAMLIST="spamlist"
SPAMDROPMSG="FAIL2NOTIFY DROP"
BADIPS=$(egrep -v -E "^#|^$" /var/lib/mysql-files/blocked.ips)

# create a new iptables list
$IPT -N $SPAMLIST

for ipblock in $BADIPS
do
   $IPT -A $SPAMLIST -s $ipblock -j LOG --log-prefix "$SPAMDROPMSG"
   $IPT -A $SPAMLIST -s $ipblock -j DROP
done

$IPT -I INPUT -j $SPAMLIST
$IPT -I OUTPUT -j $SPAMLIST
$IPT -I FORWARD -j $SPAMLIST
Executing the script takes a good bit of time, but when I am done I run this command to list ips blocked:
iptables -L -n --line-numbers
I then scroll through the list and find an IP: 60.185.138.168. I go to fail2notify.com and search the IP:
http://fail2notify.com/ip/60.185.138.168/
Voila. I then quickly log in to the first server (over 10,000 failed attempts since last login), fetch a copy of the block list, create the shell script, and execute it. Tomorrow I expect the number of failed login attempts to drop significantly.
Day 10: 05/01/2017
Today is a log processing day, so the first thing I do is log in to the 3 servers. On server #1 I can finally see some light at the end of the tunnel: only about 2,500 failed logins since 4/27/2017:
Stevens-MacBook-Pro:~ steven$ ssh root@mboxmp3.com Last failed login: Mon May 1 08:05:44 EDT 2017 from 58.218.198.149 on ssh:notty There were 2505 failed login attempts since the last successful login. Last login: Thu Apr 27 08:50:54 2017 from 67-8-248-179.res.bhn.net
Next I get the logrotate dates for all 3 servers:
Server #1: secure-20170430
Server #2: secure-20170501
Server #3: secure-20170430
Then I run my commands using correct dates and real password:
Server #1:
awk -v HOSTNAME=$(hostname -i) '/Failed/ {x[$(NF-3)]++} END {for (i in x){printf "%d\t%s\t%s\n", x[i], i, HOSTNAME }}' /var/log/secure-20170430 | sort -nr > ~/ips.data && /usr/bin/mysql -u logger -p'password' -h fail2notify.com failed_logins < ~/automate.sql
Server #2:
awk -v HOSTNAME=$(hostname -i) '/Failed/ {x[$(NF-3)]++} END {for (i in x){printf "%d\t%s\t%s\n", x[i], i, HOSTNAME }}' /var/log/secure-20170501 | sort -nr > ~/ips.data && /usr/bin/mysql -u logger -p'password' -h fail2notify.com failed_logins < ~/automate.sql
Server #3:
awk -v HOSTNAME=$(hostname -i) '/Invalid user/ {x[$(NF-0)]++} END {for (i in x){printf "%d\t%s\t%s\n", x[i], i, HOSTNAME}}' /var/log/secure-20170430 | sort -nr > ~/ips.data && /usr/bin/mysql -u logger -p'password' -h fail2notify.com failed_logins < ~/automate.sql
On the 3rd server I again ran into issues with the command. The culprit turned out to be hostname -I (which returns all IP addresses - 2 in this case); I had to use hostname -i (one IP address) instead. After running in the new automate data, I run the makeData, getIpData (780 total), getIpWhois (829 total), and sendNotifications (60 notifications sent, covering a total of 1,065 IPs) processes.
Next I added an About link and modal to the footer.
We now have over 5,700 IPs reported.
Next up, I finish the commands to create the new block file:
/usr/bin/rm -rf /var/lib/mysql-files/blocked.ips && /usr/bin/mysql -u user -p failed_logins < /home/fail2notify/CronJobs/makeBlockedIps.sql && /usr/bin/cp /var/lib/mysql-files/blocked.ips /home/fail2notify/public_html/
Day 11: 05/22/2017
Today I ran the commands to import the last 3 weeks of data. I then executed all 3 cron processes and updated the website. Everything worked very smoothly and I now have a total of ~6,800 blocked IPs.
Day 12: 09/15/2017
Today I ran the commands to import data from 4 different servers. It was very easy to run commands on 2 new servers that I never had processed before. At the end of the sitting: 10,024 entries.
Things I still need to do:
- Run the commands on each server for both "Failed" and "Invalid" matches
- Write an SQL backup script for weekly backups (before automation)
- Find a way to add NEW IP addresses to iptables without re-blocking the IPs already blocked by previous executions
- Find a way to execute the log-data automation after logrotate runs, plus a method to determine the correct filename to process
- Automate execution of makeData, getIpData, and getIpWhois
- Set up a GitHub repository
- Work on a way to block larger IP ranges
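For the "only block NEW IPs" item above, one low-tech approach is to keep a record of IPs already fed to iptables and diff the fresh export against it with comm (the filenames and sample data below are illustrative):

```shell
# Work in a temp dir for the demo; in practice these files would live
# alongside /var/lib/mysql-files/blocked.ips.
workdir=$(mktemp -d)
printf '1.2.3.4\n5.6.7.8\n' > "$workdir/applied.ips"           # already in iptables
printf '1.2.3.4\n5.6.7.8\n9.9.9.9\n' > "$workdir/blocked.ips"  # fresh full export

# comm requires sorted input.
sort "$workdir/blocked.ips" > "$workdir/all.sorted"
sort "$workdir/applied.ips" > "$workdir/applied.sorted"
# comm -23 keeps lines only in the first file: the IPs not yet blocked.
comm -23 "$workdir/all.sorted" "$workdir/applied.sorted" > "$workdir/to_block.ips"

cat "$workdir/to_block.ips"   # -> 9.9.9.9
# After running the iptables loop over to_block.ips, record them as applied:
cat "$workdir/to_block.ips" >> "$workdir/applied.ips"
```

Each run then only appends rules for genuinely new addresses instead of re-walking the entire block list.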