[arch-general] HTTP spam from China
I'm getting a lot of connections from China, it seems. Whenever I check my journalctl, it's an endless wall of nginx complaints about a single IP spamming requests for different PHP files. This happens with hundreds of IPs, tens of times daily. Has anyone else been hit by this? I already made a shell script to block all connections from China, but I'm curious as to why this happens, and if anyone else has had the same problem.
On 26.02.19 13:40, Juha Kankare via arch-general wrote:
I'm getting a lot of connections from China, it seems. Whenever I check my journalctl, it's an endless wall of nginx complaints about a single IP spamming requests for different PHP files. This happens with hundreds of IPs, tens of times daily. Has anyone else been hit by this? I already made a shell script to block all connections from China, but I'm curious as to why this happens, and if anyone else has had the same problem.
Did you take a look at fail2ban? https://wiki.archlinux.org/index.php/Fail2ban
Kind Regards,
Bjoern
On 26/02/2019 14:55, Bjoern Franke via arch-general wrote:
On 26.02.19 13:40, Juha Kankare via arch-general wrote:
I'm getting a lot of connections from China, it seems. Whenever I check my journalctl, it's an endless wall of nginx complaints about a single IP spamming requests for different PHP files. This happens with hundreds of IPs, tens of times daily. Has anyone else been hit by this? I already made a shell script to block all connections from China, but I'm curious as to why this happens, and if anyone else has had the same problem.
Did you take a look at fail2ban?
https://wiki.archlinux.org/index.php/Fail2ban
Kind Regards,
Bjoern
Ooh, I'm going to have to take a look at this. I'll still keep China blocked since it's a personal file drop and I don't want my bandwidth eaten up by malicious connections, but this seems really useful. From a quick Google search this seems to be a fix for the vulnerability scans, but just in case they find a vulnerable file on the first try, I'll keep China blocked. There's really no use for me to unblock it, since I doubt I'll be going to China to try and use my file drop.
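For anyone wanting a starting point, a minimal jail for this kind of PHP/bot scanning could look roughly like the following in /etc/fail2ban/jail.local (this is a sketch, not a tested config: the stock nginx-botsearch filter, the log path and the ban times are assumptions to adjust for your own setup):

[nginx-botsearch]
enabled  = true
filter   = nginx-botsearch
port     = http,https
logpath  = /var/log/nginx/access.log
maxretry = 2
bantime  = 86400

After editing, restart the service with `systemctl restart fail2ban` and check the result with `fail2ban-client status nginx-botsearch`.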
On Tue, 26 Feb 2019 12:40:17 +0000, Juha Kankare via arch-general wrote:
I already made a shellscript to block all connections from China, but I'm curious as to why this happens, and if anyone else has had the same problem.
A few years ago I experienced such an issue with mails from another nation. I received a few hundred spam mails within a second, several times a day, for a month or more, I don't remember exactly. I guess the reason this happens is simply bad luck. I still have a special folder for mails from that nation, but I actually never was the victim of such a spam bombing again. I blocked almost all mails from that nation too, and just allowed some mails to move to a special folder meant for probably-wanted mails from that nation.
On 2019/2/26 下午8:40, Juha Kankare via arch-general wrote:
I'm curious as to why this happens, and if anyone else has had the same problem.
Because your IP might have joined the Chinese firewall poison party: https://news.ycombinator.com/item?id=8931827 -- Regards, Felix Yan
On 26/02/2019 15:10, Felix Yan via arch-general wrote:
On 2019/2/26 下午8:40, Juha Kankare via arch-general wrote:
I'm curious as to why this happens, and if anyone else has had the same problem.
Because your IP might have joined the Chinese firewall poison party: https://news.ycombinator.com/item?id=8931827
Why point this at me? Seems pretty stupid. Anyway, I'm currently dropping all the connections, and the errors have stopped, so I'm fine now.
On 26/02/2019 14:40, Juha Kankare via arch-general wrote:
I'm getting a lot of connections from China, it seems. Whenever I check my journalctl, it's an endless wall of nginx complaints about a single IP spamming requests for different PHP files. This happens with hundreds of IPs, tens of times daily. Has anyone else been hit by this? I already made a shell script to block all connections from China, but I'm curious as to why this happens, and if anyone else has had the same problem.
Anyway, I fixed my problem via a combination of fail2ban and this script: https://bbs.archlinux.org/viewtopic.php?id=244527, so it isn't a problem anymore.
I can confirm that my logs show the same thing. It's been happening for a short while now. Thank you very much for sharing your script. I have just been letting fail2ban deal with it, but this is a better remedy, I think. Thank you again!
-E
Warmest Regards,
Eric
On 2/26/19 1:40 PM, Juha Kankare via arch-general wrote:
I'm getting a lot of connections from China, it seems. Whenever I check my journalctl, it's an endless wall of nginx complaints about a single IP spamming requests for different PHP files. This happens with hundreds of IPs, tens of times daily. Has anyone else been hit by this? I already made a shell script to block all connections from China, but I'm curious as to why this happens, and if anyone else has had the same problem.
I see this happen on my SSH server. The journal is full of these failed login attempts. I haven't checked where those login attempts come from, though. It makes it hard to find something in the journal.
Regards, Harm-Jan Zwinderman
On Tue, Feb 26, 2019 at 4:02 PM Zorro via arch-general <arch-general@archlinux.org> wrote:
I see this happen on my SSH server.
The journal is full of these failed login attempts. I haven't checked where those login attempts come from, though.
It makes it hard to find something in the journal.
It's why I keep my SSH servers on a non-standard port. I know it doesn't prevent someone from discovering it, but it cuts out 99.99% of those attacks, since I can filter out connection attempts to port 22.
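For anyone wanting to do the same, it's a one-line change in /etc/ssh/sshd_config (the port number here is only an example; pick your own and open it in your firewall):

Port 2222

then restart the daemon with `systemctl restart sshd` and remember to connect with `ssh -p 2222 host` afterwards.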
On 26/02/2019 18:05, Andy Pieters wrote:
On Tue, Feb 26, 2019 at 4:02 PM Zorro via arch-general <arch-general@archlinux.org> wrote:
I see this happen on my SSH server.
The journal is full of these failed login attempts. I haven't checked where those login attempts come from, though.
It makes it hard to find something in the journal.
It's why I keep my SSH servers on a non-standard port. I know it doesn't prevent someone from discovering it, but it cuts out 99.99% of those attacks, since I can filter out connection attempts to port 22.
Same. For easy ports to remember, I like to combine powers of two (e.g. 25664, from 256 and 64, or 25632). Easy to remember and non-standard.
-- Juha Kankare
Hi Juha,
It's why I keep my SSH servers on a non-standard port. I know it doesn't prevent someone from discovering it, but it cuts out 99.99% of those attacks, since I can filter out connection attempts to port 22.
Same. For easy ports to remember, I like to combine powers of two (e.g. 25664, from 256 and 64, or 25632). Easy to remember and non-standard.
I go for $RANDOM that's five digits and a valid port number. To avoid remembering it, I add it to ~/.ssh/config, e.g. `ssh foo' with

Host foo
    Hostname foo.bar.xyzzy.com
    Port 16747

--
Cheers, Ralph.
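If anyone wants to reproduce that pick, note that bash's $RANDOM only goes up to 32767, so a sketch of the idea (the range is my own choice) is:

port=$(( (RANDOM % 22768) + 10000 ))   # 10000-32767: always five digits, always a valid port
echo "$port"

and then paste the result into sshd_config and ~/.ssh/config.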
On 26/02/2019 18:40, Ralph Corderoy wrote:
Hi Juha,
It's why I keep my SSH servers on a non-standard port. I know it doesn't prevent someone from discovering it, but it cuts out 99.99% of those attacks, since I can filter out connection attempts to port 22.
Same. For easy ports to remember, I like to combine powers of two (e.g. 25664, from 256 and 64, or 25632). Easy to remember and non-standard.
I go for $RANDOM that's five digits and a valid port number. To avoid remembering it, I add it to ~/.ssh/config, e.g. `ssh foo' with
Host foo
    Hostname foo.bar.xyzzy.com
    Port 16747
Well, I have to access my server from multiple computers, including some where I can't use personal accounts and/or don't feel safe saving my server's location and port on. I do have ~/.ssh/config set up on all my personal devices.
-- Juha Kankare
@Juha why not feel comfortable sharing the location of the server?
On Tue, Feb 26, 2019 at 9:44 AM Juha Kankare via arch-general <arch-general@archlinux.org> wrote:
On 26/02/2019 18:40, Ralph Corderoy wrote:
Hi Juha,
It's why I keep my SSH servers on a non-standard port. I know it doesn't prevent someone from discovering it, but it cuts out 99.99% of those attacks, since I can filter out connection attempts to port 22.
Same. For easy ports to remember, I like to combine powers of two (e.g. 25664, from 256 and 64, or 25632). Easy to remember and non-standard.
I go for $RANDOM that's five digits and a valid port number. To avoid remembering it, I add it to ~/.ssh/config, e.g. `ssh foo' with
Host foo
    Hostname foo.bar.xyzzy.com
    Port 16747
Well, I have to access my server from multiple computers, including some where I can't use personal accounts and/or don't feel safe saving my server's location and port on. I do have ~/.ssh/config set up on all my personal devices.
-- Juha Kankare
On 2/26/19 5:05 PM, Andy Pieters wrote:
On Tue, Feb 26, 2019 at 4:02 PM Zorro via arch-general <arch-general@archlinux.org> wrote:
I see this happen on my SSH server.
The journal is full of these failed login attempts. I haven't checked where those login attempts come from, though.
It makes it hard to find something in the journal.
It's why I keep my SSH servers on a non-standard port. I know it doesn't prevent someone from discovering it, but it cuts out 99.99% of those attacks, since I can filter out connection attempts to port 22.
I have already done that, but the one(s) behind all these break-in attempts did discover the port.
Regards, Harm-Jan Zwinderman
On 26/02/2019 18:02, Zorro via arch-general wrote:
On 2/26/19 1:40 PM, Juha Kankare via arch-general wrote:
I'm getting a lot of connections from China, it seems. Whenever I check my journalctl, it's an endless wall of nginx complaints about a single IP spamming requests for different PHP files. This happens with hundreds of IPs, tens of times daily. Has anyone else been hit by this? I already made a shell script to block all connections from China, but I'm curious as to why this happens, and if anyone else has had the same problem.
I see this happen on my SSH server.
The journal is full of these failed login attempts. I haven't checked where those login attempts come from, though.
It makes it hard to find something in the journal.
Regards, Harm-Jan Zwinderman
If you want a fix, check out https://bbs.archlinux.org/viewtopic.php?pid=1833825#p1833825 and/or fail2ban -- Juha Kankare
Just an FYI: if you pull CIDR blocks by country, either doing it yourself directly from ARIN et al. or by using someone else's list like ipdeny.com, the CIDR blocks are not necessarily compacted, i.e. it is often not the most minimal CIDR representation. I use this little python script, which works on a list of IPV4 or IPV6 CIDR blocks, to compact the list. I feed the output compacted CIDR blocks to the firewall ipset script.

In case anyone finds this useful, here is my CidrMerge.py:

----- cut here -----
#!/usr/bin/python
#
# Read from stdin a list of cidr blocks and compacts them if possible.
# Resulting compacted CIDR blocks are written to stdout.
# Works on any file with IPV4 or IPV6 cidr blocks.
#
# Usage : CidrMerge.py < file
#
# Gene C.
#
# 20180503
#
import sys
import netaddr

def main():
    num_args = len(sys.argv)
    #
    # Open file - read one line at a time and output
    #
    lines = sys.stdin.readlines()
    if len(lines) == 1:
        lines = lines[0].split()
    #
    # create merged set of entire input lines
    #
    set1 = netaddr.IPSet(lines)
    #
    # Write them out
    #
    for cidr in set1.iter_cidrs():
        print(cidr)
    return

# -----------------------------------------------------
if __name__ == '__main__':
    main()
#
# -------------------- All Done ------------------------
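A hypothetical run against a country zone file would look something like this (the ipdeny URL is from memory, adjust as needed):

wget https://www.ipdeny.com/ipblocks/data/countries/cn.zone
python CidrMerge.py < cn.zone > cn-merged.zone

The script only cares about getting CIDR blocks on stdin, either one per line or whitespace-separated on a single line.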
On 26/02/2019 20:11, Genes Lists via arch-general wrote:
Just an FYI: if you pull CIDR blocks by country, either doing it yourself directly from ARIN et al. or by using someone else's list like ipdeny.com, the CIDR blocks are not necessarily compacted,
i.e. it is often not the most minimal CIDR representation. I use this little python script, which works on a list of IPV4 or IPV6 CIDR blocks, to compact the list. I feed the output compacted CIDR blocks to the firewall ipset script.
In case anyone finds this useful, here is my CidrMerge.py:
...
My current script is just pulling cn.zone from ipdeny.com. This looks super useful, I'm saving it. Thank you dude! -- Regards, Juha Kankare
On 2/26/19 1:13 PM, Juha Kankare via arch-general wrote:
On 26/02/2019 20:11, Genes Lists via arch-general wrote: ...
My current script is just pulling cn.zone from ipdeny.com. This looks super useful, I'm saving it. Thank you dude!
You're welcome. I just ran it on cn.zone and it reduces the number of lines from 8,337 to 5,120. It can make a significant difference. best, gene
On 2/26/19 1:20 PM, Genes Lists via arch-general wrote:
On 2/26/19 1:13 PM, Juha Kankare via arch-general wrote:
On 26/02/2019 20:11, Genes Lists via arch-general wrote: ...
My current script is just pulling cn.zone from ipdeny.com. This looks super useful, I'm saving it. Thank you dude!
You're welcome.
I just ran it on cn.zone and it reduces the number of lines from 8,337 to 5,120. It can make a significant difference.
best,
gene
Just to +1 what Gene has said, I've taken similar approaches to compacting into CIDRs and it really does make a significant difference.

For clarification on his ipset[0] point, I also have to strongly recommend it. It not only *greatly* simplifies your ruleset, but it can be dynamically altered without needing to reload your firewall rules.

e.g. assuming you have an IP set named "china_ips",

-A INPUT -m set --match-set china_ips src -p tcp -m tcp --dport 80 -j DROP

will drop traffic for all those entries. You've then simplified many (MANY) rules to one. :)

You can (Gene, you may find this particularly useful since you feed to ipset) use the pyroute2.IPSet() function to actually manage the live kernel's ipsets as well. Make sure your running kernel and latest installed kernel match, otherwise you'll need to reboot so the ipset kernel module can be loaded.

Untested, but should be pretty darn close if not functional:

#####
import subprocess
import pyroute2

# (...)
ipset = pyroute2.IPSet()
setsfile = '/etc/ipset.conf'
setname = 'china_ips'
tmpset = '{0}_TMP'.format(setname)

set_exists = False
try:
    # Check to see if the list exists.
    ipset.headers(setname)
    # list is done here as a quick-and-dirty sanity/exception check,
    # which is why it's in both the try and exception.
    setlist = ipset.list(name = setname)
    set_exists = True
except pyroute2.ipset._IPSetError:
    ipset.create(setname, stype = 'hash:net')
    setlist = ipset.list(name = setname)

# We use a temporary set so we don't affect any current iptables
# processing. Most likely unnecessary, but better safe than sorry.
try:
    ipset.destroy(name = tmpset)
except pyroute2.ipset._IPSetError:
    # It doesn't exist (yet), which is what we want.
    pass

# Create the temporary set
ipset.create(tmpset, stype = 'hash:net')
for n in set1.iter_cidrs():  # "set1" is from Gene's script
    ipset.add(tmpset, n)

# Make the temporary set live
ipset.swap(setname, tmpset)

# And cleanup the now-unnecessary tmpset
ipset.destroy(name = tmpset)

# Save them to the persistent file so it's applied on a reboot.
# Remember to "systemctl enable ipset.service".
# Unfortunately, there isn't a built-in save function.
# You could easily write your own iterator/generator, though,
# if you want to avoid a subprocess call.
# The syntax is pretty simple.
with open(setsfile, 'w') as f:
    ipset_cfg = subprocess.run(['/usr/bin/ipset', 'save'], stdout = f)

# DONE.
#####

[0] https://wiki.archlinux.org/index.php/Ipset

--
brent saner
https://square-r00t.net/
GPG info: https://square-r00t.net/gpg-info
On 26/02/2019 23:01, brent s. wrote:
On 2/26/19 1:20 PM, Genes Lists via arch-general wrote:
On 2/26/19 1:13 PM, Juha Kankare via arch-general wrote:
On 26/02/2019 20:11, Genes Lists via arch-general wrote:
...
My current script is just pulling cn.zone from ipdeny.com. This looks super useful, I'm saving it. Thank you dude!
You're welcome.
I just ran it on cn.zone and it reduces the number of lines from 8,337 to 5,120. It can make a significant difference.
best,
gene
Just to +1 what Gene has said, I've taken similar approaches to compacting into CIDRs and it really does make a significant difference.
For clarification on his ipset[0] point, I also have to strongly recommend it. It not only *greatly* simplifies your ruleset, but it can be dynamically altered without needing to reload your firewall rules.
e.g. assuming you have an IP set named "china_ips",
-A INPUT -m set --match-set china_ips src -p tcp -m tcp --dport 80 -j DROP
will drop traffic for all those entries. You've then simplified many (MANY) rules to one. :)
You can (Gene, you may find this particularly useful since you feed to ipset) use the pyroute2.IPSet() function to actually manage the live kernel's ipsets as well. Make sure your running kernel and latest installed kernel match, otherwise you'll need to reboot so the ipset kernel module can be loaded.
Untested, but should be pretty darn close if not functional:
...
Yes, I wrote a portable shell script to do ipset already. It literally just blocks China, it should run on Research Unix from the '70s as long as it had ipset, iptables and wget, and in general it has beautiful-looking output, with info in case the user really chooses to do some unrecommended things. You, too, should check it out: https://bbs.archlinux.org/viewtopic.php?pid=1833895#p1833895, it's pretty neat.
-- Regards, Juha Kankare
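(For the archives, the bare-bones shape of such a script, not the actual one linked above, is roughly:

wget -qO /tmp/cn.zone https://www.ipdeny.com/ipblocks/data/countries/cn.zone
ipset create china_ips hash:net -exist
ipset flush china_ips
while read -r net; do ipset add china_ips "$net" -exist; done < /tmp/cn.zone
iptables -I INPUT -m set --match-set china_ips src -j DROP

plus the error checking and pretty output that the real thing has.)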
On 2/26/19 4:01 PM, brent s. wrote: ...
You can (Gene, you may find this particularly useful since you feed to ipset) use the pyroute2.IPSet() function to actually manage the live
Great, thank you - I wasn't aware of this capability. I really like python! ipset made a huge difference - major benefit, I agree.

The other thing I do in my firewall script is write the rules in iptables-save format. Many guides continue to use the iptables executable in their examples rather than directly writing a file in iptables-save format. I haven't read any guides for a long time, so perhaps there are better ones now which speak to this.

Rather than invoking iptables repeatedly on each rule, I write an iptables-save formatted file and then use iptables-restore to install the entire firewall in one shot.

thank you brent ...

gene
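As a rough illustration of that format (the rules here are made up, only the structure matters):

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -m set --match-set china_ips src -j DROP
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT

Write something like that to /etc/iptables/iptables.rules and a single `iptables-restore < /etc/iptables/iptables.rules` loads the whole thing in one shot (this is essentially what Arch's iptables.service does for you at boot).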
On 26/02/2019 23:25, Genes Lists via arch-general wrote:
On 2/26/19 4:01 PM, brent s. wrote:
...
You can (Gene, you may find this particularly useful since you feed to ipset) use the pyroute2.IPSet() function to actually manage the live
Great thank you - I wasn't aware of this capability. I really like python! ipset made a huge difference - major benefit I agree.
The other thing I do in my firewall script is I write the rules in iptables-save format. Many guides continue to use the iptables executable in their examples rather than directly writing into a file in iptables-save format. I haven't read any guides for a long time, so perhaps there are better ones now which speak to this.
Rather than invoking iptables repeatedly on each rule, i write an iptables-save formatted file and then use iptables-restore to install the entire firewall in one shot.
thank you brent ...
gene
I feel like it's easier to just let the command do the formatting. On top of that, doing the same for ipset requires a lot of extra lines and formatting for something very simple. Simply iterating through the IPs with the ipset executable makes creating the lists that much easier.
-- Regards, Juha Kankare
On Tue, Feb 26, 2019 at 04:25:37PM -0500, Genes Lists via arch-general wrote:
On 2/26/19 4:01 PM, brent s. wrote:
...
You can (Gene, you may find this particularly useful since you feed to ipset) use the pyroute2.IPSet() function to actually manage the live
Great thank you - I wasn't aware of this capability. I really like python! ipset made a huge difference - major benefit I agree.
The aur/iprange package is another alternative for manipulating IP lists. It can optimize/merge/compare/convert in pretty much any way you like. Written in C; source is here: https://github.com/firehol/iprange
DcUK
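If memory serves, the default invocation already merges and optimizes, i.e. something like `iprange cn.zone > cn-merged.zone`, but check iprange(8) for the exact options since I'm going from memory here.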
On 2/28/19 9:21 AM, DcUK wrote: ..
The aur/iprange package is another alternative for manipulating IP lists.
It can optimize/merge/compare/convert in pretty much any way you like.
Thanks - wasn't aware of this one either. A quick glance at the source and it seems to be IPV4 only, with no IPV6 support. I may have missed it; I only glanced at the C code briefly. The python library does have the advantage of handling IPV6 as well.
gene
On 02/26/2019 06:40 AM, Juha Kankare via arch-general wrote:
I'm getting a lot of connections from China, it seems. Whenever I check my journalctl, it's an endless wall of nginx complaints about a single IP spamming requests for different PHP files. This happens with hundreds of IPs, tens of times daily. Has anyone else been hit by this? I already made a shell script to block all connections from China, but I'm curious as to why this happens, and if anyone else has had the same problem.
I take the sledge-hammer approach and simply block the entire APNIC and AFRINIC IP blocks and a good portion of RIPE with iptables. That dramatically reduces the amount of mischief coming from the internet. Then I whitelist specific IPs if needed for some individual package. Not optimal, but very, very effective.

The top 2 offenders are RIPE ranges; China ranks number 3, and India provides an impressive number 4 from 45.112.0.0/12 alone. My top-20 offenders are:

Chain INPUT
      pkts   bytes  Source
 1   99639   5901K  185.0.0.0/8
 2   27859   1671K  141.0.0.0/8
 3   14529    792K  220.0.0.0/8
 4   14188   1061K  45.112.0.0/12
 5   12852    766K  213.0.0.0/8
 6   11428    680K  89.0.0.0/8
 7    9340    636K  193.0.0.0/8
 8    9215    542K  46.0.0.0/8
 9    8685    479K  91.0.0.0/8
10    8134    484K  180.0.0.0/8
11    7929    470K  93.0.0.0/8
12    7363    428K  5.0.0.0/8
13    7059    419K  109.0.0.0/8
14    5686    328K  202.0.0.0/8
15    5030    298K  85.0.0.0/8
16    4194    240K  195.0.0.0/8
17    4190    245K  178.0.0.0/8
18    4125    238K  188.0.0.0/8
19    4111    243K  77.0.0.0/8
20    3818    225K  80.0.0.0/8

--
David C. Rankin, J.D., P.E.
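The rules themselves are dead simple; a sketch with made-up addresses (the /8 is from the table above, the whitelisted host is just an example from the documentation range):

iptables -A INPUT -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -s 185.0.0.0/8 -j DROP

Order matters: the whitelist ACCEPT has to sit before the blanket DROP, since iptables stops at the first matching rule.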
participants (13)
- Andy Pieters
- Bjoern Franke
- brent s.
- Caleb Allen
- David C. Rankin
- DcUK
- Eric Brown
- Felix Yan
- Genes Lists
- Juha Kankare
- Ralf Mardorf
- Ralph Corderoy
- Zorro