For the last few months, we have been tracking server-level compromises that use malicious Apache modules (Darkleech) to inject malware into websites. Some of our previous coverage is available here and here.
However, during the last few months we started to see a change in how the injections were being done. On cPanel-based servers, instead of adding modules or modifying the Apache configuration, the attackers started to replace the Apache binary (httpd) with a malicious one. This new backdoor is very sophisticated, and we worked with our friends at ESET to provide this report on what we are seeing.
Detection
In our previous posts, we recommended using tools like “rpm -Va”, “rpm -qf”, or “dpkg -S” to check whether the Apache modules were modified. However, those techniques won’t work against this backdoor. Since cPanel installs Apache inside /usr/local/apache and does not use the package managers, there is no single, simple command to detect whether the Apache binary was modified.
The attackers also keep the original timestamp on the binary, so you can’t spot the change by the file’s date. A good and reliable way to identify the modified binary is to search for “open_tty” in the Apache directory:
# grep -r open_tty /usr/local/apache/
If it finds open_tty in your Apache binary, it is likely compromised, since the original Apache binary does not contain a call to open_tty. Another interesting point: if you try to simply replace the bad binary with a good one, the operation will fail, because the attackers set the immutable file attribute on it. So you have to run chattr -ai before replacing it:
# chattr -ai /usr/local/apache/bin/httpd
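If you want a quick way to run both checks together, here is a minimal sketch; it simply assumes the default cPanel path /usr/local/apache/bin/httpd, so adjust it if your layout differs:

#!/bin/sh
# Check the two indicators described above: the open_tty string and the immutable attribute.
HTTPD=/usr/local/apache/bin/httpd
if grep -q open_tty "$HTTPD"; then
    echo "WARNING: $HTTPD contains open_tty - likely compromised"
fi
# A lowercase 'i' in the lsattr output means the immutable attribute is set.
lsattr "$HTTPD"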
Injections
The compromised binary doesn’t change anything about how the site works or looks. However, on some requests (roughly once per day per IP address), instead of just serving the content, it also adds a malicious redirect. That causes the browser to load content from what appear to be random domains:
http://893111632ce77ff9.aliz.co.kr/index.php (62.212.130.115)
http://894651446c103f0e.after1201.com (62.212.130.115)
http://328aaaf8978cc492.ajintechno.co.kr (62.212.130.115)
http://23024b407634252a.ajaxstudy.net (62.212.130.115)
http://cdb9156b281f7b01.ajuelec.co.kr (62.212.130.115)
..
And many others like that. So if a browser requests a JavaScript file, the server returns a 302 (redirect) pointing to:
Location: http://dcb84fc82e1f7b01.alarm-gsm.be/index.php?j=originalfilebase64
Where “originalfilebase64” is a base64-encoded string of the URL that was requested. That allows the attackers to return the malware along with the original content. Once the malware is loaded, it redirects the visitor to spammy sites (most often porn pages). At the sites we analyzed, visitors were being pushed to httx://amazingtubesites.org (which seems to be offline now). In some cases we also saw the redirection going to the Blackhole Exploit Kit.
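If you want to see which URL a given redirect was generated for, the query string can be decoded with any base64 tool. The value below is a made-up example (not taken from a real infection), just to show the idea:

# echo 'aHR0cDovL2V4YW1wbGUuY29tL2pzL2pxdWVyeS5qcw==' | base64 -d
http://example.com/js/jquery.js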
Note that those URLs change very often; the ESET team has identified more than 30,000 variations of them.
The Backdoor
Our friends at ESET (Marc-Etienne, Olivier Bilodeau, and Pierre-Marc Bureau) also analyzed the binary and discovered a nasty hidden backdoor. You should read the full article here for more details, but here is a brief explanation quoted from it:
Linux/Cdorked.A is one of the most sophisticated Apache backdoor we have seen so far. Although we are still processing the data, our Livegrid system reports hundreds of compromised servers and thousands of potential victims. The backdoor leaves no traces on the hard drive of compromised hosts other than its modified httpd binary. All the information related to the backdoor is stored in shared memory, the configuration is pushed by the attacker through obfuscated HTTP requests that aren’t logged in normal Apache logs. This means that no command and control information is stored anywhere on the system.
..
The HTTP server is equipped with a reverse connect backdoor that can be triggered via a special HTTP GET request. It is invoked when a request to a special path is done with a query string in a particular format, containing the hostname and port to connect. The client IP of the HTTP dialog is used as a key to decrypt the query string as a 4 byte XOR key. Additionally, IP specified in X-Real-IP or X-Forwarded-For headers will override the client IP as the XOR key. This means we can craft a X-Real-IP header that will in effect be a “\x00\x00\x00\x00” key…
As you can see, the attackers don’t need any additional files to act as a backdoor; they just use the Apache binary itself.
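To illustrate why the X-Real-IP trick works: assuming the four octets of the address are used directly as the four key bytes (see the ESET write-up for the exact derivation; this is just a simplification), a spoofed header of 0.0.0.0 produces an all-zero XOR key, which is no encryption at all. You can see the key any given address would produce like this:

# printf '%02x %02x %02x %02x\n' $(echo '192.0.2.10' | tr '.' ' ')
c0 00 02 0a
# printf '%02x %02x %02x %02x\n' $(echo '0.0.0.0' | tr '.' ' ')
00 00 00 00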
The Random URLs
One thing that struck us as very suspicious is that most of the random domains being used for the first-level redirection come from legitimate sites with their DNS hosted at dothost.co.kr:
ajaxstudy.net name server ns1.dothost.co.kr.
ajaxstudy.net name server ns2.dothost.co.kr.
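The same delegation can be verified with the standard dig utility (the output below simply reproduces the records above):

# dig +short NS ajaxstudy.net
ns1.dothost.co.kr.
ns2.dothost.co.kr.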
We are still unsure whether those are compromised accounts or whether the attackers got some type of access to that DNS provider that lets them inject random subdomains into the domains hosted there. We are still tracking how those URLs change, and we will post more details later.
Final thoughts
When attackers get full root access to a server, they can do anything they want, from modifying configurations to injecting modules and replacing binaries. However, their tactics are changing to make it even harder for administrators to detect their presence and recover from the compromise.
We also don’t have enough information to pinpoint how those servers are initially being hacked, but we suspect SSHD-based brute-force attacks.
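If you want to check for that on your own server, the authentication logs are the first place to look. A rough sketch (log locations and formats vary: /var/log/secure on RHEL/CentOS, /var/log/auth.log on Debian/Ubuntu), which lists the source IPs with the most failed SSH logins:

# grep 'Failed password' /var/log/secure | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head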
We will keep monitoring these attacks and will provide more information as we get it.
18 comments
Thanks for this article. I am a new blogger so this is very helpful. It’s hard to know how long it takes to make a successful blog, so “being patient” is among some of the best advice you can give. It’s easy to get discouraged when you don’t see movement, but this gives me some encouragement.
Wth does this have to do with the article?
What about the technology and the need to develop these things always. I will wait for more new information.
Thanks for this article. Waiting for new information.
Anyone find a way to fix this yet?
We reap what we sow. For years, datacentres and hosts have done very little, if anything at all, to detect dubious websites within their systems.
If they actually took some form of action, then the sources for these malicious files would be swiftly removed.
The problem is that when no action is taken, the attacks continue on and on. When I check my logs, I see so many attempted attacks coming from hosts as opposed to ISPs.
I emailed 50 hosts who were based in the USA or western Europe about attacks that took place, including all logs they would need. Not one of them replied or took any action.
It’s the same issue with aeroplane manufacturers. It takes a number of people dying before they spend money on a known issue.
What can we do? The sources of the files are usually compromised sites, and neither the hosts nor the datacentres will do anything. There is no one else to report this to.
Essentially, unless something is done, these attacks will continue and get worse and worse.
Someone needs to take action, but that causes a second issue, which is that no one will step up unless governments mandate it, and that is not really the best direction to take. At the end of the day, hosts need security teams who randomly check connections, plus a reporting feature so that issues can be reported to them in a standard format, allowing them to investigate the issue and remove it.
At the moment, sites with 50,000 to 100,000 monthly visitors see 31% malicious traffic. Sites with 2,500 monthly visitors and below see 49% malicious traffic. Source: http://news.cnet.com/8301-1009_3-57433611-83/bots-dominate-small-web-site-traffic-research-shows/
When do we say enough is enough? When will something be done?
I’ve also e-mailed my fair share of ISPs. I’ve had several sites shut down, but more often than not, it’s just ignored (good luck e-mailing Chinese / Russian ISPs….)
Surely this is the core problem. If hosts could have their domains taken from them by RIPE or whoever is responsible for assigning the numbers, then they would have to act on reports.
One of the biggest problems I’ve seen these days is that domain lookups on scammy domains simply return “Domain info protected by whoisguard.com” with a domain-unique hash. Whoisguard.com never replies to e-mails, so you hit a dead end.
Granted, the battle rages on – I recently got the page:
http://www.moineybookers.com/app/login.php (Phishing page for moneybookers.com)
Blacklisted by Opera, Chrome, and Firefox, and then the base domain itself deleted.
The battle is not lost 😛
Agreed. However, emailing the owner (such as via whoisguard) is usually fruitless because if the site is compromised then it is likely they are not on top of it or do not know what they are doing.
I rarely, if ever, contact the owner as there is little point. I did do this recently with one site as it was a charity hospital. They took action immediately and were very grateful.
The real issue is the hosts, as they are enabling it to happen through their networks. If all malicious traffic were stopped today, we would all see a 40-60% reduction in bandwidth usage and could all downgrade to a smaller plan (those without dedicated servers), which would lose the hosts money. I don’t know if this is why they don’t do anything, but it is certainly a good reason. Why would they work to lose money?
I had wanted to design some software for our server firewall so that it compiled emails to the system admins that had been caught attacking us. I could then check them before they were sent to make sure that all data had been correctly compiled by the automated system. Then, if they don’t reply, it could submit the IP and the host to blacklists with details of the email and the attack.
That would solve at least one issue, which is the amount of time it takes.
“so that it compiled emails to the system admins that had been caught attacking us.”
The number of times the e-mail addresses for the sys-admins return gateway errors is rather annoying…
I’ve also found that mailing most Chinese e-mail addresses from Gmail is futile, since it seems they’ve blocked *@gmail.* (and if they didn’t, they simply don’t respond).
The battle would be easily won if ISPs actually cared…
ISPs or hosts?
The email could be sent from an account on the server so it comes from a domain and not Gmail or another provider. If it is returned, it can then email the registrar, as I think it is against the ICANN terms to use an email address that does not work. Then the domain is cancelled and the host doesn’t have to do anything.
If the host’s email address is being returned, they too could lose their domain, as it is against the ICANN terms.
As we have both said, the issue at the moment is that the hosts usually do nothing, so other hosts do not report issues to them. That is an unproductive way of doing it and has resulted in zombie botnets and all the other attacks we server owners see every day. This has been compounded by hosts not providing good enough security tools as part of their default installation, and by server owners not doing their research.
I recently dealt with a company on behalf of a client and they told me “we don’t have any security or any antivirus as they only infect Windows systems, not Linux”. That was the end of our relationship with them. Clearly they do not know enough, and I found their system had not been updated once since the day it was installed over 3 years ago!
The most annoying part is that, with a bit of money and some pull, you could most likely rather easily find the individual involved in most cases. The simple fact is that people don’t care.
Remember the PSN hack a while ago (DB passwords in plain text – but that’s for another day) – track the IPs, get the FBI to seize the PC (assuming it was proxied), get the logs from the ISP (they fold under official pressure), find the IP of the person using the box at that time (network traffic from the ISP), contact THAT ISP (and so on, and so forth), fly over to the guy, and knock on his front door – it’s not exactly rocket science. The simple fact is that those who CAN do something about it won’t, since it’s too much of a hassle…
And the NSA can track everyone. You would have thought the FBI could have just gone to the NSA to get all this data.
Maybe the solution is to create a group of hosts who all sign an agreement and block other hosts who do not conform. That fixes the attacks from hosts, but you still have zombie botnets. It is indeed a huge issue.
My site has been affected too. I have not found a solution.