DNS has been a source of concern for security practitioners for years. I think of the domain name system as the language center of the Internet brain. Without it, we’re reduced to pantomime and smoke signals. The major news of last year was the practical method of poisoning the DNS caches of remote servers discovered by Dan Kaminsky. Most recently, we’ve seen news of a trojan, apparently a variant of the Trojan.Flush.M first seen last December, that targets your entire LAN by leveraging another commonly used protocol.
To see what happened after Dan Kaminsky announced the means of exploiting the flaw, but didn’t provide details, see the following locations:
First came this speculation on July 21, 2008:
Then came the Matasano “leak” on July 22nd, which was pulled, but captured by some:
…and finally the Metasploit code on July 23rd:
Metasploit pwned by their own exploit (via their ISP, AT&T) on July 29th:
The recently discovered trojan was described in an article in The Register which also provides links to information on the SANS site. Once the trojan is activated, it launches a rogue DHCP server on the host system and subsequently tricks victims on the host’s LAN into using external DNS servers. These DNS servers can be used to direct victims to phishing sites, etc. Johannes Ullrich, the CTO of SANS Internet Storm Center, correctly assesses the risk:
“This kind of malware is definitely dangerous because it affects systems that themselves are not vulnerable” to the trojan, Ullrich told The Register. “So all you need is one system infected in the network and it will affect a lot of other nonvulnerable systems.”
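To make the mechanism concrete, here is a minimal sketch (with hypothetical addresses) of how a DHCP OFFER carries its DNS-server list: the options field is a series of tag/length/value triples, and tag 6 holds one or more 4-byte resolver addresses. A rogue server like this trojan’s simply places its own resolvers there.

```python
import socket

def dns_servers_from_dhcp_options(options: bytes) -> list:
    """Extract DNS server IPs (option 6) from a DHCP options blob."""
    servers = []
    i = 0
    while i < len(options):
        tag = options[i]
        if tag == 255:              # end option
            break
        if tag == 0:                # pad option
            i += 1
            continue
        length = options[i + 1]
        value = options[i + 2:i + 2 + length]
        if tag == 6:                # domain name server option
            for off in range(0, length, 4):
                servers.append(socket.inet_ntoa(value[off:off + 4]))
        i += 2 + length
    return servers

# A rogue OFFER need only list the attacker's resolvers in option 6
# (addresses below are hypothetical, from the 203.0.113.0/24 doc range):
rogue = bytes([53, 1, 2,                        # option 53: DHCPOFFER
               6, 8,                            # option 6, two servers
               203, 0, 113, 5, 203, 0, 113, 6,
               255])                            # end option
print(dns_servers_from_dhcp_options(rogue))     # ['203.0.113.5', '203.0.113.6']
```

Every client that accepts such an offer will silently send its DNS queries wherever the attacker chooses.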
The aforementioned article provides three possible approaches to thwarting such a trojan:
- Hard-coding DNS servers
- Monitoring outbound DNS requests
- Blacklisting the bad DNS servers (blocking requests to them)
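For a sense of what the second option entails, here is a rough sketch, assuming simplified flow records (source, destination, destination port) and hypothetical internal resolver addresses. A real deployment would feed this from NetFlow or a packet tap, and a human would still have to judge each hit.

```python
# Assumed addresses of the approved internal resolvers (hypothetical):
APPROVED_RESOLVERS = {"10.0.0.53", "10.0.1.53"}

def suspicious_dns_flows(flows):
    """Return flows that look like DNS queries to unapproved servers."""
    return [f for f in flows
            if f[2] == 53 and f[1] not in APPROVED_RESOLVERS]

flows = [
    ("10.0.5.12", "10.0.0.53", 53),     # normal: approved resolver
    ("10.0.5.40", "198.51.100.9", 53),  # suspicious: external DNS server
    ("10.0.5.40", "10.0.8.80", 80),     # not DNS, ignored
]
print(suspicious_dns_flows(flows))      # [('10.0.5.40', '198.51.100.9', 53)]
```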
The first option is hardly practical in networks of more than a few nodes. The last option is ineffective: there is no guarantee that the trojan, or future variants, will continue to use the same DNS servers. That leaves us with the second option. In my opinion, this is impractical as well, since it is labor intensive or requires the development of automation. Even with some automation, a decision will still ultimately have to be made by a human. So what’s left? Something quite practical and effective, at least for enterprise networks.
Best practice for DNS architectures would suggest that internal DNS servers forward all requests to DNS servers in a DMZ, and those servers communicate with the Internet. Additionally, simple firewall rules/packet filtering should drop all DNS requests which do not come from the approved internal DNS servers at both the internal and DMZ interfaces. This forces all internal clients to use the approved internal DNS servers. If a rogue DHCP server offers another set of DNS servers, any affected clients will simply stop resolving names, and a visible outage is much better than not knowing that those clients have become the victims of malware elsewhere on the internal LAN.
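As a sketch of that filtering policy (the real thing would of course be firewall rules, not application code), assuming hypothetical addresses for the internal resolvers and the DMZ forwarders:

```python
# Hypothetical addresses for this sketch:
INTERNAL_RESOLVERS = {"10.0.0.53", "10.0.1.53"}   # approved internal DNS
DMZ_FORWARDERS = {"192.0.2.10", "192.0.2.11"}     # DMZ DNS servers

def allow_dns_query(interface: str, src_ip: str) -> bool:
    """Permit port-53 queries only along the approved forwarding chain."""
    if interface == "internal":       # internal net -> DMZ
        return src_ip in INTERNAL_RESOLVERS
    if interface == "dmz":            # DMZ -> Internet
        return src_ip in DMZ_FORWARDERS
    return False

print(allow_dns_query("internal", "10.0.0.53"))   # True: approved resolver
print(allow_dns_query("internal", "10.0.5.40"))   # False: redirected victim blocked
print(allow_dns_query("dmz", "192.0.2.10"))       # True: DMZ forwarder
```

A client redirected to rogue DNS servers fails the internal-interface check immediately, which is exactly the visible breakage we want.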
Furthermore, if you use layer 3 switches with packet filtering capabilities, you can limit the effects of rogue DHCP servers by dropping UDP packets with a source port of 67 from all but your approved DHCP server(s). This will restrict the effectiveness of a rogue DHCP server to a single broadcast domain, which might be a floor of one building. This is good practice anyway. Without this, I’ve seen rogue DHCP servers take out hosts across several buildings in a campus environment. There is never a silver bullet in security — use all the counter-measures you can.
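The switch ACL described above amounts to a simple predicate: drop UDP packets with source port 67 (DHCP server-to-client traffic) unless they come from the approved DHCP servers. A sketch, again with hypothetical addresses:

```python
# Assumed addresses of the approved DHCP servers (hypothetical):
APPROVED_DHCP_SERVERS = {"10.0.0.11", "10.0.0.12"}

def allow_packet(proto: str, src_ip: str, sport: int) -> bool:
    """Drop DHCP server traffic unless it comes from an approved server."""
    if proto == "udp" and sport == 67:
        return src_ip in APPROVED_DHCP_SERVERS
    return True                       # all other traffic is unaffected

print(allow_packet("udp", "10.0.0.11", 67))   # True: approved DHCP server
print(allow_packet("udp", "10.0.5.40", 67))   # False: rogue server filtered
print(allow_packet("udp", "10.0.5.40", 53))   # True: rule targets DHCP only
```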
Oh, so why the caveat about enterprise networks? If you’re an ISP, for example, you’re likely to make customers unhappy if you require them to use your DNS servers. Some may try, but I’d guess they will face public criticism over privacy at the very least. DNS data has many uses, so be careful who you give yours to.