The time has arrived!  Malware Monday, as some have labeled it.  The FBI has shut down the temporary DNS servers it maintained to allow those infected with the DNSChanger malware to continue to operate, providing additional time for cleanup.  The malware re-directed web site requests to sites where its authors could make money off advertising — so-called “click hijacking.”  And make money they did — supposedly about $14 million USD. (more…)


Networks, like the enterprises they support, evolve over time.  It is extremely rare that one has the opportunity to re-evaluate the underlying assumptions behind a logical network design and its IP address schema and, with the advantage of hindsight, make course corrections that provide flexibility and accommodate the security controls needed now and into the future.  Such an opportunity may come along only once a decade or more.  Most corporate enterprises did not connect to the Internet until the late 1990s or early 2000s, when their experience with TCP/IP was limited, yet many are still living with the choices made long ago.  If you could re-design your enterprise network IP address space today, what would you change?  The example that follows provides one such way for a large private network.  Of course, you have to have a driver for undertaking such a project, and the creation of security zones is a good one! (more…)

Establishing isolated security zones within an enterprise network is an effective strategy for reducing many types of risk, and this is especially obvious when one considers how permeable networks are today.  The old perimeter defense model is no longer sufficient.  Some would argue it is no longer necessary — that de-perimeterization is inevitable, we should prepare for a future of blended networks without clear boundaries and security should be moved inward.  Ultimately, all security is about protecting a valuable asset – data – but that protection involves a defense-in-depth strategy that includes all layers. (more…)

Okay, I have to admit I was concerned.  We installed Windows Server 2008 R2, set up our applications, and then proceeded to add the required configuration to have Nagios monitor the host and associated services.  But Nagios claimed the host was down!  Simple pings returned a response.  Now what?


I configure Nagios to use check_tping to monitor hosts, not the standard ICMP ping.  Why, you ask?  Some network devices do not handle ICMP on their fast path in silicon, leaving it to be processed by the CPU.  During periods when the CPU is busy, ICMP will not be a good measure of host or network responsiveness.  In some networks, ICMP may be handled with a different QoS profile.  The best gauge of response over a network is something close to what applications use.  Guess what?  The Transmission Control Protocol!  How is it leveraged for monitoring?  A SYN packet is sent directly to the host on a port that is closed (it could be an open port — more on that later).  A host operating system will typically respond with a TCP reset (RST/ACK to be exact) — a simple two-packet exchange without extra overhead — a kind of network equivalent to “take pictures, leave [very few] footprints!”  If there is an intervening firewall device, the chosen port will have to be opened to allow the SYN packets to reach the destination host. (more…)
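The same idea can be sketched in Python — as a simplified illustration only, not the check_tping plugin itself.  Crafting a bare SYN and watching for the RST requires raw sockets, so this sketch uses a full connect() attempt instead: a *refused* connection still proves the host is up, because the refusal is the host answering our SYN with a reset.

```python
import errno
import socket

def tcp_ping(host, port, timeout=2.0):
    """Probe host liveness with TCP instead of ICMP.

    If the port is closed, the OS answers our SYN with RST/ACK, so a
    refused connection still proves the host is up.  Only a timeout
    or an 'unreachable' error suggests the host is down.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        result = s.connect_ex((host, port))  # returns an errno, 0 on success
    except socket.timeout:
        return False  # no SYN/ACK and no RST within the timeout
    finally:
        s.close()
    # 0 = port open; ECONNREFUSED = port closed but the host responded (RST)
    return result in (0, errno.ECONNREFUSED)

# Example: probe the local host on a port that is almost certainly closed.
print(tcp_ping("127.0.0.1", 1))
```

Note the asymmetry with ICMP ping: here “down” means the probe timed out or the network reported the host unreachable, while any TCP-level answer — even a refusal — counts as “up.”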

For about as long as I can remember, every serious DNS administrator has always advocated the use of dig (Domain Information Groper) over nslookup.  There’s no need for me to rehash all of the arguments — I’ll just say that dig returns information in a manner consistent with what a protocol analyzer might provide.  That’s great, but isn’t it only for Un*x systems?  I need to be able to debug from a Windows system. (more…)

DNS has been a source of concern for security practitioners for years.  I think of the domain name system as the language center of the Internet brain.  Without it, we’re reduced to pantomime and smoke signals.  The major news of last year was the practical method of poisoning the DNS caches of remote servers discovered by Dan Kaminsky. Most recently, we’ve seen news of a trojan that seems to be a variant of Trojan.Flush.M, first seen last December, that targets your entire LAN by leveraging another commonly-used protocol.

For some reason, DNS scares a lot of people — and the most frightening topic of all is probably delegation of reverse (in-addr.arpa) zones on non-octet boundaries. That’s probably why most small organizations leave this entirely to an ISP, and then deal with some web-based tool for making changes to the zone, rather than operating their own master nameserver for these zones and leaving the ISP to provide secondary service.

RFC 2317 outlines the technique thoroughly, based on a trick described by Glen Herrmannsfeldt on comp.protocols.tcp-ip.domain. Below is a brief description of how this works. Read the RFC for the complete details. (more…)
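In outline, the RFC 2317 trick works like this: the ISP keeps the parent reverse zone, delegates a child zone whose name encodes the customer’s sub-octet block, and points each individual address at that child zone with a CNAME. A hypothetical sketch for a /26, using the RFC 5737 documentation block 192.0.2.0/24 and made-up hostnames:

```zone
; Parent zone: 2.0.192.in-addr.arpa (held by the ISP).
; Delegate reverse mapping for 192.0.2.0/26 by creating a child zone
; named "0/26" and aliasing each address into it with a CNAME.
0/26    IN NS    ns1.customer.example.
1       IN CNAME 1.0/26.2.0.192.in-addr.arpa.
2       IN CNAME 2.0/26.2.0.192.in-addr.arpa.
; ... one CNAME per address through 62 ...

; Child zone: 0/26.2.0.192.in-addr.arpa (served by the customer).
; Ordinary PTR records; resolvers reach them by following the CNAMEs.
1       IN PTR   host1.customer.example.
2       IN PTR   host2.customer.example.
```

A resolver looking up 2.0.2.192.in-addr.arpa follows the CNAME into the customer’s zone and finds the PTR there — so the customer edits PTR records directly, while the ISP’s parent zone never changes after the one-time setup.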
