Wednesday, February 28, 2007

Continuation of the "Did you know?" series

The Code Red worm is a computer worm that attacked machines running Microsoft's IIS web server. Within a week of its release (July 13th, 2001) it had infected roughly 359,000 hosts. It exploited a buffer overflow vulnerability in the indexing software distributed with IIS. This worm, along with others like Slammer and Nimda, is a fast-scanning worm. The effectiveness of these worms can be attributed to the fact that IPv4 addresses are only 32 bits long, which makes exhaustive random scanning feasible. This can make one believe that adopting 128-bit IPv6 addressing, with its sparse address space, should stem the speed of these worms, assuming the number of Internet hosts does not go up by a similar factor. The work factor for finding a target in an IPv6 Internet increases by approximately 2^96, rendering random scanning prohibitively expensive. This raises a good point of discussion: can this be a good solution?
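To make the arithmetic concrete, here is a rough back-of-the-envelope sketch (assuming probes are drawn uniformly at random, and taking the Code Red population above as the number of vulnerable hosts):

```python
# Back-of-the-envelope comparison of random-scanning cost in IPv4 vs IPv6.
# Assumption: probes are uniform over the address space, so the expected
# number of probes before the first hit is (space size) / (vulnerable hosts).

N_VULNERABLE = 359_000        # Code Red's rough infected population, as above

ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

probes_v4 = ipv4_space / N_VULNERABLE   # ~1.2e4 probes: trivial
probes_v6 = ipv6_space / N_VULNERABLE   # ~9.5e32 probes: hopeless

print(f"expected probes, IPv4: {probes_v4:.2e}")
print(f"expected probes, IPv6: {probes_v6:.2e}")

# The population size cancels out of the ratio, leaving exactly 2^96.
work_factor = ipv6_space // ipv4_space
print(f"work factor increase: 2^{work_factor.bit_length() - 1}")
```

The 2^96 figure quoted above falls straight out of the ratio of the two address-space sizes, independent of how many vulnerable hosts there are.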

4 comments:

Lakshminarayanan Subramanian said...

Actually, just because the address space is sparse, it does not mean that the worm will be any less effective. Do you agree or not?

Shobhit S Thapar said...

The effectiveness of the worm may not be any lower, but in an IPv6 environment random scanning is going to be far more expensive. Random scanning was the strategy used by worms like Slammer and Witty, and in an IPv6 environment the work factor of finding a target goes up by 2^96.

Worms that use random scanning would thus become too expensive to run. Worms would instead have to use other techniques, such as DNS-based scanning, in which a worm guesses DNS names instead of IP addresses and uses the DNS infrastructure to locate likely targets. Such a worm can exhibit propagation speeds comparable to an IPv4 random-scanning worm.
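A minimal sketch of this DNS-guessing idea (the host labels and domain names here are hypothetical placeholders): rather than probing random addresses, resolve likely hostnames and let the DNS infrastructure reveal live targets.

```python
# Sketch of DNS-based target location: guess common hostnames instead of
# random addresses. Labels and domains below are made-up placeholders.
import socket

COMMON_LABELS = ["www", "mail", "ftp", "ns1", "smtp", "web"]
DOMAINS = ["example.com", "example.org"]   # hypothetical target zones

def dns_guess_targets():
    """Yield (name, address) pairs for guessed names that actually resolve."""
    seen = set()
    for domain in DOMAINS:
        for label in COMMON_LABELS:
            name = f"{label}.{domain}"
            try:
                # getaddrinfo returns both A (IPv4) and AAAA (IPv6) records,
                # so the same guessing strategy works in an IPv6 Internet.
                for info in socket.getaddrinfo(name, None):
                    addr = info[4][0]
                    if (name, addr) not in seen:
                        seen.add((name, addr))
                        yield name, addr
            except socket.gaierror:
                continue   # name does not exist; try the next guess

for name, addr in dns_guess_targets():
    print(name, "->", addr)
```

The cost per guess is one DNS lookup rather than one connection attempt to a random address, which is why such a worm can spread at speeds comparable to an IPv4 random-scanning worm.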

Sherman said...

Are the addresses in IPv6 uniformly distributed?

The worm could also use a hybrid approach: first a DNS guess, then an IP guess. IPv6 addresses within a network/subnet are still contiguous...

Yuri said...

Sherman's question is great. How will IPs be assigned? Heuristics could be designed to take advantage of poor distributions or known densities of addresses: for example, random scanning within a particular subnet, or, where there is a known gap between addresses, skipping ahead by that gap once you find a hit to scan for the next one. Just because there are more possibilities does not guarantee that assignment will be implemented with security in mind, or that an implementation intended to be secure cannot be undermined.

Even if assignment is 'random', when we look at random algorithms we can sometimes find ways to attack them and show that we can in fact predict the next output with some level of certainty.
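A small sketch of the gap heuristic (the prefix length, gap, and addresses are hypothetical): given one discovered host, guess its neighbours by stepping through interface IDs within the same subnet instead of scanning the full 128-bit space at random.

```python
# Sketch of density-based guessing in IPv6: administrators often number
# hosts sequentially (::1, ::2, ...), so one hit within a /64 suggests
# where the others are. Prefix length and gap below are assumptions.
import ipaddress

def neighbour_guesses(hit, prefix_len=64, gap=1, count=10):
    """Given one discovered address, return `count` guesses spaced `gap`
    apart within the same subnet, starting from the network base."""
    net = ipaddress.ip_network(f"{hit}/{prefix_len}", strict=False)
    base = int(net.network_address)
    return [ipaddress.ip_address(base + i * gap) for i in range(1, count + 1)]

# Hypothetical example: a host was found at 2001:db8::5.
for guess in neighbour_guesses("2001:db8::5"):
    print(guess)   # 2001:db8::1, 2001:db8::2, ... 2001:db8::a
```

If the heuristic is right, the effective search space collapses from 2^128 back to something much closer to IPv4 scale, which is exactly the concern raised here.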