Some numbers and magnitudes

To produce a good design for any system, the “architect” needs a good understanding of the size of the problem and some references regarding orders of magnitude. This is a valid statement for many aspects of life that we already take for granted. Everybody (almost !!) understands the quantity of ingredients needed for a cake, or how much petrol the car needs when it is close to empty. If in pre-paid mode, you go to the counter and say your estimate. This estimate is basically done by experience and some common sense. But imagine that someone asks you to estimate the petrol needed for a fleet of trucks or a fleet of ships !! In this case, our experience is not valid, and our common sense might lead to a very big deviation. Imagine that you are the one that will pay a penalty based on that deviation…

It is clear that some well-proven numbers and orders of magnitude will help in any “new” system design. This is even more true when the design is for very large systems. Let’s see some of these below.

Internet addresses and connected devices…

Total of v4 IPs: 2^32 = 4.294.967.296 –> already exhausted !!
Total of v6 IPs: 2^128 = 340.282.366.920.938.463.463.374.607.431.768.211.456 –> 340 sextillions (a 340 followed by 36 zeros !!!)

IPv6 addresses seem to be a lot (as IPv4 did not so long ago!!), but look at the growth of devices that need an address… Think of all laptops, mobiles and tablets. Add all smart TVs, consoles, IoT devices (cameras, fridges, etc.), vehicles (cars, lorries, etc.)…
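A quick sanity check of these counts, sketched in Python (the last line assumes a world population of roughly 8 billion, just for illustration):

```python
# Address space sizes: IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses
ipv4 = 2 ** 32
ipv6 = 2 ** 128

print(f"{ipv4:,}")   # 4,294,967,296
print(f"{ipv6:,}")   # 340,282,366,920,938,... (39 digits)

# Even sharing among ~8 billion people, everyone gets an astronomical share
print(f"{ipv6 // 8_000_000_000:,} IPv6 addresses per person")
```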

(Table missing from the original page: Gartner forecast of connected devices by category, ending in a Grand Total row. Source: Gartner)

In 2020 there will be more than 20.415.400.000 devices connected (over 20 billion), and this figure will grow in the next years as everything will be connected to the internet. In fact, the internet will disappear as a concept as it will be completely integrated into our lives. See Eric Schmidt talking about this here. Here in text format.

Speed of components

Peter Norvig first and Jeff Dean later provided some numbers that any computer scientist or anyone related to programming should know:

L1 cache reference                        0,5 ns
Branch mispredict                           5 ns
L2 cache reference                          7 ns                              14x L1 cache
Mutex lock/unlock                          25 ns
Main memory reference                     100 ns                              20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy            3.000 ns         3 us
Send 1K bytes over 1 Gbps network      10.000 ns        10 us
Read 4K randomly from SSD*            150.000 ns       150 us                 ~1GB/sec SSD
Read 1 MB sequentially from memory    250.000 ns       250 us
Round trip within same datacenter     500.000 ns       500 us
Read 1 MB sequentially from SSD*    1.000.000 ns     1.000 us       1 ms      ~1GB/sec SSD, 4x memory
Disk seek                          10.000.000 ns    10.000 us      10 ms      20x datacenter round-trip
Read 1 MB sequentially from 1 Gbps 10.000.000 ns    10.000 us      10 ms      40x memory, 10x SSD
Read 1 MB sequentially from disk   30.000.000 ns    30.000 us      30 ms      120x memory, 30x SSD
Send packet CA->Netherlands->CA   150.000.000 ns   150.000 us     150 ms
Read 30 MB sequentially from disk 1.000.000.000 ns 1.000.000 us  1.000 ms    1 s

* Assuming ~1GB/sec SSD

Find some of these numbers also in Intel’s performance documentation (see Table 2 at the end of page 22).

And here is an interactive version of the above numbers.

One finds many performance numbers in CPU cycles. A 3 GHz CPU will do 3.000.000.000 cycles per second. If an instruction takes 3 cycles, it is executed in 1 nanosecond.

1 nanosecond                              0,000000001 seconds   10^-9   1 instruction
100 nanoseconds                           0,0000001 seconds     10^-7   DRAM random read
10 microseconds (10.000 nanoseconds)      0,00001 seconds       10^-5   Ping to localhost
10 milliseconds (10.000.000 nanoseconds)  0,01 seconds          10^-2   HD seek time (or 1 Linux context switch)

Find here a nice infographic for several CPU operations

With these numbers in mind, one can do some calculations for any given system.

     1 memory access takes 100 times an instruction
     1 MB memory sequential read takes 2.500 times a memory access
     1 disk seek takes 100.000 times a memory access
     1 MB disk sequential read takes 300.000 times a memory access

So, if a program needs to read 10 images of 1 MB each:

     100 ms (disk seeks) + 300 ms (disk reads) = ~ 400 ms (milliseconds)

but if we do the reads in parallel,

     10 ms (disk seeks) + 30 ms (disk reads) = ~ 40 ms (milliseconds)

Allow some margin for each read to start, so we can say all images will be read in 40-70 ms when reads are parallelized…
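The estimate above can be sketched directly from the latency table (a minimal Python sketch; the parallel case assumes the reads really can overlap, e.g. images spread over independent disks):

```python
# Latency constants taken from the table above, in milliseconds
DISK_SEEK_MS = 10        # one disk seek
DISK_READ_1MB_MS = 30    # sequential read of 1 MB from disk

images = 10

# Serial: every image pays one seek plus one 1 MB read
serial_ms = images * (DISK_SEEK_MS + DISK_READ_1MB_MS)

# Parallel: all reads overlap, so the total is roughly one seek + one read
parallel_ms = DISK_SEEK_MS + DISK_READ_1MB_MS

print(serial_ms)    # 400
print(parallel_ms)  # 40
```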

Handy conversion guide:

2.5 million seconds per month
1 request per second = 2.5 million requests per month
40 requests per second = 100 million requests per month
400 requests per second = 1 billion requests per month
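The conversion guide boils down to one rounded constant (a sketch in Python, using the same ~2,5 million seconds per month approximation):

```python
SECONDS_PER_MONTH = 2_500_000  # ~30 days, rounded for mental math

def requests_per_month(rps):
    """Convert a sustained requests-per-second rate to requests per month."""
    return rps * SECONDS_PER_MONTH

print(requests_per_month(1))    # 2.500.000        (2,5 million)
print(requests_per_month(40))   # 100.000.000      (100 million)
print(requests_per_month(400))  # 1.000.000.000    (1 billion)
```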

Handy metrics based on numbers above:

Read sequentially from disk at 30 MB/s
Read sequentially from 1 Gbps Ethernet at 100 MB/s
Read sequentially from SSD at 1 GB/s
Read sequentially from main memory at 4 GB/s
6-7 world-wide round trips per second
2,000 round trips per second within a data center
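These handy metrics follow from the 1 MB latencies in the table above; a quick derivation in Python (note the disk figure is ≈33 MB/s, rounded down to 30 MB/s in the list):

```python
# Deriving throughput (MB/s) from the "read 1 MB" latencies above
disk_ms = 30    # 1 MB sequential read from disk
gbe_ms  = 10    # 1 MB over 1 Gbps Ethernet
ssd_ms  = 1     # 1 MB sequential read from SSD
mem_us  = 250   # 1 MB sequential read from main memory

print(1 / (disk_ms / 1000))  # ≈33 MB/s  (listed as 30 MB/s)
print(1 / (gbe_ms / 1000))   # 100 MB/s
print(1 / (ssd_ms / 1000))   # 1.000 MB/s ≈ 1 GB/s
print(1 / (mem_us / 1e6))    # 4.000 MB/s ≈ 4 GB/s

# Round trips per second = 1000 ms divided by the round-trip latency
print(1000 / 150)   # ≈6,7 world-wide round trips per second
print(1000 / 0.5)   # 2.000 round trips per second within a datacenter
```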

You have already seen some numbers; are you ready to answer How fast computers are?

Tom’s Hardware metrics

Tom’s Hardware metrics on several components

Network calculations

There is an important difference when doing network calculations, even if these calculations are to get orders of magnitude…

Network access speed > Network throughput > Goodput

As you can see in Wikipedia, goodput is the application-level throughput (i.e. the number of useful information bits delivered by the network to a certain destination per unit of time). The amount of data considered excludes protocol overhead bits as well as retransmitted data packets. This is related to the amount of time from the first bit of the first packet sent (or delivered) until the last bit of the last packet is delivered.

Network access speed  -->  channel capacity or bandwidth
Network throughput    -->  gross bit rate that is transferred physically
Goodput               -->  application-level throughput

Factors that “decrease” the theoretical capacity are protocol overhead, transport-layer flow control and congestion avoidance (i.e. TCP slow start) and packet retransmission due to congestion.

Wikipedia has a very thorough list of interface bit rates. Here you will jump to the LAN interfaces, but there are many others in the page.

As a basis for fast calculations, consider that the (theoretical) bit rate for a 10 Mb/s LAN, translated to file sizes (megabytes), is 1,25 MB/s. In other words, the theoretical bandwidth (the maximum rate of data transfer across a given path) for an old 10 Mb LAN is 1,25 MB/s. So in a 1 Gb LAN, 125 MB/s can be (theoretically) sent.
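The Mb/s to MB/s conversion is simply a division by 8 (8 bits per byte); a one-liner sketch in Python:

```python
def lan_rate_in_MBps(rate_in_Mbps):
    """Theoretical file-transfer rate of a LAN: divide the bit rate by 8 bits/byte."""
    return rate_in_Mbps / 8

print(lan_rate_in_MBps(10))     # 1.25  -> the old 10 Mb/s LAN
print(lan_rate_in_MBps(1_000))  # 125.0 -> a 1 Gb/s LAN
```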

When we talk about throughput (or goodput), it means we talk about the maximum rate of production or, regarding networks, the rate of successful message delivery over a communication channel (goodput is the same concept but at the application level).

Many times bandwidth and throughput are confused. In a very theoretical world, sending a very big file on a local LAN can reach throughputs close to the real network bandwidth. But when you add controls in transmission protocols, packet collisions, electrical noise, disk seeks, or switches and routers (among LANs), etc., throughput falls well below the bandwidth.

Speed of light

There is a physical limit for any electromagnetic transmission, be it light (via fiber), electricity (via copper) or waves (via microwave transponders). As Einstein demonstrated, nothing can go faster than light.

Light can travel as fast as 300.000 Km/s (the real number is 299.792 Km/s). This is in vacuum conditions. In other media (air, fiber or copper) its speed is reduced to about 2/3 of that, so one can use 200.000 Km/s.

We can use this speed to calculate how much time a ping packet will take to travel 8.000 Km (approx. Barcelona to Dallas).
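A sketch of that calculation, using the rounded 200.000 Km/s figure from above (and remembering that a ping is a round trip):

```python
SIGNAL_SPEED_KM_S = 200_000   # ~2/3 of the speed of light, in fiber or copper
distance_km = 8_000           # approx. Barcelona -> Dallas

one_way_ms = distance_km * 1_000 / SIGNAL_SPEED_KM_S
ping_ms = 2 * one_way_ms      # a ping travels there and back

print(one_way_ms)  # 40.0
print(ping_ms)     # 80.0 -> the best possible ping, before any routing/switching delay
```

Real pings on that route are higher, since packets rarely follow the great-circle path and every hop adds processing time.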