cpu-architecture numerical dma

Interrupt time in DMA operation


I'm facing difficulty with the following question:

Consider a disk drive with the following specifications.

16 surfaces, 512 tracks/surface, 512 sectors/track, 1 KB/sector, rotation speed 3000 rpm. The disk is operated in cycle-stealing mode whereby, whenever a 1-byte word is ready, it is sent to memory; similarly, for writing, the disk interface reads a 4-byte word from memory in each DMA cycle. Memory cycle time is 40 ns. The maximum percentage of time that the CPU gets blocked during DMA operation is?

The solution to this question provided on one site is:

  Revolutions Per Min = 3000 RPM 
     or   3000/60 = 50 RPS 
  In 1 Round it can read = 512 KB 
  No. of tracks read per second = (2^19/2^2)*50
                                = 6553600 ............. (1)
          Interrupt = 6553600 takes 0.2621 sec
          Percentage Gain = (0.2621/1)*100
                          = 26 %

I have understood the solution up to (1).

Can anybody explain to me where the 0.2621 comes from? How is the interrupt time calculated? Please help.


Solution

  • Working backwards from the numbers you've given: that's 6553600 * 40 ns, which gives 0.2621 sec.

    One quite obvious problem is that the comments in the calculations are somewhat wrong. It's not

    Revolutions Per Min = 3000 RPM ~ or   3000/60 = 50 RPS 
    In 1 Round it can read = 512 KB 
    No. of tracks read per second = (2^19/2^2)*50   <- WRONG
    

    The numbers are 512K / 4 * 50, so the figure is derived from bytes. How could that be called a 'number of tracks'? Reading a full track takes one full rotation, so the number of tracks readable in 1 second is just 50, since there are 50 RPS.

    However, the total number of bytes readable in 1 s is then simply 512K * 50, since 512K is the amount of data on one track.

    But then it is further divided by 4...
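
    Spelling that out numerically (a quick Python sketch; the variable names are mine, purely for illustration):

        # Raw disk throughput, and the division by 4 from the quoted solution.
        bytes_per_track = 512 * 1024      # 512 sectors/track * 1 KB/sector = 512K = 2**19 bytes
        rps = 3000 / 60                   # 50 revolutions per second

        bytes_per_second = bytes_per_track * rps    # 26214400.0 bytes readable per second
        quoted_figure = bytes_per_second / 4        # 6553600.0 -- the number labelled 'tracks' above
        print(bytes_per_second, quoted_figure)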

    So, I guess, the actual comments should be:

    Revolutions Per Min = 3000 RPM ~ or   3000/60 = 50 RPS 
    In 1 Round it can read = 512 KB 
    Interrupts per second = (2^19/2^2) * 50 = 6553600 (*)
    

    Each interrupt triggers one memory operation, so then:

    total wasted: 6553600 * 40ns = 0.2621 sec. 
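
    Continuing the sketch above, the wasted time and the percentage the question actually asks for (again, just my own arithmetic check):

        interrupts_per_second = 6_553_600            # the corrected figure from above
        memory_cycle = 40e-9                         # 40 ns per stolen memory cycle

        blocked_per_second = interrupts_per_second * memory_cycle   # ~0.2621 s out of every second
        print(blocked_per_second, blocked_per_second * 100)         # ~0.26214, ~26.2 (%)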
    

    However, I don't really like how the 'number of interrupts per second' is calculated. I currently don't see/feel/guess how or why it's just bytes/4.

    The only VAGUE explanation of that "divide it by 4" I can think of is:

    At each byte written to the controller's memory, an event is triggered. However, the DMA controller can read only PACKETS of 4 bytes, so the hardware DMA controller must WAIT until there are at least 4 bytes ready to be read. Only then does the DMA kick in and halt the bus (or part of it) for the duration of the one memory cycle needed to copy the data. While the bus is frozen, the processor MAY have to wait. It doesn't NEED to; it can keep doing its own ops and working out of cache, but if it tries to touch memory, it will have to wait until the DMA transfer finishes.
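
    If that reading is right, a toy model reproduces the ~26% figure. This is purely illustrative and rests on the assumptions above (one stolen 40 ns memory cycle per 4 accumulated bytes), not on anything the question guarantees:

        # Toy model: bytes arrive from the disk at a steady rate; every time 4 bytes
        # have accumulated, the DMA steals the bus for one 40 ns memory cycle.
        bytes_per_second = 512 * 1024 * (3000 / 60)   # ~26.2 million bytes per second
        memory_cycle = 40e-9                           # bus is frozen for 40 ns per transfer

        stolen = 0.0
        buffered = 0
        for _ in range(1_000_000):                     # simulate the first million bytes
            buffered += 1
            if buffered == 4:                          # a 4-byte packet is ready
                stolen += memory_cycle                 # DMA steals one memory cycle
                buffered = 0

        elapsed = 1_000_000 / bytes_per_second         # wall-clock time to deliver those bytes
        print(stolen / elapsed * 100)                  # ~26.2 (% of time the bus is stolen)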

    However, there are a few things I don't like about this "explanation", and I cannot guarantee that it is valid. It really depends on what architecture you are analyzing and how the DMA, CPU and bus are organized.