Difference Between Pipelining And Super Scalar Processor

Comparison of pipelining, super pipelining, super scalar processors

Presented By:
Aashrav Shah-13bec094

Abstract— This report covers pipelining, super pipelining, and superscalar processors, and compares how they differ from each other. Pipelining is a technique in which the microprocessor begins executing a second instruction before the first instruction has completed. Super pipelining is an alternative approach to achieving greater performance: each pipeline stage needs only half a clock cycle, so while one instruction is executing, the next has already started and is half finished by the time the first completes. Superscalar processors process multiple instructions in parallel by issuing them to more than one pipeline at once.
In short, the pipelining technique does not wait for the result of the first instruction before starting a new one. Super pipelining is essentially the same idea taken further: if each pipe stage is divided into two sub-stages, the processor achieves twice the normal execution speed in the ideal case. Superscalar processing is found in Pentium processors. After explaining these techniques, this report compares them, and the comparison shows that the execution time of a superscalar system is lower than that of pipelined and super-pipelined systems, because in a superscalar processor each stage can execute two instructions per clock cycle.
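The overlap described above can be captured with a simple cycle-count model. This sketch is my own illustration (not from the report): an ideal k-stage pipeline finishes the first instruction after k cycles and then completes one more instruction every cycle.

```python
# Hypothetical cycle-count model for an ideal pipeline (no stalls or hazards).

def sequential_cycles(n_instructions: int, n_stages: int) -> int:
    # Without pipelining, each instruction occupies all stages in turn.
    return n_instructions * n_stages

def pipelined_cycles(n_instructions: int, n_stages: int) -> int:
    # With pipelining, instruction i+1 starts before instruction i finishes:
    # k cycles to fill the pipe, then one completion per cycle.
    return n_stages + (n_instructions - 1)

print(sequential_cycles(100, 5))  # 500 cycles
print(pipelined_cycles(100, 5))   # 104 cycles
```

For 100 instructions on a 5-stage pipeline, overlap cuts the total from 500 cycles to 104, which approaches the ideal one-instruction-per-cycle rate as the instruction count grows.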
Write: This stage is responsible for writing back the result computed in the execute stage.
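The stage sequence ending in write-back can be sketched as a pipeline timetable. This is a minimal model assuming a four-stage pipe (fetch, decode, execute, write); the stage names follow the report's description, and the scheduling rule is the standard ideal-pipeline one.

```python
# Ideal 4-stage pipeline timetable: instruction `instr` enters fetch at
# cycle == instr, and occupies stage (cycle - instr) thereafter.
STAGES = ["fetch", "decode", "execute", "write"]

def stage_at(instr: int, cycle: int):
    # Returns which stage the instruction is in at a given cycle,
    # or None if it has not started yet or has already retired.
    s = cycle - instr
    return STAGES[s] if 0 <= s < len(STAGES) else None

# At cycle 2, instruction 0 executes while instruction 1 decodes:
print(stage_at(0, 2), stage_at(1, 2))  # execute decode
```

The timetable makes the overlap concrete: every stage of the pipe can hold a different instruction in the same clock cycle.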

Superscalar v/s super-pipeline
• A simple pipelined system performs one pipeline stage per clock cycle.
• A super-pipelined system is capable of performing two pipeline stages per clock cycle.
• A superscalar system performs one pipeline stage per clock cycle in each of its parallel pipelines.
• In a super-pipelined system, each sub-stage takes half a clock cycle to finish its work.
• In a superscalar system, each pipeline stage can execute two instructions.
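The bullet points above can be condensed into one formula. This is my own framing, a sketch for the ideal, stall-free case: parallel pipelines and sub-stage splitting both multiply the steady-state completion rate.

```python
# Steady-state instruction completions per clock cycle (ideal case):
# each parallel pipeline retires one instruction per clock, and splitting
# a stage into k half-cycle sub-stages lets k stage-steps finish per clock.

def completions_per_cycle(pipelines: int, substages_per_stage: int) -> int:
    return pipelines * substages_per_stage

print(completions_per_cycle(1, 1))  # simple pipeline: 1
print(completions_per_cycle(1, 2))  # super-pipeline: 2
print(completions_per_cycle(2, 1))  # 2-way superscalar: 2
```

Note the symmetry: a super-pipeline gains throughput by cutting the clock period in half, while a superscalar design gains it by duplicating the pipeline; both double the ideal rate relative to a simple pipeline.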

Conclusion
This paper has covered the basics of pipelining, super pipelining, and superscalar systems, including their advantages and disadvantages. It has also compared the three systems, and based on that analysis they are ranked by instruction execution speed as follows:
1. Super scalar
2. Super pipelining
3. Pipelining
These ranks follow from the fact that a superscalar processor executes two instructions per clock cycle, whereas super pipelining executes one and a half instructions per clock cycle and pipelining executes one instruction per clock cycle.
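The ranking can be verified with a short worked example using the instruction rates quoted above (2, 1.5, and 1 instructions per clock cycle); the instruction count and equal clock speed are my own illustrative assumptions.

```python
# Rank the three designs by total cycles for a fixed workload,
# using the per-clock instruction rates given in the conclusion.
N = 1_000_000  # instructions (illustrative workload)
rates = {"superscalar": 2.0, "super pipelining": 1.5, "pipelining": 1.0}

cycles = {name: N / ipc for name, ipc in rates.items()}
ranking = sorted(cycles, key=cycles.get)  # fewest cycles first

print(ranking)
```

Sorting by cycle count reproduces the ranking given above, with the superscalar design finishing the workload in half the cycles of the simple pipeline.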

