
Parallel Computing Works!

By Geoffrey C. Fox

ISBN-10: 0080513514

ISBN-13: 9780080513515

ISBN-10: 1558602534

ISBN-13: 9781558602533

A clear demonstration of how parallel computers can be effectively applied
to large-scale scientific computations. This book shows how a variety of
applications in physics, biology, mathematics, and other sciences were
implemented on real parallel computers to produce new scientific results.
It investigates issues of fine-grained parallelism relevant for future
supercomputers, with particular emphasis on hypercube architecture.

The authors describe how they used an experimental approach to configure
various massively parallel machines, design and implement basic system
software, and develop algorithms for frequently used mathematical
computations. They also devise performance models, measure the performance
characteristics of several computers, and create a high-performance
computing facility based exclusively on parallel computers. By addressing
all the issues involved in scientific problem solving, Parallel Computing
Works! provides valuable insight into computational science for large-scale
parallel architectures. For those in the sciences, the findings demonstrate
the usefulness of an important experimental tool. Anyone in supercomputing
and related computational fields will gain a new perspective on the potential
contributions of parallelism. Includes over 30 full-color illustrations.


Best design & architecture books

Chip Multiprocessor Architecture: Techniques to Improve Throughput and Latency

Chip multiprocessors, also known as multi-core microprocessors or CMPs for short, are now the only way to build high-performance microprocessors, for a variety of reasons. Large uniprocessors are no longer scaling in performance, because it is only possible to extract a limited amount of parallelism from a typical instruction stream using conventional superscalar instruction issue techniques.

Principles of Data Conversion System Design by Behzad Razavi

This advanced text and reference covers the design and implementation of integrated circuits for analog-to-digital and digital-to-analog conversion. It begins with basic concepts and systematically leads the reader to advanced topics, describing design issues and techniques at both the circuit and system level.

A VLSI Architecture for Concurrent Data Structures by William J. Dally

Concurrent data structures simplify the development of concurrent programs by encapsulating commonly used mechanisms for synchronization and communication into data structures. This thesis develops a notation for describing concurrent data structures, presents examples of concurrent data structures, and describes an architecture to support concurrent data structures.

Extra info for Parallel computing works!

Example text

Another important system during this period was the Alliant [Karplus:87a, pp. 35-44]. The initial model featured up to eight vector processors, each of moderate performance. But when used simultaneously, they provided performance equivalent to a sizable fraction of a CRAY processor. A unique feature at the time was a Fortran compiler that was quite good at automatic vectorization and also reasonably good at parallelization. These compiler features, coupled with its shared memory, made this system relatively easy to use and to achieve reasonably good performance.

There are some new Japanese vector supercomputers with a small number of processors (but a large number of instruction pipelines) that have peak speeds of over 20 GFLOPS. Finally, the vector computers continued to become faster and to have more processors. For example, the CRAY Y-MP C-90 that was introduced in 1992 has sixteen processors and a peak speed of 16 GFLOPS. By 1992, parallel computers were substantially faster. As was noted above, the Intel Paragon has a peak speed of 300 GFLOPS.

Further, C++ allows one to define more general data structures than the Fortran array; correspondingly, pC++ supports general parallel collections. Other languages that have seen some use include Linda [Gelernter:89a], [Ahuja:86a], and Strand [Foster:90a]. Linda has been particularly successful, especially as a coordination language allowing one to link the many individual components of what we term metaproblems, a concept developed throughout this book and particularly in Chapters 3 and 18.
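Linda's coordination style works by having components exchange tuples through a shared tuple space, typically via operations written out() (deposit a tuple) and in() (withdraw a matching tuple). The sketch below is only a rough illustration of that idea under stated assumptions: the TupleSpace class, its methods, and the "task"/"result" tags are hypothetical and do not reproduce the actual C-Linda API or pC++ collections.

```cpp
// Toy tuple-space sketch in the spirit of Linda's out()/in() coordination
// primitives. Hypothetical names and types; not the C-Linda API.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <string>
#include <vector>

struct Tuple {
    std::string tag;   // e.g. "task" or "result"
    int value;
};

class TupleSpace {
    std::vector<Tuple> space_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    // out(): deposit a tuple for any component to pick up later.
    void out(const Tuple& t) {
        { std::lock_guard<std::mutex> lk(m_); space_.push_back(t); }
        cv_.notify_all();
    }
    // in(): block until a tuple with a matching tag exists, then remove and return it.
    Tuple in(const std::string& tag) {
        std::unique_lock<std::mutex> lk(m_);
        for (;;) {
            for (auto it = space_.begin(); it != space_.end(); ++it) {
                if (it->tag == tag) { Tuple t = *it; space_.erase(it); return t; }
            }
            cv_.wait(lk);
        }
    }
};

int main() {
    TupleSpace ts;
    ts.out({"task", 21});               // one component posts a unit of work
    Tuple t = ts.in("task");            // another component claims it
    ts.out({"result", t.value * 2});    // and deposits its result
    std::cout << ts.in("result").value << "\n";   // prints 42
}
```

In a real Linda system the producing and consuming components would run as separate processes, and matching would be done on tuple contents rather than a single tag field; the point of the sketch is only how a shared tuple space decouples the components of a metaproblem from one another.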



by Edward
4.2

Rated 4.74 of 5 – based on 47 votes