POLARIS GROUP FOCUSES ON AUTOMATIC PROGRAM PARALLELIZATION          07.12.96
by Alan Beck, managing editor                                        HPCwire
=============================================================================

  West Lafayette, Ind. -- Dedicated to implementing fully automatic
parallelization of programs, Polaris, a joint project by the University of
Illinois and Purdue University, targets both high-performance parallel
computers with global address space and shared-memory multiprocessors. To
obtain a broader understanding of Polaris' capabilities and directions,
HPCwire interviewed one of its principal investigators, Rudolf Eigenmann, a
professor at Purdue's School of Electrical and Computer Engineering.
Following are selected excerpts from that discussion.

--------------------------

HPCwire: Please give us a brief overview of Polaris' parallel compiler.

EIGENMANN: "Polaris is a source-to-source restructurer. It takes sequential
Fortran programs and identifies parallelism, mainly on a loop basis. It
scans the program and tries to identify whether each loop in the source
text is independent, i.e. whether there is any overlap in the data accessed
by different iterations. If this proves not to be the case, the back-end
compiler, which generates code, will be able to assign different iterations
to different processors."
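[Editor's note: the Fortran fragment below is an illustrative sketch, not
code taken from Polaris or from the benchmark suites mentioned in this
article; it shows the kind of loop-level independence question Eigenmann
describes.]

      SUBROUTINE DEMO(A, B, C, N)
      INTEGER N, I
      REAL A(N), B(N), C(N)

C     Independent: iteration I writes only A(I) and reads only B(I)
C     and C(I), so the data accesses of different iterations never
C     overlap and the iterations could be assigned to different
C     processors.
      DO 10 I = 1, N
         A(I) = B(I) + C(I)
 10   CONTINUE

C     Not independent: iteration I reads A(I-1), which iteration I-1
C     wrote, so different iterations access the same locations and the
C     loop carries a dependence.
      DO 20 I = 2, N
         A(I) = A(I-1) + B(I)
 20   CONTINUE
      END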
HPCwire: Does that mean it's a coarse- or fine-grained approach?

EIGENMANN: "Coarse. We don't do instruction-level parallelization like some
compilers. We detect parallelism at the higher loop level. This is
complementary to fine-grained parallelization, which is usually done in
what we refer to as the back-end compilers, i.e. the code-generating
compilers."

HPCwire: Exactly how effective is that strategy?

EIGENMANN: "For the end user who knows nothing about parallel processing,
who just wants to buy a parallel processor and run programs on it,
parallelizing compilers are not yet at a stage where they can support this
successfully in all cases. In our study of several benchmark suites,
Polaris does a reasonable job of detecting parallelism about half the time.
This is impressive, considering that when we first started the project we
found the state-of-the-art compilers that were available could do the job
only about 20 percent of the time."

HPCwire: How would you compare Polaris to what's being done by the SUIF
group?

EIGENMANN: "In a sense, SUIF and Polaris are pursuing similar goals. Some
might say we're competitors, but we're basically looking at some of the
same issues and finding independent solutions. As we do, they try to find
high-level parallelism. But their background is more in fine-grained
aspects; that's the direction they've come from. Our background is a little
stronger in coarse-grained parallelization.

"SUIF has worked very hard on interprocedural analysis, technology
originally developed at Rice. This is an important issue, since compilers
need all possible information for optimizing code. Interprocedural analysis
allows the compiler to achieve a more global view by gathering information
from the entire program to do its optimization. SUIF has relatively
advanced techniques for this kind of analysis. Although Polaris does not do
the same kind of thing, our techniques allow us to generate code -- and our
measurements support this contention -- that is as good as or better than
SUIF's.

"First, we do partial subroutine inline expansion: that means we replace
subroutine calls with the entire subroutine that's called. We do this
starting at the lowest level of the call tree. Then we implement a simpler
form of interprocedural analysis than SUIF, called symbolic expression
propagation. From then on, we work only within procedures."

[An illustrative sketch of these two steps appears at the end of this
article.]

HPCwire: Is Polaris a finished product?

EIGENMANN: "It is finished as a university product: it exists; it generates
code; we can run substantial programs through it. For example, we've run
the entire Perfect benchmark suite and some of the so-called Grand
Challenge applications. However, although it is almost an
'industrial-strength' product, it would not be a finished product in an
industrial sense."

HPCwire: How close is Polaris to meeting its commercial potential?

EIGENMANN: "It is almost ready. We've seen a number of commercial products
struggling with bugs similar to Polaris' -- so its reliability is
comparable to that of many commercial products. Nevertheless, it's not
actually our intention to turn it into an industrial product. I see Polaris
technology transferred to industry via vendors of parallelizing compilers,
independent software vendors or computer manufacturers. They would learn
about these techniques, discuss them with us and incorporate them in their
own products."

HPCwire: Does Polaris presently have affiliations with any vendors?

EIGENMANN: "We have a strong link with Kuck & Associates Inc. of Champaign,
Illinois. They're an established, independent vendor of parallelizing
compiler technology. Their KAP compiler now runs on SMPs from Digital, IBM,
Sun, SGI, and on Windows NT platforms. They have already transferred much
Polaris technology into KAP."

HPCwire: Will there ever be a fully automatic parallel compiler?

EIGENMANN: "I could answer yes or no. If I say yes, I mean it's possible --
but the performance the user gets will not be the best possible from a
given machine. However, if optimal performance is needed, parallelizing
compilers are just starting points. After the basic parallelism has been
recognized, users wanting top performance must invest a little more
effort."

---------------------

More information is available on the Polaris Web site:
http://www.csrd.uiuc.edu/polaris/polaris.html.
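[Editor's note: the Fortran sketch below, again illustrative rather than
taken from Polaris, makes the two steps Eigenmann describes -- partial
subroutine inline expansion and symbolic expression propagation -- more
concrete. The subroutine names and the code are hypothetical.]

      SUBROUTINE SWEEP(A, N)
      INTEGER N, K
      REAL A(2*N)
C     The caller fixes the offset used inside the loop.
      K = N
      CALL COPY(A, N, K)
      END

      SUBROUTINE COPY(A, N, K)
      INTEGER N, K, I
      REAL A(*)
C     Looking at COPY alone, the compiler cannot tell whether the
C     write to A(I) and the read of A(I + K) ever touch the same
C     element in different iterations (they would if K were small,
C     e.g. K = 1).
      DO 10 I = 1, N
         A(I) = A(I + K) * 0.5
 10   CONTINUE
      END

C     Inlining COPY into SWEEP -- starting, as Eigenmann says, from
C     the lowest level of the call tree -- exposes the loop body to
C     the caller, and propagating the symbolic fact K = N shows that
C     the written range A(1..N) and the read range A(N+1..2*N) never
C     overlap, so the loop's iterations are independent.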