Using parallel tools on the SDSC IBM DataStar

Outline: DataStar overview, HPM, Perf, IPM, VAMPIR, TotalView.

DataStar Overview
- P655 (8-way, 16 GB): 176 nodes
- P655+ (8-way, 32 GB): 96 nodes
- P690 (32-way, 64 GB): 2 nodes
- P690 (32-way, 128 GB): 4 nodes
- P690 (32-way, 256 GB): 2 nodes
- Total: 280 nodes, 2,432 processors

Batch/Interactive Computing
Batch job queues:
- Job queue manager: LoadLeveler (IBM tool); a job-script sketch follows this list
- Job queue scheduler: Catalina (SDSC internal tool)
- Job queue monitoring: various tools and commands
- Job accounting: job filter (SDSC internal Perl scripts)
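
As a quick check, the node counts above sum to 176 + 96 + 2 + 4 + 2 = 280 nodes, and (176 + 96) x 8 + 8 x 32 = 2,432 processors, matching the totals. The sketch below shows a minimal LoadLeveler command file and the standard submission commands; the class name, node counts, and limits are placeholders (assumptions), not values taken from the slides.

  # job.cmd -- minimal LoadLeveler command file (class name and limits are assumptions)
  #@ job_type         = parallel
  #@ class            = normal
  #@ node             = 2
  #@ tasks_per_node   = 8
  #@ wall_clock_limit = 00:30:00
  #@ network.MPI      = sn_all,not_shared,US
  #@ output           = job.$(jobid).out
  #@ error            = job.$(jobid).err
  #@ queue
  ./parpi

  # submit, monitor, and cancel with the standard LoadLeveler commands
  llsubmit job.cmd
  llq -u $USER
  llcancel <job_id>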

DataStar Access
Three login nodes, each tied to a usage mode (the division is not strict):
- dslogin.sdsc: production runs (P690, 32-way, 64 GB)
- dspoe.sdsc: test/debug runs (P655, 8-way, 16 GB)
- dsdirect.sdsc: special needs (P690, 32-way, 256 GB)

Test/Debug Runs (from dspoe)
[dspoe.sdsc :: P655, 8-way, 16 GB]
- Access to two queues: P655 nodes [shared] and P655 nodes [not shared]
- Job queues use the job filter + LoadLeveler only, so turnaround is very fast
- Special command-line submission is available as well as job scripts (see the sketch below)
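
A minimal sketch of a command-line test run launched directly from dspoe; the executable name is a placeholder, and the poe flags simply reuse the ones shown later in the VAMPIR example (the rmpool value is site-specific):

  # short test/debug run launched with poe, without writing a job script
  poe ./a.out -nodes 1 -tasks_per_node 8 -rmpool 1 -euilib us -euidevice sn_all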

Production Runs (from dslogin)
[dslogin.sdsc :: P690, 32-way, 64 GB]
- Data transfer, source editing, compilation, etc.
- Two queues: onto P655/P655+ nodes [not shared] and onto P690 nodes [shared]
- Job queues use the job filter + LoadLeveler + Catalina (slower updates)

Special Needs (from dsdirect)
[dsdirect.sdsc :: P690, 32-way, 256 GB]
- All visualization and post-run data analysis needs
- Shared node with 256 GB of memory
- Process accounting in place
- Fully interactive usage (run a.out directly); no job filter, no LoadLeveler, no Catalina
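
A minimal sketch of interactive use on dsdirect, assuming the IBM xlc compiler is on the path; the flags and file names are placeholders, not taken from the slides:

  # compile with symbols and optimization, then run a.out directly;
  # no job filter, LoadLeveler, or Catalina stands in the way on dsdirect
  xlc -O3 -g -o a.out analysis.c
  ./a.out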

IBM Hardware Performance Monitor (HPM)

What is Performance?
Where is time spent, and how is it spent?
- MIPS: millions of instructions per second
- MFLOPS: millions of floating-point operations per second
- Run time / CPU time

What is a Performance Monitor?
It provides detailed processor/system data.
- Processor monitors are typically a group of special-purpose registers that keep track of programmable events. The counts are non-intrusive, so they give an accurate measurement of processor events. Typical events counted: instructions, floating-point instructions, cache misses, etc.
- System-level monitors can be hardware or software and are intended to measure system activity. Examples: a bus monitor measures memory traffic and can analyze cache-coherency issues in a multiprocessor system; a network monitor measures network traffic and can analyze web traffic internally and externally.

Hardware Counter Motivations
The goal is to understand the execution behavior of application code.
- Why not use software? Strengths: simple, GUI interface. Weaknesses: large overhead, intrusive, higher-level abstraction and simplicity.
- How about using a simulator? Strengths: control, low-level detail, accuracy. Weaknesses: limits on code size, difficult to implement, time-consuming to run.
- When should we use hardware counters directly? When software tools and simulators are unavailable or insufficient. Strengths: non-intrusive, instruction-level analysis, moderate control, very accurate, low overhead. Weaknesses: not typically reusable, requires OS kernel support.

Counter Access Projects
- Ptools/PMAPI project: a common standard API for industry, supported by IBM, Sun, SGI, Compaq, etc.
- PAPI project: a standard, portable application programming interface, available through a module, that can access hardware counter information.
- HPM Toolkit: easy to use, does not affect code performance, uses the hardware counters, and is designed specifically for IBM SPs and Power processors.

Problem Set
Should we collect all events all the time? No; that is unnecessary and wasteful. Gather only the counts you need, for example: cycles, committed instructions, loads, stores, L1/L2 misses, L1/L2 stores, committed floating-point instructions, branches, branch misses, TLB misses, cache misses.

IBM HPM Toolkit (High Performance Monitor)
Developed for performance measurement of applications running on IBM Power3 systems. It consists of:
- a counting utility (hpmcount),
- an instrumentation library (libhpm), and
- a graphical user interface (hpmviz).
It requires the PMAPI kernel extensions to be loaded, works on IBM 630 and 604e processors, and is based on IBM's low-level PMAPI interface. (A usage sketch for hpmcount follows below.)
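
A minimal sketch of hpmcount usage, assuming it is on the default path; the poe flags reuse the ones shown later for VAMPIR, and the exact parallel invocation may differ by installation, so treat it as an assumption:

  # serial: wrap the executable to get a counter summary at the end of the run
  hpmcount ./a.out

  # parallel: wrap each task under poe so every MPI task reports its own counters
  poe hpmcount ./a.out -nodes 1 -tasks_per_node 8 -rmpool 1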

hpmcount
A utility for performance measurement of an application. Extra logic in the processor counts specific events and is updated every cycle. hpmcount provides a summary at the end of execution:
- wall-clock time
- resource usage statistics
- hardware performance counter information
- derived hardware metrics
It works for serial and parallel codes and gives performance numbers for each task.

Timers
time usually reports three metrics (a small example follows this section):
- User time: the time your code used on the CPU, also called CPU time. Total time in user mode = cycles / processor frequency.
- System time: the time your code spent running kernel code (doing I/O, writing to disk, printing to the screen, etc.). It is worth minimizing system time by speeding up disk I/O, doing I/O in parallel, or doing I/O in the background while the CPU computes in the foreground.
- Wall-clock time: total execution time, i.e. user time plus system time plus the time spent idle (waiting for resources).
In parallel performance tuning, only wall-clock time counts. Interprocessor communication consumes a significant amount of execution time that user/system time usually does not account for, so rely on wall-clock time for all the time consumed by the job.

Floating-Point Measures
- PM_FPU0_CMPL (FPU 0 instructions): the POWER3 processor has two floating-point units (FPUs) that operate in parallel, and each FPU can start a new instruction every cycle. This counter shows the number of floating-point instructions executed by the first FPU.
- PM_FPU1_CMPL (FPU 1 instructions): the number of floating-point instructions (add, multiply, subtract, divide, multiply-add) processed by the second FPU.
- PM_EXEC_FMA (FMAs executed): the number of floating-point multiply-add (FMA) instructions. An FMA computes x = s * a + b, so two floating-point operations are done in one instruction. The compiler generates this instruction as often as possible to speed up the program, but sometimes additional manual optimization is needed to replace separate multiply and add instructions with one FMA.
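
A small illustration of the three timer metrics using the standard time command; the executable name and poe flags are placeholders:

  # prints wall-clock ("real"), user, and system time for one run
  time ./a.out

  # the same wrapper works around a parallel launch inside a job script
  time poe ./a.out -nodes 1 -tasks_per_node 8 -rmpool 1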

Derived Metrics
- Total flop rate (floating-point instructions + FMA rate): the most often quoted performance index, the MFlops rate. The peak performance of the POWER3-II processor is 1500 MFlops (375 MHz clock x 2 FPUs x 2 flops per FMA instruction). Many applications do not reach more than 10 percent of this peak.
- Average number of loads per TLB miss: the ratio PM_LD_CMPL / PM_TLB_MISS. After a TLB miss has been processed, fast access to a new page of data is possible. Small values of this metric indicate poor data locality; redesigning the program's data structures may yield significant performance improvements.
- Computational intensity: the ratio of floating-point operations to load and store operations.
(The formulas are written out below; the perf utility, which reports these numbers, is described in the next section.)
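
Written out explicitly (the counter combination for total flops is a sketch based on the descriptions above, with each FMA contributing one extra operation):

  \text{peak}_{\text{POWER3-II}} = 375\ \text{MHz} \times 2\ \text{FPUs} \times 2\ \tfrac{\text{flops}}{\text{FMA}} = 1500\ \text{MFlops}

  \text{total flops} \approx \text{PM\_FPU0\_CMPL} + \text{PM\_FPU1\_CMPL} + \text{PM\_EXEC\_FMA}

  \text{computational intensity} = \frac{\text{total flops}}{\text{loads} + \text{stores}}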

Trace Libraries
The IBM trace libraries are a set of libraries used for MPI performance instrumentation. They can measure the amount of time spent in each routine, which functions were used, and how many bytes were sent. To use a library:
- Compile your code with the -g flag.
- Relink your object files; for example, for mpitrace: -L/usr/local/apps/mpitrace -lmpiprof
- Make sure your code exits through MPI_Finalize.
The run produces mpi_profile.<task_number> output files.

Perf
The perf utility provides a succinct code performance report to help you get the most out of HPM output or MPI_Trace output. It can help make your case for an allocation request. To use perf:
- Add /usr/local/apps/perf/perf to your path, or alias it in your .cshrc file: alias perf '/usr/local/apps/perf/perf !*'
- Run it in the same directory as your output files: perf hpm_out > perf_summary

Example of perf_summary:
Computation performance measured for all 4 CPUs:
  Execution wall clock time    = 11.469 seconds
  Total FPU arithmetic results = 5.381e+09 (31.2% of these were FMAs)
  Aggregate flop rate          = 0.619 Gflop/s
  Average flop rate per cpu    = 154.860 Mflop/s = 2.6% of 'peak'
Communication wall clock time for 4 CPUs:
  max = 0.019 seconds, min = 0.000 seconds
  Communication took 0.17% of total wall clock time.
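
As a worked check of the sample output above, counting each FMA as two operations reproduces the reported rates to within rounding; the last line simply infers the per-processor peak implied by the quoted percentage:

  \frac{5.381 \times 10^{9} \times (1 + 0.312)}{11.469\ \text{s}} \approx 0.62\ \text{Gflop/s} \quad (\text{reported: } 0.619)

  \frac{0.619\ \text{Gflop/s}}{4\ \text{cpus}} \approx 154.8\ \text{Mflop/s per cpu} \quad (\text{reported: } 154.860)

  \frac{154.86\ \text{Mflop/s}}{0.026} \approx 6\ \text{Gflop/s implied per-processor peak}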

IPM – Integrated Performance Monitoring
Integrated Performance Monitoring (IPM) is a tool that lets users obtain a concise summary of the performance and communication characteristics of their codes. IPM is invoked by the user at the time a job is run. By default, a short, text-based summary of the code's performance is provided, along with a more detailed Web page. More details at: sdsc /us/tools/top/ipm/

VAMPIR – Visualization and Analysis of MPI Programs

It is much harder to debug and tune parallel programs than sequential ones, and the reasons for performance problems, in particular, are notoriously hard to find. Suppose the performance is disappointing: initially, the programmer has no idea where, and for what, to look to identify the bottleneck. VAMPIR converts trace information into a variety of graphical views, e.g. timeline displays showing state changes and communication, communication statistics indicating data volumes and transmission rates, and more.

Setting the Vampir path and variables:
  setenv PAL_LICENSEFILE /usr/local/apps/vampir/etc/license.dat
  set path = ($path /usr/local/apps/vampir/bin)
Compile:
  mpcc -o parpi -L/usr/local/apps/vampirtrace/lib -lVT -lm -lld parpi.c
Run:
  poe parpi -nodes 1 -tasks_per_node 4 -rmpool 1 -euilib us -euidevice sn_all
Calling Vampir:
  vampir parpi.stf

Discovering TotalView
The Etnus TotalView debugger is a powerful, sophisticated, and programmable tool that lets you debug, analyze, and tune the performance of complex serial, multiprocessor, and multithreaded programs. If you want to jump in and get started quickly, go to the Etnus website and select TotalView's "Getting Started" area.
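
A minimal sketch of launching TotalView on DataStar; the poe arguments reuse the flags from the VAMPIR example, and the exact invocation can differ by TotalView and POE version, so treat this as an assumption rather than the documented recipe:

  # compile with symbols so TotalView can show source
  mpcc -g -o parpi parpi.c

  # serial or threaded code: debug the executable directly
  totalview ./a.out

  # MPI code under POE: start TotalView on poe and pass the program after -a
  totalview poe -a ./parpi -nodes 1 -tasks_per_node 4 -rmpool 1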
