Operating Systems and OS-Related Tuning (CS5226 Lecture Outline)


CS5226 (2002): Operating System & Database Performance Tuning
Xiaofang Zhou, School of Computing, NUS
Office: S16-08-20   Email: zhouxf@comp.nus.sg   URL: itee.uq.au/~zxf

Outline
- Part 1: Operating systems and DBMS
- Part 2: OS-related tuning

Operating System
The operating system is an interface between hardware and other software, supporting:
- Processes and threads
- Paging, buffering and IO scheduling
- Multi-tasking
- The file system
- Other utilities such as timing, networking and performance monitoring (a small monitoring sketch follows below)
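As a minimal illustration of the monitoring utilities mentioned above, the sketch below uses Python's standard resource module (Unix only) to read the page-fault and context-switch counters that the OS keeps for the current process; these are the kinds of figures one watches when tuning paging and scheduling later in this module.

    # OS-level performance monitoring from a program (Unix-only):
    # the kernel tracks page faults and context switches per process,
    # and Python's standard `resource` module exposes those counters.
    import resource

    usage = resource.getrusage(resource.RUSAGE_SELF)

    print("minor page faults (served from memory):", usage.ru_minflt)
    print("major page faults (required disk IO):  ", usage.ru_majflt)
    print("voluntary context switches:            ", usage.ru_nvcsw)
    print("involuntary context switches:          ", usage.ru_nivcsw)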


Scheduling
- Process vs thread
- Scheduling is based on time-slicing, IO, priority, etc., and is different from transaction scheduling.
- The cost of context switching: when is switching desirable, and when is it not?
- The administrator can set priorities for processes/threads:
  Case 1: the DBMS runs at a lower priority
  Case 2: different transactions run at different priorities
  Case 3: online transactions run at higher priority than offline transactions

Priority Inversion
- Let priorities be T1 > T2s > T3. If low-priority T3 holds a lock that T1 needs, the medium-priority T2s keep pre-empting T3, so the highest-priority T1 is effectively blocked by T2s.
- A solution: priority inheritance (T3 temporarily runs at T1's priority while it holds the lock).

Database Buffers
- Three levels: application buffers, DBMS buffers, OS buffers.
- An application can have its own in-memory buffers (e.g., variables in the program; cursors).
- A logical read/write is issued to the DBMS when data needs to be read from or written to the database.
- A physical read/write is issued by the DBMS, according to its page replacement algorithm, and such a request is passed to the OS (a minimal buffer-pool sketch follows below).
- The OS may initiate IO operations to support the virtual memory that the DBMS buffer is built on.
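To make the logical-versus-physical read distinction concrete, here is a minimal buffer-pool sketch with LRU replacement; it is an illustration only, not the implementation of any particular DBMS, and the pool size, page contents and simulated disk read are made-up assumptions.

    # Minimal buffer-pool sketch: a logical read is served from the in-memory
    # pool when the page is cached; on a miss, a "physical" read is passed down
    # to the storage layer and the least recently used page is evicted.
    from collections import OrderedDict

    class BufferPool:
        def __init__(self, capacity_pages, read_page_from_disk):
            self.capacity = capacity_pages
            self.read_page_from_disk = read_page_from_disk  # callable: page_id -> contents
            self.pool = OrderedDict()          # page_id -> page contents, in LRU order
            self.logical_reads = 0
            self.physical_reads = 0

        def read(self, page_id):
            self.logical_reads += 1
            if page_id in self.pool:           # hit: serve from memory
                self.pool.move_to_end(page_id)
                return self.pool[page_id]
            self.physical_reads += 1           # miss: request goes down to the OS/disk
            page = self.read_page_from_disk(page_id)
            self.pool[page_id] = page
            if len(self.pool) > self.capacity:
                self.pool.popitem(last=False)  # evict the least recently used page
            return page

    # Example: a 3-page pool over a fake disk that just labels each page.
    pool = BufferPool(3, lambda pid: f"contents of page {pid}")
    for pid in [1, 2, 3, 1, 4, 1, 2]:
        pool.read(pid)
    print(pool.logical_reads, "logical reads,", pool.physical_reads, "physical reads")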

Database Buffer Size
- Buffer too small: the hit ratio is too small
  hit ratio = (logical accesses - physical accesses) / (logical accesses)
- Buffer too large: paging
- Recommended strategy: monitor the hit ratio and increase the buffer size until the hit ratio flattens out (sketched below). If there is still paging, buy memory.

Buffer Size – Data
Settings:
  employees(ssnum, name, lat, long, hundreds1, hundreds2);
  clustered index c on employees(lat); (unused)
- 10 distinct values of lat and long; 100 distinct values of hundreds1 and hundreds2
- 20000000 rows (630 MB); warm buffer
- Dual Xeon (550 MHz, 512 KB cache), 1 GB RAM, internal RAID controller from Adaptec (80 MB), 4 x 18 GB drives (10000 RPM), Windows 2000

Buffer Size – Queries
- Scan query: select sum(long) from employees;
- Multipoint query: select * from employees where lat = ?;
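A minimal sketch of the recommended strategy: compute the hit ratio from logical and physical access counts and keep growing the buffer only while the ratio still improves noticeably. The measurement tuples and the 1-percentage-point "flattening" threshold are invented for illustration; in practice the counts come from the DBMS's monitoring facilities.

    # Sketch of the "grow the buffer until the hit ratio flattens" strategy.
    # The (buffer_mb, logical, physical) tuples below are invented numbers;
    # real values would come from the DBMS's performance counters.

    def hit_ratio(logical, physical):
        # hit ratio = (logical accesses - physical accesses) / logical accesses
        return (logical - physical) / logical

    measurements = [
        (64,  1_000_000, 400_000),
        (128, 1_000_000, 220_000),
        (256, 1_000_000, 120_000),
        (512, 1_000_000, 115_000),   # barely better than 256 MB: the ratio has flattened
    ]

    FLATTEN_THRESHOLD = 0.01         # stop when an increase buys < 1 percentage point
    previous = None
    for buffer_mb, logical, physical in measurements:
        ratio = hit_ratio(logical, physical)
        print(f"buffer {buffer_mb:4d} MB -> hit ratio {ratio:.3f}")
        if previous is not None and ratio - previous < FLATTEN_THRESHOLD:
            print("hit ratio has flattened; keep the buffer at the previous size")
            break
        previous = ratio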


Database Buffer Size – Results
SQL Server 7 on Windows 2000
- Scan query: LRU (least recently used) does badly when the table spills to disk, as Stonebraker observed 20 years ago.
- Multipoint query: throughput increases with the buffer size until all data is accessed from RAM.

It's All About $$$
- Buffering is a trade-off between speed and cost.
- An 18 GB disk offers 170 random accesses per second for $300, so the access cost is A = $1.76 per access per second.
- RAM costs C = $0.5/MB; the page size is B = 8 KB; page p is accessed every I = 200 s.
- Keep page p in memory?
  Yes: cost C/1024 * B = $0.0039 for 8 KB of RAM
  No: cost A/I = $0.0088
- So p should be kept in memory until its access interval reaches ??? s (a worked calculation follows below).

Multiprogramming Levels
- More concurrent users means better utilization of CPU cycles (and other system resources), but also a risk of excessive page swapping and more lock conflicts.
- So how many, exactly? It depends on the transaction profiles; experiment to find the best value, and note that this parameter may change when application patterns change.
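Below is a worked version of the trade-off arithmetic above, using only the figures given on the slide (A = $1.76 per access per second, C = $0.5 per MB, B = 8 KB, I = 200 s); the break-even interval of roughly 450 s is derived from those figures alone.

    # Worked version of the RAM-versus-disk cost comparison from the slide.
    A = 300 / 170       # disk cost per (access per second): about $1.76
    C = 0.5             # RAM cost in $ per MB
    B = 8               # page size in KB
    I = 200             # page p is accessed once every I seconds

    ram_cost  = C / 1024 * B    # $0.0039 to keep the 8 KB page in RAM
    disk_cost = A / I           # $0.0088 to fetch it from disk every 200 s

    print(f"keep in RAM:    ${ram_cost:.4f}")
    print(f"read from disk: ${disk_cost:.4f}")

    # Break-even: keeping p in memory pays off as long as A / I > C / 1024 * B,
    # i.e. while the access interval I is below A / (C / 1024 * B) seconds.
    break_even = A / (C / 1024 * B)
    print(f"break-even access interval: about {break_even:.0f} s")   # roughly 450 s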

Disk Layout and Access
- Larger disk allocation chunks improve write performance, at the cost of disk utilisation.
- Setting the disk usage factor: low when expecting updates/inserts; higher for scan-type queries.
- Use prefetching for non-random accesses.

Scan Performance – Data
Settings:
  lineitem(L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY, L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS, L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT);
- 600 000 rows; lineitem tuples are ~160 bytes long
- Cold buffer
- Dual Xeon (550 MHz, 512 KB cache), 1 GB RAM, internal RAID controller from Adaptec (80 MB), 4 x 18 GB drives (10000 RPM), Windows 2000

Scan Performance – Queries
- select avg(l_discount) from lineitem;
  (A rough page-count estimate for this scan at different usage factors is sketched below.)
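The sketch below gives a rough page-count estimate for the scan query at different usage factors, using the row count and tuple size from the settings above; the 8 KB page size is an assumption carried over from the buffer-size discussion. It illustrates why scan throughput improves with a higher usage factor: fuller pages mean fewer pages to read.

    # Rough page-count estimate for a full scan of lineitem at different usage
    # factors: a fuller page holds more tuples, so the scan touches fewer pages.
    import math

    PAGE_SIZE_BYTES = 8 * 1024   # assumed page size (8 KB, as used earlier)
    TUPLE_BYTES     = 160        # lineitem tuples are ~160 bytes long
    NUM_ROWS        = 600_000

    for usage_factor in (0.5, 0.7, 0.9, 1.0):
        tuples_per_page = math.floor(PAGE_SIZE_BYTES * usage_factor / TUPLE_BYTES)
        pages_to_scan   = math.ceil(NUM_ROWS / tuples_per_page)
        print(f"usage factor {usage_factor:.1f}: "
              f"{tuples_per_page:3d} tuples/page, {pages_to_scan:6d} pages to scan")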

Usage Factor
DB2 UDB v7.1 on Windows 2000
- The usage factor is the percentage of a page used by tuples and auxiliary data structures (the rest is reserved for future use).
- Scan throughput increases with the usage factor.

Prefetching
DB2 UDB v7.1 on Windows 2000
- Throughput increases up to a certain point as the prefetching size increases.

Summary
In this module, we have covered:
- A review of the OS from the DBMS perspective
- How to optimise OS-related parameters and options: threads, buffers, and the file system
Next: tuning the hardware
