Massive Stochastic Testing of SQL

Don Slutz

August 12, 1998

Technical Report
MSR-TR-98-21

Microsoft Research
Microsoft Corporation
One Microsoft Way
Redmond, WA 98052

A shorter form of this paper was accepted for VLDB 98 and appears in the Proceedings of the 24th VLDB Conference, New York, USA, 1998.

Massive Stochastic Testing of SQL
Don Slutz
Microsoft Research, 301 Howard St. #830, SF, CA 94105
dslutz@Microsoft.com

Abstract: Deterministic testing of SQL database systems is human intensive and cannot adequately cover the SQL input domain. A system (RAGS) was built to stochastically generate valid SQL statements 1 million times faster than a human and execute them. This paper describes RAGS and the results from turning it loose on several commercial SQL systems.

1. Testing SQL is Hard

Achieving good test coverage of commercial SQL database systems is very hard. The input domain (all SQL statements, from any number of users, combined with all states of the database) is gigantic. It is also difficult to verify output for positive tests because the semantics of SQL are complicated.

Software engineering technology exists to predictably improve quality ([1] for example). The techniques involve a software development process including unit tests and final system validation tests (to verify the absence of bugs). This process requires a substantial investment, so commercial SQL vendors with tight schedules tend to use a more ad hoc process. The most popular method is rapid development followed by test-repair cycles.

SQL test groups mainly use deterministic testing. They compose test scripts of SQL statements that cover individual features of the language and commonly used combinations of features. The scripts are continuously extended for added features and to verify bug fixes. If the test-repair cycles uncover particularly buggy areas, more detailed scripts for those areas are added. Typical SQL test libraries contain tens of thousands of statements and require an estimated person-hour per statement to compose. (SQL testing procedures and bug counts are proprietary, so there is little public information.) These test libraries cover an important, but minute, fraction of the SQL input domain.

Stochastic testing can be used to increase the coverage. Stochastic techniques are used to mix several deterministic streams for concurrency tests and for scaled-up load testing, but large increases in test coverage must come from automating the generation of tests. This paper describes a method to rapidly create a very large number of SQL statements without human intervention. The SQL statements are generated stochastically (or 'randomly'), which provides the speed as well as wider coverage of the input domain. The challenge is to distribute the SQL statements in useful regions of the input domain. If the distribution is adequate, stochastic testing has the advantage that the quality of the tests improves as the test size increases [2].

A system called RAGS (Random Generation of SQL) was built to explore automated testing. RAGS is currently used by the Microsoft SQL Server [3] testing group. This paper describes how RAGS works and how it evolved into a more effective tool. We focus on positive tests in this paper, but mention other kinds of tests in the summary.

Figure 1 illustrates the test coverage problem. Customers use the hexagon, bugs are in the oval, and the test libraries cover the shaded circle.

[Figure 1: SQL test library coverage should include at least region 2. Unfortunately, we don't know the actual region boundaries.]
2. The RAGS System

The RAGS approach is to:

- Greatly enlarge the shaded circle in Figure 1 by stochastic SQL statement generation.
- Make all aspects of the generated SQL statements configurable.
- Experiment with configurations to maximize the bug detection rate.

It is important to make all aspects of the generated SQL statements configurable so one can enlarge the reachable portion of the input domain and have a better shot at covering regions 1, 2, and 3 and beyond. RAGS is an experiment to see how effective a million-fold increase in the size of a SQL test library can be. RAGS was used on a number of commercial SQL systems that run on Microsoft NT. As RAGS was built and used, it was necessary to add several features to increase the automation beyond SQL statement generation.

RAGS can be used to drive one SQL system and look for observable errors such as lost connections, compiler errors, execution errors, and system crashes. The output of successful Select statements can be saved for regression testing. If a SQL Select executes without errors, there is no easy method to validate the returned values by observing only the values, the query, and the database state. Our approach is to simply execute the same query on multiple vendors' DBMSs and then compare the results. First the number of rows returned is compared, and then, to avoid sorts, a special checksum over all the column values in all the rows is compared. The comparison method only works for SQL statements that will execute on more than one vendor's database, such as entry level ANSI 92 compliant SQL [4].

The RAGS system is shown in Figure 2 below. A configuration file identifies one or more SQL systems and the SQL features to generate. The configuration file has several parameters for stochastic SQL generation: the frequency of occurrence of different statements (Select, Insert), limits (maximum number of tables in a join, maximum entries in a Group by list), and frequency of occurrence of features (outer join, Where, Group by). It also has execution parameters such as the maximum number of rows to fetch per query.

[Figure 2: The RAGS system. Several instances can be executed concurrently to represent multiple users. The diagram shows a configuration file (DBMS1=SQLA, DBMS2=SQLB, DBMS3=SQLC; 1000000 statements; 65% Select, 35% Update; max tables in join=5; max subquery depth=6; max Group by list=4) feeding the RAGS program, which reads the configuration, connects to the DBMSs, reads the table schemas, and then loops: generate a SQL statement stochastically, execute it on DBMS A, B, and C, compare the results, and record errors. At the end it prints a report, e.g. "Stmt 443: Error 11 on DBMSA; Stmt 3551: Wrong results on DBMSC; Stmt 6525: Error 18 on DBMSB".]

The first step in running experiments on multiple systems is to ensure the databases are the same. They all must have identical schemas and identical data in their tables. It is not necessary that they have the same set of indexes or other physical attributes. When the RAGS program is started, it first reads the configuration file. It then uses ODBC [5] to connect to the first DBMS and read the schema information. When the schema information is read, columns with data types that RAGS does not support (or is configured not to support) are ignored. A table is ignored if all of its columns are ignored. RAGS will optionally list the schema it is working against, and the configuration settings, in its output report file.

RAGS loops to generate SQL statements and optionally execute them. Statement generation is described in the next section. If a statement is executed on more than one system, the execution results are compared. For modification statements, the number of affected rows is compared. For Select statements, the number of result rows and a special checksum are compared. The checksum is formed by building a checksum for each row and then combining the row checksums in an order-independent manner. For numeric fields, the precision is reduced to a configurable value before the checksum is formed. This avoids the problem of 1.999999 differing from 2.
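The paper does not give the exact checksum algorithm, so the following is only a minimal Python sketch of the idea, with assumed helper names and an assumed precision setting: round numeric values first, hash each row, and combine the row hashes with a commutative operation so that row order does not matter.

    import hashlib
    from decimal import Decimal

    NUMERIC_PRECISION = 4  # assumed: significant digits kept before hashing

    def normalize(value):
        """Reduce numeric precision so that 1.999999 and 2 hash identically."""
        if isinstance(value, (int, float, Decimal)):
            return f"{float(value):.{NUMERIC_PRECISION}g}"
        return "NULL" if value is None else str(value)

    def row_checksum(row):
        """Checksum one result row from its normalized column values."""
        text = "|".join(normalize(v) for v in row)
        return int.from_bytes(hashlib.md5(text.encode()).digest()[:8], "big")

    def result_checksum(rows):
        """Combine row checksums order-independently (sum modulo 2**64), so two
        systems returning the same rows in different orders still agree."""
        return sum(row_checksum(r) for r in rows) % (1 << 64)

Because addition is commutative, result_checksum is insensitive to row order, which is what lets the comparison avoid sorting the result sets.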
Columns with datetime types are problematic when comparing results, so we added a configuration parameter to preclude the use of timestamp functions in the generated SQL statements.

At the end of the run, RAGS produces a report containing the errors found and statistics for the run. If several RAGS programs are run concurrently, a utility produces a reduced report that lists each distinct error code together with a count of its occurrences. RAGS can also be configured to run on just one database and include the row counts and checksums in the output report. The outputs from different runs at different times can then be compared to find discrepancies. This is useful for regression tests.

A typical SQL Select statement generated by RAGS is shown in Figure 3.

    SELECT T0.au_id, LTRIM(('cuIfzce' + T0.au_id))
    FROM authors T0
    WHERE NOT (NOT ((T0.au_fname) != ANY (
        SELECT '}E'
        FROM discounts T1, authors T2
        WHERE NOT (('|K') >= 'tKpc|AV'))))
    GROUP BY T0.au_id, T0.au_id

Figure 3. Select statement generated by RAGS.

The target database pertains to a publishing company and has tables for authors, books, stores, and sales. The stochastic nature of the statement is most evident in the unusual character constants and in unnecessary constructs such as NOT NOT. RAGS also builds From lists, expressions, scalar functions, and subqueries stochastically, but they appear less bizarre. Correlation names are used for tables to allow correlated column references. Constants are randomly generated (both length and content, for character strings). RAGS uses parentheses liberally, mostly to aid human recognition.

A somewhat larger RAGS-generated SQL Select statement is shown in Figure 4 below. This type of statement is sufficiently complex that it is not likely to be found in a deterministic test library. One might wonder how often stochastic Selects actually return rows of data. Our experience has been that about 50% of the non-error-causing Select statements return rows while the remainder return no rows. The maximum join size, together with the size of the database, influences how many rows are returned. For large databases, it is important to time out long-running statements, or to keep the maximum join size and the maximum subquery depth quite low, to preclude excessive run times or encountering system limits.

    SELECT TOP 2 '6o', ((-1)%(-(-(T1.qty))))/(-(-2)), (2)+(T0.min_lvl), '_^p:'
    FROM jobs T0, sales T1
    WHERE ( ( (T0.job_id) IS NOT NULL )
      OR (('Feb 24 7014 10:47pm') = (
        SELECT DISTINCT 'Jun 2 5147 6:17am'
        FROM employee T2, titleauthor T3, jobs T4
        WHERE ( T2.job_lvl BETWEEN (3) AND
            (((-(T4.max_lvl))%((3)-(-5)))-(((-1)/(T4.job_id))%((3)%(4)))) )
          OR (EXISTS (
            SELECT DISTINCT TOP 7 MIN(LTRIM('Hqz6=14I')), LOWER(MIN(T5.country)),
              MAX(REVERSE((LTRIM(REVERSE(T5.city)) + LOWER('Iirl')))), MIN(T5.city)
            FROM publishers T5
            WHERE EXISTS (
              SELECT (T6.country + T6.country), 'rW', LTRIM(MIN(T6.pub_id))
              FROM publishers T6, roysched T7
              WHERE ( ( NOT (NOT (('2NPTd7s') IN ((LTRIM('DYQ=a') + '4Jk`}A3oB'),
                ('xFWU' + '6I6J:U~b'), 'Q ...

Figure 4. A larger Select statement generated by RAGS (truncated here).

3. Stochastic SQL Statement Generation

Consider a simple Select statement with the predicate (salary > 10000) AND (department = 'sales'), and the parse tree for the statement shown below in Figure 5.

[Figure 5: Parse tree for the Select statement.]

Given the parse tree, you could imagine a program that would walk the tree and print out the SQL text. RAGS is like that program, except that it builds the tree stochastically as it walks it. RAGS takes a couple of detours, such as generating the tables in the From list and the columns in the Group by list, before it begins outputting text, but it essentially makes one pass over the tree.
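To make the build-as-you-walk idea concrete, here is a minimal Python sketch. The grammar, probabilities, schema, and function names are illustrative assumptions, not RAGS internals: each node of the imaginary parse tree makes its random choice at the moment it is visited and emits SQL text in a single pass.

    import random

    TABLES = {"authors": ["au_id", "au_fname"], "sales": ["qty"]}  # toy schema

    def gen_select(rng, depth=0):
        """Walk an imaginary parse tree top-down, choosing stochastically at
        each node and emitting SQL text as the walk proceeds."""
        table = rng.choice(list(TABLES))
        cols = ", ".join(rng.choice(TABLES[table]) for _ in range(rng.randint(1, 2)))
        sql = f"SELECT {cols} FROM {table} T{depth}"
        if rng.random() < 0.5:                     # optional Where clause
            sql += f" WHERE {gen_predicate(rng, table, depth)}"
        return sql

    def gen_predicate(rng, table, depth):
        col = rng.choice(TABLES[table])
        if depth < 2 and rng.random() < 0.3:       # occasional nested subquery
            return f"{col} IN ({gen_select(rng, depth + 1)})"
        return f"({col}) IS NOT NULL"

    print(gen_select(random.Random(42)))           # the seed makes this repeatable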
RAGS follows the semantic rules of SQL by carrying state information and directives on its walk down the tree, and the results of stochastic outcomes on its walk back up. For example, the datatype of an expression, and whether the walk is inside an aggregate expression, are carried down the expression tree, while the name of a column reference that comprises an entire expression is carried up the tree. RAGS makes all its stochastic decisions at the last possible moment. When it needs to make a decision, such as selecting an element for an expression, it first analyzes the current state and directives and assembles a set of choices. Then it makes a stochastic selection from among the set of choices and updates the state information for subsequent calls.

On a 200 MHz Pentium with a moderate configuration (maximum join size 3, maximum subquery depth 5, maximum expression depth 6), RAGS can generate 833 SQL statements per second. The SQL statements average 12 lines and 550 bytes of text each. In one hour RAGS can generate 3 million different SQL statements of average size, more than are contained in the combined test libraries of all SQL vendors. For limits testing, the sizes of expressions and predicates can be increased so that typical expressions or predicates contain hundreds of terms and span several pages. The lengths of Select, Group by, and Order by lists can also be increased. For concurrency tests, the maximum join size, subquery depth, and expression depth are set very low, and the relative occurrence of Insert, Update, and Delete statements is increased, to get lots of small, simple data manipulation statements.

Each SQL statement can be characterized by the configuration settings, the database schema, and the seed of the random number generator when statement generation began. For the same configuration and schema, if the same seed is supplied on a different run, the same statement will be generated. If RAGS is configured to execute the SQL statements and an error is found, RAGS records the seed for the statement along with the error message. If the statement runs without error, an output signature, consisting of the number of rows modified or fetched, together with the checksum for Selects, can optionally be recorded for later comparison with other systems or for regression testing. The starting seed for an entire RAGS run can be specified in the configuration file, allowing a given run to be repeated; it is not necessary to save the SQL text in a test library. If the starting seed is not specified, RAGS obtains a seed by hashing the time of day.
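A minimal sketch of this seed-based reproducibility, continuing the Python example above (the per-statement seed derivation and the execute callable are assumptions; RAGS records the generator's actual seed state at the start of each statement):

    import random

    def run_batch(start_seed, count, execute):
        """Generate `count` statements and record only the seed, not the SQL
        text, of any statement whose execution fails. `execute` runs one
        statement on the target DBMS and raises on error."""
        failures = []
        for i in range(count):
            seed = start_seed + i                   # assumed per-statement seed scheme
            stmt = gen_select(random.Random(seed))  # gen_select from the sketch above
            try:
                execute(stmt)
            except Exception as err:
                failures.append((seed, str(err)))
        return failures

    def reproduce(seed):
        """Same configuration, schema, and seed give back the identical statement."""
        return gen_select(random.Random(seed))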
4. Testing Experiences

The evolution of RAGS has been driven by the needs of its users. Many of the changes made do not deal with stochastic SQL statement generation, but instead increase the degree of automation. Since the goal of RAGS is to run many orders of magnitude more statements through the system, if even a small percentage of the statements needs human attention, the total effort can be prohibitive. As each change to RAGS was made, more items were added to the configuration file, which now contains almost 100 items. This section contains examples of RAGS output and of enhancements made to improve usability.

Tests were performed on systems from different vendors in order to validate output. To avoid specific vendor comparisons, vendor identities are suppressed. The systems are referred to as SYSA, SYSB, etc., and the mapping to actual vendors is changed from test to test. Additionally, the total size of the test database is less than 4KB, sufficiently small to preclude meaningful comparisons.

Multi-user Test

The output of a multi-user test on one system is shown in Figure 6 below. This was a concurrency test and used 10 clients, each generating and executing small Select, Insert, Update, and Delete statements on a single database system.

    Item                              Number
    ----------------------------------------
    Number of clients                     10
    SQL statements per client           2500
    Total number of statements         25000
    Statements per transaction        1 to 9
    Execution results:
      Number of Selects                 6401
      Number of Inserts                 6165
      Number of Deletes                 6221
      Number of Updates                 6213
      Number of Transactions            4997
    Executions with no errors          21518
    Errors - expected:
      Deadlock victim                   2715
      Arithmetic error                   553
      Character value too long           196
    Errors - not expected (bugs):
      Error code 1                        13
      Error code 2                         5

Figure 6. RAGS output for 10 clients executing 2500 statements each on one system.

Each of the 10 clients executed 2500 SQL statements in transactions that contained an average of 5 statements. Error counts are the number of times ODBC calls returned an error code. The errors are partitioned into errors expected from stochastic statements and those that are likely bugs. Expected errors include deadlocks (with multiple users), arithmetic errors (overflow and divide by zero in random expressions), and character values too long (too many concatenations of fixed-length values for insert or update). 86.1% of the statements executed without error, 13.8% had expected errors, and 0.07% indicated possible bugs (18 occurrences of 2 different error codes).

Comparison Tests

The results of a comparison test between four systems are shown in Figure 7. The same 2000 random Select statements were run on each system. The numbers in each column reflect how that system's output compared to the output of the other three systems. The Comparison Case column enumerates the cases, with the dark circle representing the system of interest. The shaded ovals contain identical outputs. For example, the 15 in row 4 under system SYSB means that, for 15 statements, SYSB got the same output as one other system while the remaining two systems each got different outputs (or got errors). The first row has the counts where all 4 systems agreed, and the second row has the counts where the system agreed with two of the other systems. Counts in the fifth row, where the specified system got one answer and the other three systems all agreed on a different answer, are a great place to look for a bug. The fifth row also represents the situation where one system simply has different behavior than the others (see the Validating Output section below).

[Figure 7: Results of comparing the outputs of four database systems for 2000 Select statements. The numbers in row 5 indicate how many times one system got one result while the other three vendors all got a different result.]

Database Connections

RAGS uses ODBC to connect to a database, and it reads the schema information directly using the SQLTables and SQLColumns calls of the ODBC API. The connection parameters for each type of vendor were added to the configuration file. We found that each vendor required a different mix of configuration options, so these were added too. If a database connection is lost, RAGS tries to reconnect. If the reconnect fails, it assumes the server is down and aborts the run. If crashes occur, it is necessary to automate a rapid restart of the database server. The restarts were not done within RAGS; instead they were done in the macros that call RAGS to run the tests.

Transactions

When Insert, Update, and Delete statements were added to RAGS, the ability to generate transactions was also added. The maximum length of a transaction can be configured. If the maximum is set greater than 0, RAGS picks the number of statements in each transaction (not counting the Commit or Rollback) uniformly between 1 and the maximum. If the maximum is set to 0, the Auto Commit feature of ODBC is used and each statement is a separate, committed transaction. When transactions are enabled, the percentage of transactions that commit is configurable (the remaining transactions are rolled back). An interesting multi-user experiment is to set the percentage of commits to 0 and compare the database state before and after the runs; it should be the same, since all transactions are rolled back. (The database states are also compared at the end of the run.)
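A minimal Python sketch of this transaction policy (the parameter names and the gen_stmt/execute callables are assumptions):

    import random

    MAX_TXN_LEN = 9       # 0 means use ODBC auto-commit
    COMMIT_PERCENT = 50   # share of transactions committed; the rest roll back

    def run_transaction(rng, conn, gen_stmt, execute):
        """Run a uniformly sized group of statements as one transaction, then
        commit or roll back according to the configured percentage."""
        if MAX_TXN_LEN == 0:
            execute(conn, gen_stmt(rng))   # auto-commit: each statement commits
            return
        for _ in range(rng.randint(1, MAX_TXN_LEN)):
            execute(conn, gen_stmt(rng))
        if rng.random() * 100 < COMMIT_PERCENT:
            conn.commit()
        else:
            conn.rollback()   # COMMIT_PERCENT = 0 gives the pure-rollback experiment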
Automatic Statement Simplification

When a RAGS-generated statement caused an error, the debugging process was often difficult if the statement was long and complex, such as the example in Figure 4. It was discovered that the offending statement could usually be vastly simplified by hand. The simplification involved taking out as many elements of the statement as possible while preserving the raising of the original error message. Note that the simplification process does not produce a statement that is equivalent to the original SQL statement. The simplified statement merely produces the same error message (the error may not even be caused by the same bug, but that is unlikely). The simplification process itself was very tedious, so RAGS was extended to optionally simplify statements automatically.

The statement in Figure 4 was one that caused a compiler error in one of the database systems. The RAGS-simplified version of the statement is shown in Figure 8.

    SELECT TOP 2 '6o', -(-2), T0.min_lvl, '_^p:'
    FROM jobs T0, sales T1
    WHERE EXISTS (
      SELECT DISTINCT TOP 1 T1.ord_date, 'Jul 15 4792 4:16am'
      FROM discounts T2, discounts T3
      ORDER BY 2,1)

Figure 8: RAGS-simplified version of the statement in Figure 4. This statement causes the same error as the statement in Figure 4.

To simplify a statement, RAGS walks a parse tree for the statement and tries to remove terms in expressions and certain clauses (Where and Having). RAGS blanks out the expression or clause and re-executes the statement. If the same error occurs, it continues its walk. If a different error, or no error, occurs, it restores the blanked-out text and continues. We found this simplification algorithm to be very effective, so we did not extend it to be more complex. For example, RAGS does not attempt to remove elements in the Select, Group by, or Order by lists, and it never backtracks on its walk to retry simplifications in a different order.
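A minimal Python sketch of this blank-and-re-execute loop (the tree interface, with removable nodes exposing a mutable text span, is an assumption):

    def simplify(removable_nodes, to_sql, run, original_error):
        """One greedy pass over removable spans (expression terms, Where and
        Having clauses). Blank a span and re-execute; keep the blank only if
        the original error recurs, otherwise restore it. No backtracking."""
        for node in removable_nodes:       # hypothetical parse-tree node list
            saved, node.text = node.text, ""
            if run(to_sql()) != original_error:
                node.text = saved          # different error or success: restore
        return to_sql()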
Error Filtering

An early version of RAGS would echo every error message it encountered during statement execution to the output file. With long runs the output files became huge, because RAGS-caused errors such as divide by zero and arithmetic overflow were numerous. RAGS was changed to simply count these errors and report the error text only once. However, the bloated output file problem still occurred: when a configuration setting was found that caused errors from probable bugs, there would usually be many occurrences of the same errors in long runs. The solution was to extend the configuration file to include a list of error codes to ignore.

Validating Output

To validate the output of successful queries, the queries are executed on database systems from different vendors and the outputs are compared. Our first step was to add a configuration flag to RAGS to restrict the generated SQL to be compliant with ODBC Core SQL (a subset of entry level ANSI 92 SQL). It was found that the Core subset of SQL was too small and excluded many important features found in most SQL implementations, so features such as built-in functions were added.

As the ANSI subset we supported was enlarged, we quickly found several visible differences among vendors' ODBC and SQL implementations. Examples of differing behavior include: treating empty character strings as NULL, producing a NULL value by concatenating a character string with a NULL, and the number of rows returned from aggregating over an empty grouped table. Configuration options were added to RAGS to prevent construction of some of the SQL statements containing elements on which vendors had different behavior.

It was quite challenging to get a diverse set of SQL statements to run on several different systems. It eventually became necessary to specialize a RAGS run to the target SQL vendor, so the configuration file grew to specify the type of SQL database system. This actually increased the automation significantly, because we could quickly deal with problems without shrinking the domain of generated statements. For example, when execution by one vendor produced errors because of the default return types of scalar functions, we simply changed RAGS to cast the output to a different type for that vendor.

Visualization

When one wants to investigate the relationship between two metrics, such as the statement execution times of two systems, a set of sample pairs (execution times on both systems for a set of statements) is collected and analyzed. One might compute statistics such as the average ratio of the execution times on the two systems. RAGS presents an opportunity to scale up the size of such samples by several orders of magnitude. Not only does the scale-up allow one to better analyze the relationship mathematically, it also allows one to plot the sample points and visualize the relationship.

In one experiment the execution times for 931 Select statements were measured on systems SYSB and SYSD. The systems ran on different computers, and the RAGS client was local for one run and remote for the other (thus, this is not a fair comparison). The average execution time of a statement on SYSD was 2.52 times that on system SYSB. The execution time on SYSD varied between 1/6 and 13 times that of SYSB. More interestingly, Figure 9 below shows a scatter plot of the execution times on the two systems. One can see that system SYSB is faster, except for the band of points at the bottom of the figure.

[Figure 9: Relationship of execution times on two systems. The database is very small and the Selects are not complex.]

Another example, shown in Figure 10, compares the execution times of two releases of the same system. With a few exceptions, the v2 release of SYSC is a little faster for the smaller queries and about the same for the larger ones.

[Figure 10: Relationship of 990 Select statement execution times on two versions of the same system. Version v2 is about as fast as version v1.]
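A minimal sketch of how such a scatter plot can be produced (Python with matplotlib; the data layout is an assumption):

    import matplotlib.pyplot as plt

    def plot_times(pairs):
        """Scatter per-statement execution times for two systems.
        `pairs` holds one (sysb_seconds, sysd_seconds) tuple per statement."""
        xs, ys = zip(*pairs)
        lo, hi = min(xs + ys), max(xs + ys)
        plt.scatter(xs, ys, s=4)
        plt.plot([lo, hi], [lo, hi])   # y = x reference: points above ran slower on SYSD
        plt.xscale("log")              # times span orders of magnitude
        plt.yscale("log")
        plt.xlabel("SYSB execution time (s)")
        plt.ylabel("SYSD execution time (s)")
        plt.show()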
5. Extensions

RAGS is a project in progress. SQL coverage needs to be extended to more data types, more DDL, stored procedures, utilities, etc. Beyond that, there are several possibilities for future work in stochastic SQL testing. First is a study of the effectiveness of this experiment: measure code coverage under random SQL testing and investigate bugs that customers see but that were not discovered by RAGS. The input domain can be extended to negative testing (injecting random errors into the generated SQL statements). Robustness tests can be performed by stochastically generating a whole family of equivalent SQL statements and comparing their outputs. Equivalent statements are obtained by permuting some operator operands and lists (From and Group by) and by adding useless terms (ANDing in a TRUE term, or multiplying by a factor that evaluates to 1). Testing with equivalent statements has the important advantage of providing a method to validate the outputs.

In the performance area, the optimizer estimates of execution metrics, together with the measured execution metrics, can be compared for millions of SQL statements. Although we avoided vendor performance comparisons in this paper, a more structured experiment, using a large random workload, would be interesting.

6. Summary

RAGS is an experiment in massive stochastic testing of SQL systems. Its main contribution is the ability to generate entire SQL statements stochastically, since this enables good coverage of the SQL input domain as well as rapid test generation. The problem of validating output remains a tough issue. The use of database systems from different vendors for output validation proved to be extremely useful for the SQL common to many vendors. The downside is that the common SQL subset is relatively small and changes with each release. We also found the differences in NULL and character string handling, and in numeric type coercion in expressions, to be particularly problematic (these are also portability issues).

The outcome of our experiment is encouraging, since we found that RAGS can steadily generate errors in released SQL products. To make RAGS effective it was necessary to automate other steps of the testing process, notably positive result comparisons and statement simplification for error cases.

7. References

[1] B. Beizer, "Software Testing Techniques," Van Nostrand Reinhold, New York, Second Edition, 1990.
[2] P. Thevenod-Fosse and H. Waeselynck, "Statemate Applied to Statistical Software Testing," ISSTA '93, Proceedings of the 1993 International Symposium on Software Testing and Analysis, pp. 99-109.
[3] Microsoft SQL Server Version 6.5, http://www.microsoft.com/sql.
[4] ANSI X3.135-1992, American National Standard for Information Systems, Database Language SQL, November 1992.
[5] Microsoft ODBC 3.0 SDK and Programmer's Reference, Microsoft Press, February 1997.