From 5f2209bdcabba0507359cb4eb1d86293e9faf635 Mon Sep 17 00:00:00 2001 From: Oliver Kennedy Date: Tue, 3 Jan 2017 08:05:23 -0500 Subject: [PATCH] Deleting old text from checkpoints --- .../{checkpoint0.html => checkpoint0.erb} | 14 +- src/teaching/cse-562/2017sp/checkpoint1.erb | 11 + src/teaching/cse-562/2017sp/checkpoint1.html | 235 ------------------ src/teaching/cse-562/2017sp/checkpoint2.erb | 11 + src/teaching/cse-562/2017sp/checkpoint2.html | 193 -------------- src/teaching/cse-562/2017sp/checkpoint3.erb | 11 + src/teaching/cse-562/2017sp/checkpoint3.html | 209 ---------------- 7 files changed, 41 insertions(+), 643 deletions(-) rename src/teaching/cse-562/2017sp/{checkpoint0.html => checkpoint0.erb} (92%) create mode 100644 src/teaching/cse-562/2017sp/checkpoint1.erb delete mode 100644 src/teaching/cse-562/2017sp/checkpoint1.html create mode 100644 src/teaching/cse-562/2017sp/checkpoint2.erb delete mode 100644 src/teaching/cse-562/2017sp/checkpoint2.html create mode 100644 src/teaching/cse-562/2017sp/checkpoint3.erb delete mode 100644 src/teaching/cse-562/2017sp/checkpoint3.html diff --git a/src/teaching/cse-562/2017sp/checkpoint0.html b/src/teaching/cse-562/2017sp/checkpoint0.erb similarity index 92% rename from src/teaching/cse-562/2017sp/checkpoint0.html rename to src/teaching/cse-562/2017sp/checkpoint0.erb index 2e5d9fa6..7940b704 100644 --- a/src/teaching/cse-562/2017sp/checkpoint0.html +++ b/src/teaching/cse-562/2017sp/checkpoint0.erb @@ -1,6 +1,9 @@ +--- +title: CSE-562; Project 0 +---

The Submission System

@@ -93,7 +96,6 @@ A snapshot of your repository will be taken, and your entire group will receive
  • Validate the output.
  • If these steps fail for any reason, your submission will receive a 0 and you will need to resubmit. A log of the testing process will be made available on the submission page so that you may correct any errors that occur. -

    Project: Hello World!

-Create a class edu.buffalo.cse562.Main with a main function that prints out the following (with no newlines) and exits. -
    We, the members of our team, agree that we will not submit any code that we have not written ourselves, share our code with anyone outside of our group, or use code that we have not written ourselves as a reference.
    -Make sure your class compiles, push your (committed) repository, and hit Submit. + +

    Project: A Database Hello World!

+TBD \ No newline at end of file diff --git a/src/teaching/cse-562/2017sp/checkpoint1.erb b/src/teaching/cse-562/2017sp/checkpoint1.erb new file mode 100644 index 00000000..2ee1a5a9 --- /dev/null +++ b/src/teaching/cse-562/2017sp/checkpoint1.erb @@ -0,0 +1,11 @@ + diff --git a/src/teaching/cse-562/2017sp/checkpoint1.html b/src/teaching/cse-562/2017sp/checkpoint1.html deleted file mode 100644 index 2fe0b9b9..00000000 --- a/src/teaching/cse-562/2017sp/checkpoint1.html +++ /dev/null @@ -1,235 +0,0 @@ - -In this project, you will implement a simple SQL query evaluator with support for Select, Project, Join, Bag Union, and Aggregate operations.  You will receive a set of data files and schema information, and you will be expected to evaluate multiple SELECT queries over those data files. - -Your code is expected to evaluate the SELECT statements on the provided data, and produce output in a standardized form. Your code will be evaluated for both correctness and performance (in comparison to a naive evaluator based on iterators and nested-loop joins). -

    Parsing SQL

-A parser converts a human-readable string into a structured representation of the program (or query) that the string describes. A fork of the open-source SQL parser JSqlParser will be provided for your use.  The JAR may be downloaded from -

    http://odin.cse.buffalo.edu/resources/jsqlparser/jsqlparser.jar

    -And documentation for the fork is available at -

    http://odin.cse.buffalo.edu/resources/jsqlparser

-You are not required to use this parser (i.e., you may write your own if you like). However, we will be testing your code on SQL that is guaranteed to parse with JSqlParser. - -Basic use of the parser requires a java.io.Reader or java.io.InputStream from which the data to be parsed can be read (for example, a java.io.FileReader). Let's assume you've created one already (of either type) and called it inputFile. -
    CCJSqlParser parser = new CCJSqlParser(inputFile);
    -Statement statement;
    -while((statement = parser.Statement()) != null){
    -  // `statement` now has one of the several 
    -  // implementations of the Statement interface
    -}
    -// End-of-file.  Exit!
-At this point, you'll need to figure out what kind of statement you're dealing with. For this project, we'll be working with Select and CreateTable. There are two ways to do this. JSqlParser defines a Visitor style interface that you can use if you're familiar with the pattern. However, my preference is for the simpler and lighter-weight instanceof check: -
    if(statement instanceof Select) {
    -  Select selectStatement = (Select)statement;
    -  // handle the select
    -} else if(statement instanceof CreateTable) {
    -  // and so forth
    -}
    -

    Example

    - -

    Expressions

    -JSQLParser includes an object called Expression that represents a primitive-valued expression parse tree.  In addition to the parser, we are providing a collection of classes for manipulating and evaluating Expressions.  The JAR may be downloaded from -

    http://odin.cse.buffalo.edu/resources/expressionlib/expression.jar

    -

     Documentation for the library is available at

    -

    http://odin.cse.buffalo.edu/resources/expressionlib

    -

To use the Eval class, you will need to define a method for dereferencing Column objects.  For example, if I have a Map<String, Integer> called tupleSchema that contains my tuple schema, and an ArrayList<LeafValue> called tuple that contains the tuple I am currently evaluating, I might write:

    - -
public LeafValue eval(Column x){
    -  int colID = tupleSchema.get(x.getName());
    -  return tuple.get(colID);
    -}
    -

    After doing this, you can use Eval.eval() to evaluate any expression in the context of tuple.

    - -
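Putting the pieces together, evaluating a query's WHERE clause over the current tuple might look like the following sketch.  The exact shape of the expressionlib API (an Eval base class with an overridable eval(Column) method and an eval(Expression) entry point) is paraphrased from the description above; consult the library documentation for the authoritative signatures.

Eval evaluator = new Eval(){
  // dereference a column to its value in the current tuple
  public LeafValue eval(Column x){
    int colID = tupleSchema.get(x.getName());
    return tuple.get(colID);
  }
};
// e.g., whereClause is the Expression returned by plainSelect.getWhere()
LeafValue result = evaluator.eval(whereClause);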

    Source Data

-Because you are implementing a query evaluator and not a full database engine, there will not be any tables -- at least not in the traditional sense of persistent objects that can be updated and modified. Instead, you will be given a table schema and a CSV file containing the instance. To keep things simple, we will use the CREATE TABLE statement to define a relation's schema. You do not need to allocate any resources for the table in reaction to a CREATE TABLE statement -- simply save the schema that you are given for later use. The SQL types (and their corresponding Java types) that will be used in this project are as follows:

SQL Type | Java Equivalent
string   | StringValue
varchar  | StringValue
char     | StringValue
int      | LongValue
decimal  | DoubleValue
date     | DateValue
-In addition to the schema, you will be given a data directory containing multiple data files whose names correspond to the table names given in the CREATE TABLE statements. For example, let's say that you see the following statement in your query file: -
    CREATE TABLE R(A int, B int, C int);
    -That means that the data directory contains a data file called 'R.dat' that might look like this: -
    1|1|5
    -1|2|6
    -2|3|7
-Each line of text (see java.io.BufferedReader.readLine()) corresponds to one row of data. Fields within a row are delimited by vertical pipe '|' characters.  Integers and floats are stored in a form recognized by Java's Long.parseLong() and Double.parseDouble() methods. Dates are stored in YYYY-MM-DD form, where YYYY is the 4-digit year, MM is the 2-digit month number, and DD is the 2-digit date. Strings are stored unescaped and unquoted and are guaranteed to contain no vertical pipe characters. -
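As a concrete (purely illustrative) sketch, reading the R.dat file above with java.io.BufferedReader might look like this; the hard-coded schema handling stands in for whatever schema structures you build from the CREATE TABLE statements:

BufferedReader in = new BufferedReader(new FileReader("data/R.dat"));
String line;
while((line = in.readLine()) != null){
  // split on the vertical pipe; it must be escaped in the regex
  String[] fields = line.split("\\|");
  long a = Long.parseLong(fields[0]);
  long b = Long.parseLong(fields[1]);
  long c = Long.parseLong(fields[2]);
  // ... wrap the values as LeafValues and hand the tuple to your operators
}
in.close();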

    Queries

    -Your code is expected to support both aggregate and non-aggregate queries with the following features.  Keep in mind that this is only a minimum requirement. - -

    Output

-Your code is expected to output query results in the same format as the input data: - -

    Example Queries and Data

    -These are only examples.  Your code will be expected to handle these queries, as well as others. - -Sanity Check Examples: A thorough suite of test cases covering most simple query features. - -Example NBA Benchmark Queries: Some very simple queries to get you started. - -The TPC-H Benchmark: This benchmark consists of two parts: DBGen (generates the data) and a specification document (defines the queries).  A nice summary of the TPC-H queries can be found here. - -The SQL implementation used by TPC-H differs in a few subtle ways from the implementation used by JSqlParser.  Minor structural rewrites to the queries in the specification document will be required: - -Queries that conform to the specifications for this project include: Q1, Q3, Q5, Q6, Q8*, Q9, Q10, Q12*, Q14*, Q15*, Q19* (Asterisks mean that the query doesn't meet the spec as written, but can easily be rewritten into one that does) - -

    Code Submission

    -As before, all .java files in the src directory at the root of your repository will be compiled (and linked against JSQLParser). Also as before, the class -
       edu.buffalo.cse562.Main
    -
    -will be invoked with the following arguments: - -For example: -
    $> ls data
    -R.dat
    -S.dat
    -T.dat
    -$> cat R.dat
    -1|1|5
    -1|2|6
    -2|3|7
    -$> cat query.sql
    -CREATE TABLE R(A int, B int, C int)
    -SELECT B, C FROM R WHERE A = 1
    -$> java -cp build:jsqlparser.jar edu.buffalo.cse562.Main --data data query.sql
    -1|5
    -2|6
    -
-Once again, the data directory contains files named [table name].dat, where [table name] is the name used in a CREATE TABLE statement. Notice that the effect of a CREATE TABLE statement is not to create a new file, but simply to link the given schema to an existing .dat file. These files use the vertical pipe ('|') as a field delimiter, and newlines ('\n') as record delimiters. - -The testing environment is configured with the Sun JDK version 1.8. -

    Grading

-Your code will be subjected to a sequence of test cases, most of which are provided in the project code (though different data will be used). Two evaluation phases will be performed. Phase 1 will be performed on small datasets (< 100 rows per input table), and each run will be graded on a per-test-case basis as follows: - -Phase 2 will evaluate your code on more complex queries that create large intermediate states (100+ MB). Queries for which your submission does not produce correct output, or which your submission takes over 1 minute to process, will receive an F. Otherwise, your submission will be graded on the runtime of each test as follows: - -Your overall project grade will be a weighted average of the individual components.  It will be possible to earn extra credit by beating the reference implementation. - -Additionally, there will be a per-query leader-board for all groups who manage to beat the reference implementation. diff --git a/src/teaching/cse-562/2017sp/checkpoint2.erb b/src/teaching/cse-562/2017sp/checkpoint2.erb new file mode 100644 index 00000000..290c3b53 --- /dev/null +++ b/src/teaching/cse-562/2017sp/checkpoint2.erb @@ -0,0 +1,11 @@ + diff --git a/src/teaching/cse-562/2017sp/checkpoint2.html b/src/teaching/cse-562/2017sp/checkpoint2.html deleted file mode 100644 index 94cc48f2..00000000 --- a/src/teaching/cse-562/2017sp/checkpoint2.html +++ /dev/null @@ -1,193 +0,0 @@ - -

    This project is, in effect, a more rigorous form of Project 1. The requirements are identical: We give you a query and some data, you evaluate the query on the data and give us a response as quickly as possible.

    -

First, this means that we'll be expecting a more feature-complete submission. Your code will be evaluated on more queries from the TPC-H benchmark, which exercises a broader range of SQL features than the Project 1 test cases did.

    -

    Second, performance constraints will be tighter. The reference implementation for this project has been improved over that of Project 1, meaning that you'll be expected to perform more efficiently, and to handle data that does not fit into main memory.

    - - -
    - -

    Join Ordering

    -

The order in which you join tables together is incredibly important, and can change the runtime of your query by multiple orders of magnitude.  However, picking between different join orderings requires statistics about the data, something that won't really be feasible until the next project.  Instead, here's a present for those of you paying attention.  The tables in each FROM clause are ordered so that you will get our recommended join order by building a left-deep plan going in-order of the relation list (something that many of you are doing already), and (for hybrid hash joins) using the left-hand-side relation to build your hash table.

    - -

    Blocking Operators and Memory

    -

Blocking operators (e.g., joins other than Merge Join, the Sort operator, etc.) are generally blocking because they need to materialize instances of a relation. For half of this project, you will not have enough memory available to materialize a full relation, to say nothing of join results. To successfully process these queries, you will need to implement out-of-core equivalents of these operators: at least one External Join (e.g., Block-Nested-Loop, Hash, or Sort/Merge Join) and an out-of-core Sort algorithm (e.g., External Sort).

    -

For your reference, the evaluation machines have 2GB of memory.  In Phase 2, Java will be configured for 100 MB of heap space (see the command line argument -Xmx).  To work with such a small amount of heap space, you will need to manually invoke Java's garbage collector by calling System.gc().  How frequently you do this is up to you.  The more you wait, the greater the chance that you'll run out of memory.  The reference implementation calls it in the Two-Phase sort operator, every time it finishes flushing a file out to disk. 
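To make the idea concrete, here is a minimal sketch of a two-phase external sort.  Everything here is illustrative: Tuple, child.next(), emit(), RunIterator, and the writeRun/openRun helpers stand in for your own operator interface and serialization code; they are not part of any provided library.

// Phase 1: fill a memory-sized buffer, sort it, and spill it as a run.
List<File> runs = new ArrayList<File>();
List<Tuple> buffer = new ArrayList<Tuple>();
Tuple t;
while((t = child.next()) != null){
  buffer.add(t);
  if(buffer.size() >= MAX_IN_MEMORY_TUPLES){
    Collections.sort(buffer, comparator);
    runs.add(writeRun(buffer));   // serialize the run to a swap file
    buffer.clear();
    System.gc();                  // as noted above: after each flush
  }
}
Collections.sort(buffer, comparator);
runs.add(writeRun(buffer));       // don't forget the final partial run

// Phase 2: merge the runs with a priority queue of run iterators,
// repeatedly emitting the smallest head-of-run tuple.
PriorityQueue<RunIterator> heap =
    new PriorityQueue<RunIterator>(runs.size(), headComparator);
for(File run : runs){ heap.add(openRun(run)); }
while(!heap.isEmpty()){
  RunIterator r = heap.poll();
  emit(r.next());                 // pass the tuple to the parent operator
  if(r.hasNext()){ heap.add(r); } // re-insert if the run isn't exhausted
}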

    - -

    Query Rewriting

    -

    In Project 1, you were encouraged to parse SQL into a relational algebra tree.  Project 2 is where that design choice begins to pay off.  We've discussed expression equivalences in relational algebra, and identified several that are always good (e.g., pushing down selection operators). The reference implementation uses some simple recursion to identify patterns of expressions that can be optimized and rewrite them.  For example, if I wanted to define a new HashJoin operator, I might go through and replace every qualifying Selection operator sitting on top of a CrossProduct operator with a HashJoin.

    - -
    if(o instanceof Selection){
    -  Selection s = (Selection)o;
    -  if(s.getChild() instanceof CrossProduct){
    -    CrossProduct prod = 
    -       (CrossProduct)s.getChild();
-    // findJoinCondition and remainingConditions are hypothetical
-    // helpers that you would write yourself: find a good join
-    // condition in the predicate of s, and collect the rest.
-    Expression join_cond = findJoinCondition(s);
-    Expression rest = remainingConditions(s);
    -    return new Selection(
    -      rest, 
    -      new HashJoin(
    -        join_cond, 
    -        prod.getLHS(), 
    -        prod.getRHS()
    -      )
    -    );
    -  }
    -}
    -return o;
    -

    The reference implementation has a function similar to this snippet of code, and applies the function to every node in the relational algebra tree.

    -
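A recursive driver for rewrites like this might look like the sketch below; Operator, getChildren(), setChild(), and rewriteOne() are stand-ins for whatever relational algebra classes you defined in Project 1, not part of any provided library.

Operator rewriteAll(Operator o){
  // rewrite the subtrees first, then try to rewrite this node
  for(int i = 0; i < o.getChildren().size(); i++){
    o.setChild(i, rewriteAll(o.getChildren().get(i)));
  }
  return rewriteOne(o);  // e.g., the Selection/CrossProduct rule above
}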

    Because selection can be decomposed, you may find it useful to have a piece of code that can split AndExpressions into a list of conjunctive terms:

    - -
    List<Expression> splitAndClauses(Expression e) 
    -{
    -  List<Expression> ret = 
-     new ArrayList<Expression>();
    -  if(e instanceof AndExpression){
    -    AndExpression a = (AndExpression)e;
    -    ret.addAll(
    -      splitAndClauses(a.getLeftExpression())
    -    );
    -    ret.addAll(
    -      splitAndClauses(a.getRightExpression())
    -    );
    -  } else {
    -    ret.add(e);
    -  }
-  return ret;
-}
    -

    Interface

    -

    Your code will be evaluated in exactly the same way as Project 1.  Your code will be presented with a 1GB (SF 1) TPC-H dataset.  Grading will proceed in two phases.  In the first phase, you will have an unlimited amount of memory, but very tight time constraints.  In the second phase, you will have slightly looser time constraints, but will be limited to 100 MB of memory, and presented with either a 1GB or a 200 MB (SF 0.2) dataset.

    -

    As before, your code will be invoked with the data directory and the relevant SQL files. An additional parameter will be used in Phase 2:

    - - -
    java -cp build:jsqlparser.jar 
    -     -Xmx100m      # Heap limit (Phase 2 only)
    -     edu.buffalo.cse562.Main 
    -     --data [data] 
    -     --swap [swap] 
    -     [sqlfile1] [sqlfile2] ...
    -This example uses the following directories and files: - -

    Grading

    -

Your code will be subjected to a sequence of test cases and evaluated on speed and correctness.  Note that, unlike Project 1, you will neither receive a warning about, nor partial credit for, out-of-order query results if the outermost query includes an ORDER BY clause.

    -

    Phase 1 (big queries) will be graded on a TPC-H SF 1 dataset (1 GB of raw text data).  Phase 2 (limited memory) will be graded on either a TPC-H SF 1 or SF 0.2 (200 MB of raw text data) dataset as listed in the chart below.  Grades are assigned based on per-query thresholds:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
TPC-H Query | Grade | Phase 1 Runtime | Phase 2 Runtime | Phase 2 Scaling Factor
1           | A     | 45 s            | 1 min           | SF = 0.2
1           | B     | 67.5 s          | 2 min           | SF = 0.2
1           | C     | 90 s            | 3 min           | SF = 0.2
3           | A     | 45 s            | 40 s            | SF = 0.2
3           | B     | 90 s            | 80 s            | SF = 0.2
3           | C     | 120 s           | 120 s           | SF = 0.2
5           | A     | 45 s            | 70 s            | SF = 0.2
5           | B     | 90 s            | 140 s           | SF = 0.2
5           | C     | 120 s           | 210 s           | SF = 0.2
10          | A     | 45 s            | 2 min           | SF = 1
10          | B     | 67.5 s          | 4 min           | SF = 1
10          | C     | 90 s            | 6 min           | SF = 1
12          | A     | 45 s            | 1.5 min         | SF = 1
12          | B     | 67.5 s          | 3 min           | SF = 1
12          | C     | 90 s            | 4.5 min         | SF = 1
    diff --git a/src/teaching/cse-562/2017sp/checkpoint3.erb b/src/teaching/cse-562/2017sp/checkpoint3.erb new file mode 100644 index 00000000..3f5d22da --- /dev/null +++ b/src/teaching/cse-562/2017sp/checkpoint3.erb @@ -0,0 +1,11 @@ + diff --git a/src/teaching/cse-562/2017sp/checkpoint3.html b/src/teaching/cse-562/2017sp/checkpoint3.html deleted file mode 100644 index cc842871..00000000 --- a/src/teaching/cse-562/2017sp/checkpoint3.html +++ /dev/null @@ -1,209 +0,0 @@ - -

Once again, we will be tightening performance constraints.  You will be expected to complete queries in seconds, rather than tens of seconds as before.  This time, however, you will be given a few minutes alone with the data before we start timing you.

    -

    Concretely, you will be given a period of up to 5 minutes that we'll call the Load Phase.  During the load phase, you will have access to the data, as well as a database directory that will not be erased in between runs of your application.  Example uses for this time include building indexes or  gathering statistics about the data for use in cost-based estimation.

    -

    Additionally, CREATE TABLE statements are now annotated with PRIMARY KEY and FOREIGN KEY attributes.  You may hardcode index selections for the TPC-H benchmark based on your own experimentation.

    - - -
    - -

    BerkeleyDB

    -

For this project, you will get access to a new library: BerkeleyDB (Java Edition).  Don't let the name mislead you: BDB is not actually a full database system.  Rather, BDB implements the indexing and persistence layers of a database system.  Download BDB at:

    -

    http://odin.cse.buffalo.edu/resources/berkeleydb/berkeleydb.jar

    -

    The BerkeleyDB documentation is mirrored at:

    -

    http://odin.cse.buffalo.edu/resources/berkeleydb/

    -

    You can find a getting started guide at:

    -

    http://odin.cse.buffalo.edu/resources/berkeleydb/GettingStartedGuide

    -And the javadoc at: -

    http://odin.cse.buffalo.edu/resources/berkeleydb/java/

    -

BDB can be used in two ways: the Direct Persistence Layer, and the Base API.  The Direct Persistence Layer is easier to use at first, as it handles index management and serialization through Java annotations.  However, this ease comes at the cost of flexibility.  Especially if you plan to use secondary indexes, you may find it substantially easier to work with the Base API.  For this reason, this summary will focus on the Base API.

    - -

    Environments and Databases

    -

A relation or table is represented in BDB as a Database; Databases are grouped into units of storage called Environments.  The first thing that you should do in the pre-computation phase is to create an Environment and one or more Databases.  Be absolutely sure to close both the environment and the database before you exit, as not doing so could lead to file corruption.

    -

BDB Databases are in effect clustered indexes, which means that every record stored in one is identified (and sorted) by a key.  A database supports efficient access to records or ranges of records based on their keys.
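For example, opening (and cleanly closing) an environment and a database through the Base API might look like the following sketch, using classes from com.sleepycat.je; the dbDir variable and the table name are placeholders for your own values.

EnvironmentConfig envConfig = new EnvironmentConfig();
envConfig.setAllowCreate(true);                  // create it if missing
Environment env = new Environment(new File(dbDir), envConfig);

DatabaseConfig dbConfig = new DatabaseConfig();
dbConfig.setAllowCreate(true);
Database db = env.openDatabase(null, "LINEITEM", dbConfig);

// ... load-phase work goes here ...

db.close();    // close the database first,
env.close();   // then the environment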

    - -

    Representing, Storing, and Reading Tuples

    -

    Every tuple must be marked with a primary key, and may include one or more secondary keys.  In the Base API, both the value and its key are represented as a string of bytes.  Both key and value must be stored as a byte array encapsulated in a DatabaseEntry object.  Secondary Keys are defined when creating a secondary index.

    -

    Note that you will need to manually extract the key from the rest of the record and write some code to serialize the record and the key into byte arrays.  You could use toString(), but you may find it substantially faster to use Java's native object serialization:

    -

    ObjectOutputStream  |  ObjectInputStream

    -

    ... or a pair of classes that java provides for serializing primitive data:

    -

    DataOutputStream  |  DataInputStream

    -

    Like a Hash-Map, BDB supports a simple get/put interface.  Tuples can be stored or looked up by their key.  Like your code, BDB also provides an iterator interface called a Cursor.  Of note, BDB's cursor interface supports index lookups.
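As a sketch of that interface, here is one way to store and fetch a single record, serializing two long-valued columns with java.io.DataOutputStream; the key encoding (the first column rendered as a string) is illustrative only.

ByteArrayOutputStream bytes = new ByteArrayOutputStream();
DataOutputStream out = new DataOutputStream(bytes);
out.writeLong(b);  out.writeLong(c);             // non-key fields
out.flush();

DatabaseEntry key = new DatabaseEntry(Long.toString(a).getBytes());
DatabaseEntry value = new DatabaseEntry(bytes.toByteArray());
db.put(null, key, value);                        // null = no transaction

// lookup by primary key
DatabaseEntry found = new DatabaseEntry();
if(db.get(null, key, found, LockMode.READ_UNCOMMITTED)
     == OperationStatus.SUCCESS){
  DataInputStream in = new DataInputStream(
      new ByteArrayInputStream(found.getData()));
  long bOut = in.readLong(), cOut = in.readLong();
}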

    - -

    Secondary Indexes

    -

    The Database represents a clustered index.  In addition, BDB has support for unclustered indexes, which it calls SecondaryDatabases. As an unclustered index, a secondary database doesn't dictate how the tuples themselves are laid out, but still allows for (mostly) efficient lookups for secondary "keys".  The term "keys" is in quotation marks, because unlike the primary key used in the primary database, a secondary database allows for multiple records with the same secondary key.

    -

    To automate the management process, a secondary index is defined using an implementation of SecondaryKeyCreator.  This class should map record DatabaseEntry objects to a (not necessarily unique) DatabaseEntry object that acts as a secondary key.
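For instance, an unclustered index over the first serialized field from the sketch above might be created as follows; the database name R_by_B and the 8-byte extraction are assumptions tied to that encoding.

SecondaryConfig secConfig = new SecondaryConfig();
secConfig.setAllowCreate(true);
secConfig.setSortedDuplicates(true);   // many records per secondary key
secConfig.setKeyCreator(new SecondaryKeyCreator(){
  public boolean createSecondaryKey(SecondaryDatabase sdb,
                                    DatabaseEntry key,
                                    DatabaseEntry data,
                                    DatabaseEntry result){
    byte[] secKey = new byte[8];       // first field of the record
    System.arraycopy(data.getData(), 0, secKey, 0, 8);
    result.setData(secKey);
    return true;                       // false would skip indexing this record
  }
});
SecondaryDatabase byB =
    env.openSecondaryDatabase(null, "R_by_B", db, secConfig);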

    - -

    BDB Joins

    -

In another misnomer, BDB allows you to define so-called Join Cursors.  This is not a relational join in the traditional sense.  Rather, a Join Cursor allows you to define multiple equality predicates over the base relation and scan over all records that match all of the specified lookup conditions.
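A hedged sketch of one such scan, assuming two secondary databases (byB and byC) over the same primary database db, and DatabaseEntry keys bKey and cKey holding the two lookup values:

// position one secondary cursor per equality predicate
SecondaryCursor cb = byB.openSecondaryCursor(null, null);
cb.getSearchKey(bKey, new DatabaseEntry(), LockMode.READ_UNCOMMITTED);
SecondaryCursor cc = byC.openSecondaryCursor(null, null);
cc.getSearchKey(cKey, new DatabaseEntry(), LockMode.READ_UNCOMMITTED);

// scan all records matching *both* predicates
JoinCursor join = db.join(new Cursor[]{ cb, cc }, null);
DatabaseEntry key = new DatabaseEntry(), value = new DatabaseEntry();
while(join.getNext(key, value, LockMode.READ_UNCOMMITTED)
        == OperationStatus.SUCCESS){
  // (key, value) is one record satisfying every equality condition
}
join.close();  cb.close();  cc.close();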

    - -

    Performance Tuning

    -

BerkeleyDB can be quite tricky to get performance out of.  There are a number of options and ways of interacting with it that can help you get the most out of this indexing software.  Since evaluation on the grading boxes takes time due to the end-to-end testing process, I encourage you to evaluate on your own machines.  For best results, be sure to store your database on an HDD (results from SSDs will not be representative of the grading boxes).  Recall that the grader boxes have 4 GB of RAM.

    - -

    Heap Scans

    -

Depending on how you've implemented deserialization of the raw data files, you may find it faster to read directly from the clustered index rather than from the data file.  In the reference implementation, reading from a clustered index is about twice as fast as reading from a data file, but this performance boost stems from several factors.  If you choose to do this, take a look at DiskOrderedCursor, which my experiments show is roughly twice as fast as a regular in-order Cursor on an HDD on a fully compacted relation.
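A minimal sketch of such a scan (DiskOrderedCursor and its config class are part of the BDB JE API, but check the javadoc for the exact options available):

DiskOrderedCursorConfig config = new DiskOrderedCursorConfig();
DiskOrderedCursor scan = db.openCursor(config);
DatabaseEntry key = new DatabaseEntry(), value = new DatabaseEntry();
while(scan.getNext(key, value, null) == OperationStatus.SUCCESS){
  // tuples arrive in disk order, not key order
}
scan.close();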

    - -

    Locking Policies

    -

Locking is slow.  Consistency is slow.  As long as your code is single-threaded and does not perform updates or use transactions, you'll find that cursor operations will be faster under LockMode.READ_UNCOMMITTED.  See below for ways to set this parameter globally.
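One concrete way to avoid passing the lock mode on every call is to make dirty reads the default for each cursor you open (a per-cursor rather than truly global setting):

CursorConfig dirtyReads = new CursorConfig();
dirtyReads.setReadUncommitted(true);   // reads skip lock acquisition
Cursor cur = db.openCursor(null, dirtyReads);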

    - -

    Config Options

    -

    BDB also has numerous options that will affect the performance of your system.  Several options you may wish to evaluate, both for the load and run phases:

    - - - -
    - -

    Interface

    -

    Your code will be evaluated in exactly the same way as Projects 1 and 2.  Your code will be presented with a 500MB (SF 0.5) TPC-H dataset.  Before grading begins, your code will be run once to preprocess the data.  You will have up to 5 minutes, after which your process will be killed (if it has not yet terminated).  Your code will then be run on the test suite.

    -

    As before, your code will be invoked with the data directory and the relevant SQL files. Two additional parameters will be used in the preprocessing stage:

    - - -
    java -cp build:jsqlparser.jar:...
    -     edu.buffalo.cse562.Main 
    -     --data [data] 
    -     --db   [db] 
    -     --load
    -     [sqlfile1] [sqlfile2] ...
    -This example uses the following directories and files: - -

    Grading

    -

    Your code will be subjected to a sequence of test cases and evaluated on speed and correctness.

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
TPC-H Query | Grade | Maximum Run-Time (s)
Q1          | A     | 30
Q1          | B     | 60
Q1          | C     | 90
Q3          | A     | 5
Q3          | B     | 30
Q3          | C     | 120
Q5          | A     | 10
Q5          | B     | 60
Q5          | C     | 120
Q6          | A     | 20
Q6          | B     | 45
Q6          | C     | 70
Q10         | A     | 10
Q10         | B     | 30
Q10         | C     | 90
Q12         | A     | 40
Q12         | B     | 60
Q12         | C     | 90