Merged
**doc/V3HighLevelDesign.md** (1 addition, 1 deletion)

```diff
@@ -782,7 +782,7 @@ Recapitulating what we’ve covered so far:

 Once we start allowing joins and subqueries, we have a whole bunch of table aliases and relationships to deal with. We have to contend with name clashes, self-joins, as well as scoping rules. In a way, the vschema has acted as a static symbol table so far. But that’s not going to be enough any more.

-The core of the symbol table will contain a map whose key will be a table alias, and the elements will be [similar to the table in vschema](https://github.com/vitessio/vitess/blob/master/go/vt/vtgate/planbuilder/schema.go#L22). However, it will also contain a column list that will be built as the query is parsed.
+The core of the symbol table will contain a map whose key will be a table alias, and the elements will be [similar to the table in vschema](https://github.com/vitessio/vitess/blob/0b3de7c4a2de8daec545f040639b55a835361685/go/vt/vtgate/vindexes/vschema.go#L82). However, it will also contain a column list that will be built as the query is parsed.

 ### A simple example
```
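The symbol-table shape described in the changed line, a map keyed by table alias whose entries also carry a column list built up during parsing, can be sketched roughly as below. All names here (`symtab`, `table`, `column`, `AddAlias`) are illustrative assumptions, not the actual planbuilder code:

```go
package main

import "fmt"

// column and table are hypothetical stand-ins for the vschema-like
// element described in the design; the real types live under
// go/vt/vtgate/vindexes.
type column struct {
	Name string
}

type table struct {
	Keyspace string
	Columns  []column // grown incrementally as the query is parsed
}

// symtab is a minimal sketch of the symbol table: keying by alias
// (not table name) gives self-joins a distinct entry per alias.
type symtab struct {
	tables map[string]*table
}

// AddAlias registers an alias, rejecting name clashes in this scope.
func (st *symtab) AddAlias(alias string, t *table) error {
	if _, ok := st.tables[alias]; ok {
		return fmt.Errorf("duplicate symbol: %s", alias)
	}
	st.tables[alias] = t
	return nil
}

func main() {
	st := &symtab{tables: make(map[string]*table)}
	_ = st.AddAlias("a", &table{Keyspace: "user"})
	// A self-join registers the same table under a second alias.
	_ = st.AddAlias("b", &table{Keyspace: "user"})
	// Reusing an alias is a name clash.
	fmt.Println(st.AddAlias("a", &table{Keyspace: "user"})) // duplicate symbol: a
}
```

Scoping rules for subqueries would layer such tables (an outer symtab consulted when an inner lookup fails), but that is beyond this sketch.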
**doc/VitessQueues.md** (1 addition, 1 deletion)

```diff
@@ -79,7 +79,7 @@ capabilities, the usual horizontal sharding process can be used.

 Queue Tables are marked in the schema by a comment, in a similar way we detect
 Sequence Tables
-[now](https://github.com/vitessio/vitess/blob/master/go/vt/tabletserver/table_info.go#L37).
+[now](https://github.com/vitessio/vitess/blob/0b3de7c4a2de8daec545f040639b55a835361685/go/vt/vttablet/tabletserver/tabletserver.go#L138).

 When a tablet becomes a master, and there are Queue tables, it creates a
 QueueManager for each of them.
```
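The detect-then-promote flow in this hunk can be sketched in Go as follows. The `vitess_queue` comment marker is an assumption chosen by analogy with the `vitess_sequence` marker for sequence tables, and `QueueManager` is a hypothetical stand-in for the type the design proposes:

```go
package main

import (
	"fmt"
	"strings"
)

// isQueueTable reports whether a table's schema comment carries the
// queue marker. The "vitess_queue" token is assumed for illustration;
// the design doc only says queue tables are marked by a comment.
func isQueueTable(comment string) bool {
	return strings.Contains(comment, "vitess_queue")
}

// QueueManager is a hypothetical per-table manager; the real type
// does not exist yet in the codebase.
type QueueManager struct {
	table string
}

// startQueueManagers models what a tablet would do on promotion to
// master: create one QueueManager per queue table.
func startQueueManagers(queueTables []string) map[string]*QueueManager {
	managers := make(map[string]*QueueManager, len(queueTables))
	for _, t := range queueTables {
		managers[t] = &QueueManager{table: t}
	}
	return managers
}

func main() {
	// table name -> schema comment, as scanned from the schema.
	tables := map[string]string{
		"jobs":  "vitess_queue",
		"users": "",
	}
	var queues []string
	for name, comment := range tables {
		if isQueueTable(comment) {
			queues = append(queues, name)
		}
	}
	managers := startQueueManagers(queues)
	fmt.Println(len(managers)) // 1: only "jobs" is marked as a queue
}
```

On demotion, the inverse would apply: the managers are shut down so that only the current master drives the queues.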
**java/hadoop/src/main/java/io/vitess/hadoop/README.md** (1 addition, 1 deletion)

```diff
@@ -14,7 +14,7 @@ primary key (id)) Engine=InnoDB;

 Let's say we want to write a MapReduce job that imports this table from Vitess to HDFS where each row is turned into a CSV record in HDFS.

-We can use [VitessInputFormat](https://github.com/vitessio/vitess/blob/master/java/hadoop/src/main/java/io/vitess/hadoop/VitessInputFormat.java), an implementation of Hadoop's [InputFormat](https://hadoop.apache.org/docs/stable/api/org/apache/hadoop/mapred/InputFormat.html), for that. With VitessInputFormat, rows from the source table are streamed to the mapper task. Each input record has a [NullWritable](https://hadoop.apache.org/docs/r2.2.0/api/org/apache/hadoop/io/NullWritable.html) key (no key, really), and [RowWritable](https://github.com/vitessio/vitess/blob/master/java/hadoop/src/main/java/io/vitess/hadoop/RowWritable.java) as value, which is a writable implementation for the entire row's contents.
+We can use [VitessInputFormat](https://github.com/vitessio/vitess/blob/master/java/hadoop/src/main/java/io/vitess/hadoop/VitessInputFormat.java), an implementation of Hadoop's [InputFormat](https://hadoop.apache.org/docs/stable/api/org/apache/hadoop/mapred/InputFormat.html), for that. With VitessInputFormat, rows from the source table are streamed to the mapper task. Each input record has a [NullWritable](https://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/NullWritable.html) key (no key, really), and [RowWritable](https://github.com/vitessio/vitess/blob/master/java/hadoop/src/main/java/io/vitess/hadoop/RowWritable.java) as value, which is a writable implementation for the entire row's contents.

 Here is an example implementation of our mapper, which transforms each row into a CSV Text.
```