43 commits
2d0e5c1
validate half float values
fred84 Jul 12, 2017
c0fff6f
Merge branch 'master' into 25534_reject_out_of_range_numbers
fred84 Jul 13, 2017
6e0a6ea
test upper bound for numeric mapper
fred84 Jul 17, 2017
7d4f315
merge master
fred84 Jul 21, 2017
d1ebd6f
test for upper bound for float, double and half_float
fred84 Jul 21, 2017
1358fed
Merge branch 'master' into 25533_reject_out_of_range_numbers
fred84 Jul 22, 2017
44983d8
more tests on NaN and Infinity for NumberFieldMapper
fred84 Jul 23, 2017
d072e69
Merge branch 'master' into 25534_reject_out_of_range_numbers
fred84 Jul 23, 2017
ecf3424
fix checkstyle errors
fred84 Jul 23, 2017
b5d231d
minor renaming
fred84 Jul 23, 2017
187982a
resolve merge conflict and cleanup NumberFieldMapper out of range tests
fred84 Jul 27, 2017
d236158
Merge remote-tracking branch 'upstream/master' into 25534_reject_out_…
fred84 Jul 27, 2017
7423c4d
comments for disabled test
fred84 Jul 27, 2017
bced91e
tests for out of range values for long
fred84 Jul 28, 2017
7e07a63
min/max validation for short and integer numbertypes
fred84 Jul 29, 2017
61d8585
Merge remote-tracking branch 'upstream/master' into number_field_type…
fred84 Jul 29, 2017
8e88c9d
Merge remote-tracking branch 'upstream/master' into 25534_reject_out_…
fred84 Jul 30, 2017
11ce7c8
tests for byte/short/integer/long removed and will be added in separa…
fred84 Jul 30, 2017
3902911
remove unused import
fred84 Jul 30, 2017
eba60b7
Merge remote-tracking branch 'upstream/master' into number_field_type…
fred84 Jul 30, 2017
4dd7a83
25534_reject_out_of_range_numbers merged
fred84 Jul 30, 2017
8901920
merge and resolve conflict
fred84 Aug 10, 2017
1a32b05
tests for negative values
fred84 Aug 10, 2017
4e27edd
Make the README use a single type in examples. (#26098)
jpountz Aug 10, 2017
0bf8a35
Use `global_ordinals_hash` execution mode when sorting by sub aggrega…
jpountz Aug 10, 2017
3fc27b0
Document how to import Lucene Snapshot libs when elasticsearch client…
dadoonet Aug 10, 2017
076167f
inner hits: Unfiltered nested source should keep its full path
martijnvg Aug 8, 2017
9c372e5
Fix wrong header level
dadoonet Aug 10, 2017
99ac7be
Teach the build about betas and rcs (#26066)
nik9000 Aug 10, 2017
6f82b0c
Allow `ClusterState.Custom` to be created on initial cluster states (…
s1monw Aug 11, 2017
93cfbe2
Tests: reenable ShardReduceIT#testIpRange.
jpountz Aug 11, 2017
1011791
Remove SimpleQueryStringIT#testPhraseQueryOnFieldWithNoPositions.
jpountz Aug 11, 2017
637cc87
Remove unused Netty-related settings (#26161)
danielmitterdorfer Aug 11, 2017
636e85e
percolator: Hint what clauses are important in a conjunction query ba…
martijnvg Aug 7, 2017
73e936a
Fix serialization of the `_all` field. (#26143)
jpountz Aug 11, 2017
7e3cd6a
reindex: automatically choose the number of slices (#26030)
andyb-elastic Aug 11, 2017
1146a35
Move more token filters to analysis-common module
martijnvg Aug 3, 2017
9a90899
Fix incorrect class name in deleteByQuery docs (#26151)
hanbj Aug 11, 2017
e278eac
Fixed typo in README.textile (#26168)
JapSeyz Aug 11, 2017
10c3c1a
fix SplitProcessor targetField test (#26178)
talevy Aug 11, 2017
efe2adf
min/max validation for short/int/long
fred84 Aug 12, 2017
0eb0c01
Merge remote-tracking branch 'upstream/master' into number_field_type…
fred84 Aug 12, 2017
d76e214
fix checkstyle errors & return more informative message for malformed…
fred84 Aug 12, 2017
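The commits above add rejection of out-of-range values in the numeric field mappers (half_float, float, double, and later byte, short, integer, and long). A minimal, hypothetical sketch of the kind of bounds check involved, using an illustrative class name, method name, and message rather than the mapper's actual code:

import java.math.BigDecimal;

class RangeCheckSketch {
    // Illustrative only: reject values outside the range of a 16-bit short.
    static short parseShortStrict(String value) {
        BigDecimal decimal = new BigDecimal(value);
        if (decimal.compareTo(BigDecimal.valueOf(Short.MIN_VALUE)) < 0
                || decimal.compareTo(BigDecimal.valueOf(Short.MAX_VALUE)) > 0) {
            throw new IllegalArgumentException("Value [" + value + "] is out of range for a short");
        }
        return decimal.shortValue();
    }
}

Parsing through BigDecimal keeps the comparison exact even for inputs such as "1e100" that would overflow a double.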
44 changes: 23 additions & 21 deletions README.textile
@@ -10,17 +10,16 @@ Elasticsearch is a distributed RESTful search engine built for the cloud. Featur
** Each index is fully sharded with a configurable number of shards.
** Each shard can have one or more replicas.
** Read / Search operations performed on any of the replica shards.
* Multi Tenant with Multi Types.
* Multi Tenant.
** Support for more than one index.
** Support for more than one type per index.
** Index level configuration (number of shards, index storage, ...).
* Various set of APIs
** HTTP RESTful API
** Native Java API.
** All APIs perform automatic node operation rerouting.
* Document oriented
** No need for upfront schema definition.
** Schema can be defined per type for customization of the indexing process.
** Schema can be defined for customization of the indexing process.
* Reliable, Asynchronous Write Behind for long term persistency.
* (Near) Real Time Search.
* Built on top of Lucene
@@ -47,32 +46,37 @@ h3. Installation

h3. Indexing

Let's try and index some twitter like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):
Let's try and index some twitter like information. First, let's index some tweets (the @twitter@ index will be created automatically):

<pre>
curl -XPUT 'http://localhost:9200/twitter/user/kimchy?pretty' -H 'Content-Type: application/json' -d '{ "name" : "Shay Banon" }'

curl -XPUT 'http://localhost:9200/twitter/tweet/1?pretty' -H 'Content-Type: application/json' -d '
curl -XPUT 'http://localhost:9200/twitter/doc/1?pretty' -H 'Content-Type: application/json' -d '
{
"user": "kimchy",
"post_date": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'

curl -XPUT 'http://localhost:9200/twitter/tweet/2?pretty' -H 'Content-Type: application/json' -d '
curl -XPUT 'http://localhost:9200/twitter/doc/2?pretty' -H 'Content-Type: application/json' -d '
{
"user": "kimchy",
"post_date": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'

curl -XPUT 'http://localhost:9200/twitter/doc/3?pretty' -H 'Content-Type: application/json' -d '
{
"user": "elastic",
"post_date": "2010-01-15T01:46:38",
"message": "Building the site, should be kewl"
}'
</pre>

Now, let's see if the information was added by GETting it:

<pre>
curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'
curl -XGET 'http://localhost:9200/twitter/tweet/1?pretty=true'
curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'
curl -XGET 'http://localhost:9200/twitter/doc/1?pretty=true'
curl -XGET 'http://localhost:9200/twitter/doc/2?pretty=true'
curl -XGET 'http://localhost:9200/twitter/doc/3?pretty=true'
</pre>

h3. Searching
@@ -81,21 +85,21 @@ Mmm search..., shouldn't it be elastic?
Let's find all the tweets that @kimchy@ posted:

<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'
curl -XGET 'http://localhost:9200/twitter/_search?q=user:kimchy&pretty=true'
</pre>

We can also use the JSON query language Elasticsearch provides instead of a query string:

<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -H 'Content-Type: application/json' -d '
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
"query" : {
"match" : { "user": "kimchy" }
}
}'
</pre>

Just for kicks, let's get all the documents stored (we should see the user as well):
Just for kicks, let's get all the documents stored (we should see the tweet from @elastic@ as well):

<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
@@ -106,7 +110,7 @@ curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type:
}'
</pre>

We can also do range search (the @postDate@ was automatically identified as date)
We can also do range search (the @post_date@ was automatically identified as date)

<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
@@ -125,29 +129,27 @@ h3. Multi Tenant - Indices and Types

Man, that twitter index might get big (in this case, index size == valuation). Let's see if we can structure our twitter system a bit differently in order to support such large amounts of data.

Elasticsearch supports multiple indices, as well as multiple types per index. In the previous example we used an index called @twitter@, with two types, @user@ and @tweet@.
Elasticsearch supports multiple indices. In the previous example we used an index called @twitter@ that stored tweets for every user.

Another way to define our simple twitter system is to have a different index per user (note, though that each index has an overhead). Here is the indexing curl's in this case:

<pre>
curl -XPUT 'http://localhost:9200/kimchy/info/1?pretty' -H 'Content-Type: application/json' -d '{ "name" : "Shay Banon" }'

curl -XPUT 'http://localhost:9200/kimchy/tweet/1?pretty' -H 'Content-Type: application/json' -d '
curl -XPUT 'http://localhost:9200/kimchy/doc/1?pretty' -H 'Content-Type: application/json' -d '
{
"user": "kimchy",
"post_date": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'

curl -XPUT 'http://localhost:9200/kimchy/tweet/2?pretty' -H 'Content-Type: application/json' -d '
curl -XPUT 'http://localhost:9200/kimchy/doc/2?pretty' -H 'Content-Type: application/json' -d '
{
"user": "kimchy",
"post_date": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'
</pre>

The above will index information into the @kimchy@ index, with two types, @info@ and @tweet@. Each user will get their own special index.
The above will index information into the @kimchy@ index. Each user will get their own special index.

Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):
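Here is one way such a request might look; the index name and exact settings body below are illustrative, not the README's verbatim example:

<pre>
curl -XPUT 'http://localhost:9200/another_user?pretty' -H 'Content-Type: application/json' -d '
{
    "settings" : {
        "index.number_of_shards" : 1,
        "index.number_of_replicas" : 1
    }
}'
</pre>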

29 changes: 19 additions & 10 deletions build.gradle
@@ -64,10 +64,10 @@ configure(subprojects.findAll { it.projectDir.toPath().startsWith(rootPath) }) {

/* Introspect all versions of ES that may be tested agains for backwards
* compatibility. It is *super* important that this logic is the same as the
* logic in VersionUtils.java, modulo alphas, betas, and rcs which are ignored
* in gradle because they don't have any backwards compatibility guarantees
* but are not ignored in VersionUtils.java because the tests expect them not
* to be. */
* logic in VersionUtils.java, throwing out alphas because they don't have any
* backwards compatibility guarantees and only keeping the latest beta or rc
* in a branch if there are only betas and rcs in the branch so we have
* *something* to test against. */
Version currentVersion = Version.fromString(VersionProperties.elasticsearch.minus('-SNAPSHOT'))
int prevMajor = currentVersion.major - 1
File versionFile = file('core/src/main/java/org/elasticsearch/Version.java')
@@ -84,11 +84,20 @@ for (String line : versionLines) {
int major = Integer.parseInt(match.group(1))
int minor = Integer.parseInt(match.group(2))
int bugfix = Integer.parseInt(match.group(3))
Version foundVersion = new Version(major, minor, bugfix, false)
String suffix = (match.group(4) ?: '').replace('_', '-')
Version foundVersion = new Version(major, minor, bugfix, suffix, false)
if (currentVersion != foundVersion
&& (major == prevMajor || major == currentVersion.major)
&& (versions.isEmpty() || versions.last() != foundVersion)) {
versions.add(foundVersion)
&& (major == prevMajor || major == currentVersion.major)) {
if (versions.isEmpty() || versions.last() != foundVersion) {
versions.add(foundVersion)
} else {
// Replace the earlier betas with later ones
Version last = versions.set(versions.size() - 1, foundVersion)
if (last.suffix == '') {
throw new InvalidUserDataException("Found two equal versions but"
+ " the first one [$last] wasn't a beta.")
}
}
if (major == prevMajor && minor > lastPrevMinor) {
prevMinorIndex = versions.size() - 1
lastPrevMinor = minor
@@ -106,10 +115,10 @@ if (currentVersion.bugfix == 0) {
// unreleased version of closest branch. So for those cases, the version includes -SNAPSHOT,
// and the bwc distribution will checkout and build that version.
Version last = versions[-1]
versions[-1] = new Version(last.major, last.minor, last.bugfix, true)
versions[-1] = new Version(last.major, last.minor, last.bugfix, last.suffix, true)
if (last.bugfix == 0) {
versions[-2] = new Version(
versions[-2].major, versions[-2].minor, versions[-2].bugfix, true)
versions[-2].major, versions[-2].minor, versions[-2].bugfix, versions[-2].suffix, true)
}
}
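As a stand-alone Groovy illustration of the pruning rule described in the updated comment above (the version strings below are hypothetical, and this is not the build's actual code path):

// Illustrative only: alphas are dropped, and a later beta/rc replaces an earlier
// one for the same base version, so there is always something to test against.
def keep = []
['5.6.0', '6.0.0-alpha1', '6.0.0-beta1', '6.0.0-beta2'].each { v ->
    if (v.contains('-alpha')) return        // alphas carry no bwc guarantees
    def base = v.split('-')[0]
    if (keep && keep.last().split('-')[0] == base) {
        keep[-1] = v                        // later beta/rc wins
    } else {
        keep << v
    }
}
assert keep == ['5.6.0', '6.0.0-beta2']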

27 changes: 17 additions & 10 deletions buildSrc/src/main/groovy/org/elasticsearch/gradle/Version.groovy
@@ -20,6 +20,8 @@
package org.elasticsearch.gradle

import groovy.transform.Sortable
import java.util.regex.Matcher
import org.gradle.api.InvalidUserDataException

/**
* Encapsulates comparison and printing logic for an x.y.z version.
@@ -32,32 +34,37 @@ public class Version {
final int bugfix
final int id
final boolean snapshot
/**
* Suffix on the version name. Unlike Version.java the build does not
* consider alphas and betas different versions, it just preserves the
* suffix that the version was declared with in Version.java.
*/
final String suffix

public Version(int major, int minor, int bugfix, boolean snapshot) {
public Version(int major, int minor, int bugfix,
String suffix, boolean snapshot) {
this.major = major
this.minor = minor
this.bugfix = bugfix
this.snapshot = snapshot
this.suffix = suffix
this.id = major * 100000 + minor * 1000 + bugfix * 10 +
(snapshot ? 1 : 0)
}

public static Version fromString(String s) {
String[] parts = s.split('\\.')
String bugfix = parts[2]
boolean snapshot = false
if (bugfix.contains('-')) {
snapshot = bugfix.endsWith('-SNAPSHOT')
bugfix = bugfix.split('-')[0]
Matcher m = s =~ /(\d+)\.(\d+)\.(\d+)(-alpha\d+|-beta\d+|-rc\d+)?(-SNAPSHOT)?/
if (m.matches() == false) {
throw new InvalidUserDataException("Invalid version [${s}]")
}
return new Version(parts[0] as int, parts[1] as int, bugfix as int,
snapshot)
return new Version(m.group(1) as int, m.group(2) as int,
m.group(3) as int, m.group(4) ?: '', m.group(5) != null)
}

@Override
public String toString() {
String snapshotStr = snapshot ? '-SNAPSHOT' : ''
return "${major}.${minor}.${bugfix}${snapshotStr}"
return "${major}.${minor}.${bugfix}${suffix}${snapshotStr}"
}

public boolean before(String compareTo) {
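Illustrative only, not part of this diff: with the suffix-aware parsing above, hypothetical version strings round-trip as follows (assuming org.elasticsearch.gradle.Version is on the classpath):

assert Version.fromString('5.6.0').toString() == '5.6.0'
assert Version.fromString('6.0.0-beta2-SNAPSHOT').suffix == '-beta2'
assert Version.fromString('6.0.0-beta2-SNAPSHOT').snapshot
assert Version.fromString('6.0.0-rc1').toString() == '6.0.0-rc1'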
23 changes: 17 additions & 6 deletions core/src/main/java/org/elasticsearch/cluster/ClusterModule.java
@@ -73,6 +73,7 @@

import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
@@ -94,7 +95,6 @@ public class ClusterModule extends AbstractModule {
private final IndexNameExpressionResolver indexNameExpressionResolver;
private final AllocationDeciders allocationDeciders;
private final AllocationService allocationService;
private final Runnable onStarted;
// pkg private for tests
final Collection<AllocationDecider> deciderList;
final ShardsAllocator shardsAllocator;
@@ -107,9 +107,24 @@ public ClusterModule(Settings settings, ClusterService clusterService, List<Clus
this.clusterService = clusterService;
this.indexNameExpressionResolver = new IndexNameExpressionResolver(settings);
this.allocationService = new AllocationService(settings, allocationDeciders, shardsAllocator, clusterInfoService);
this.onStarted = () -> clusterPlugins.forEach(plugin -> plugin.onNodeStarted());
}

public static Map<String, Supplier<ClusterState.Custom>> getClusterStateCustomSuppliers(List<ClusterPlugin> clusterPlugins) {
final Map<String, Supplier<ClusterState.Custom>> customSupplier = new HashMap<>();
customSupplier.put(SnapshotDeletionsInProgress.TYPE, SnapshotDeletionsInProgress::new);
customSupplier.put(RestoreInProgress.TYPE, RestoreInProgress::new);
customSupplier.put(SnapshotsInProgress.TYPE, SnapshotsInProgress::new);
for (ClusterPlugin plugin : clusterPlugins) {
Map<String, Supplier<ClusterState.Custom>> initialCustomSupplier = plugin.getInitialClusterStateCustomSupplier();
for (String key : initialCustomSupplier.keySet()) {
if (customSupplier.containsKey(key)) {
throw new IllegalStateException("custom supplier key [" + key + "] is registered more than once");
}
}
customSupplier.putAll(initialCustomSupplier);
}
return Collections.unmodifiableMap(customSupplier);
}

public static List<Entry> getNamedWriteables() {
List<Entry> entries = new ArrayList<>();
@@ -243,8 +258,4 @@ protected void configure() {
bind(AllocationDeciders.class).toInstance(allocationDeciders);
bind(ShardsAllocator.class).toInstance(shardsAllocator);
}

public Runnable onStarted() {
return onStarted;
}
}
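A hedged sketch of how a plugin could feed the new getClusterStateCustomSuppliers path via ClusterPlugin.getInitialClusterStateCustomSupplier; MyCustom is a hypothetical ClusterState.Custom implementation, not something added by this PR:

import java.util.Collections;
import java.util.Map;
import java.util.function.Supplier;

import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.plugins.ClusterPlugin;
import org.elasticsearch.plugins.Plugin;

public class MyClusterPlugin extends Plugin implements ClusterPlugin {
    // Registers an initial custom so it is present on the very first cluster state.
    @Override
    public Map<String, Supplier<ClusterState.Custom>> getInitialClusterStateCustomSupplier() {
        return Collections.singletonMap(MyCustom.TYPE, MyCustom::new);
    }
}

Per the duplicate check above, registering a key that is already supplied (for example SnapshotsInProgress.TYPE) fails with an IllegalStateException.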
@@ -45,15 +45,6 @@ public class RestoreInProgress extends AbstractNamedDiffable<Custom> implements

private final List<Entry> entries;

/**
* Constructs new restore metadata
*
* @param entries list of currently running restore processes
*/
public RestoreInProgress(List<Entry> entries) {
this.entries = entries;
}

/**
* Constructs new restore metadata
*
@@ -45,6 +45,10 @@ public class SnapshotDeletionsInProgress extends AbstractNamedDiffable<Custom> i
// the list of snapshot deletion request entries
private final List<Entry> entries;

public SnapshotDeletionsInProgress() {
this(Collections.emptyList());
}

private SnapshotDeletionsInProgress(List<Entry> entries) {
this.entries = Collections.unmodifiableList(entries);
}
@@ -67,7 +67,7 @@ private Decision canMove(ShardRouting shardRouting, RoutingAllocation allocation
// Only primary shards are snapshotted

SnapshotsInProgress snapshotsInProgress = allocation.custom(SnapshotsInProgress.TYPE);
if (snapshotsInProgress == null) {
if (snapshotsInProgress == null || snapshotsInProgress.entries().isEmpty()) {
// Snapshots are not running
return allocation.decision(Decision.YES, NAME, "no snapshots are currently running");
}
@@ -39,4 +39,10 @@ public interface ClusterApplier {
* @param listener callback that is invoked after cluster state is applied
*/
void onNewClusterState(String source, Supplier<ClusterState> clusterStateSupplier, ClusterStateTaskListener listener);

/**
* Creates a new cluster state builder that is initialized with the cluster name and all initial cluster state customs.
*/
ClusterState.Builder newClusterStateBuilder();

}
@@ -97,14 +97,17 @@ public class ClusterApplierService extends AbstractLifecycleComponent implements
private final AtomicReference<ClusterState> state; // last applied state

private NodeConnectionsService nodeConnectionsService;
private Supplier<ClusterState.Builder> stateBuilderSupplier;

public ClusterApplierService(Settings settings, ClusterSettings clusterSettings, ThreadPool threadPool) {
public ClusterApplierService(Settings settings, ClusterSettings clusterSettings, ThreadPool threadPool, Supplier<ClusterState
.Builder> stateBuilderSupplier) {
super(settings);
this.clusterSettings = clusterSettings;
this.threadPool = threadPool;
this.state = new AtomicReference<>();
this.slowTaskLoggingThreshold = CLUSTER_SERVICE_SLOW_TASK_LOGGING_THRESHOLD_SETTING.get(settings);
this.localNodeMasterListeners = new LocalNodeMasterListeners(threadPool);
this.stateBuilderSupplier = stateBuilderSupplier;
}

public void setSlowTaskLoggingThreshold(TimeValue slowTaskLoggingThreshold) {
@@ -653,4 +656,9 @@ protected long currentTimeInNanos() {
protected long currentTimeInNanos() {
return System.nanoTime();
}

@Override
public ClusterState.Builder newClusterStateBuilder() {
return stateBuilderSupplier.get();
}
}
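Also illustrative rather than this PR's actual wiring: one way the custom suppliers and the new builder supplier could be connected. Here settings, clusterSettings, threadPool, and clusterPlugins are assumed to be in scope, and the cluster name is a placeholder:

// Build the initial customs once, then hand the applier a builder that already contains them.
Map<String, Supplier<ClusterState.Custom>> customs =
        ClusterModule.getClusterStateCustomSuppliers(clusterPlugins);
Supplier<ClusterState.Builder> stateBuilderSupplier = () -> {
    ClusterState.Builder builder = ClusterState.builder(new ClusterName("my-cluster"));
    customs.forEach((type, supplier) -> builder.putCustom(type, supplier.get()));
    return builder;
};
ClusterApplierService applierService =
        new ClusterApplierService(settings, clusterSettings, threadPool, stateBuilderSupplier);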