Compare commits

...

23 Commits

Author SHA1 Message Date
Mark Paluch
008ca72888 Release version 3.1.13 (2020.0.13).
See #3768
2021-09-17 08:53:37 +02:00
Mark Paluch
c50dfba2cf Prepare 3.1.13 (2020.0.13).
See #3768
2021-09-17 08:53:09 +02:00
Christoph Strobl
701153ac8f Fix slice argument in query fields projection.
We now use a Collection instead of an Array to pass on $slice projection values for offset and limit.

Closes: #3811
Original pull request: #3812.
2021-09-08 14:47:48 +02:00
Mark Paluch
6b394e4da6 Avoid nested Document conversion to primitive types for fields with an explicit write target.
We no longer attempt to convert query Documents into primitive types, to avoid e.g. Document to String conversion.

Closes: #3783
Original Pull Request: #3797
2021-09-02 12:18:56 +02:00
Mark Paluch
7007f484a8 Polishing.
Fix typo in reference docs.

See #3758
2021-08-25 10:15:23 +02:00
Ryan Gibb
72a24a923d Fix a typo in MongoConverter javadoc.
Original pull request: #3758.
2021-08-25 10:15:23 +02:00
Mark Paluch
7f5591c20c Polishing.
Fix asterisk callouts.

See #3786
2021-08-24 11:19:06 +02:00
Mark Paluch
bf3b5dee70 Extract Aggregation Framework and GridFS docs in own source files.
Closes #3786
2021-08-24 11:09:47 +02:00
Jens Schauder
11838839cf After release cleanups.
See #3734
2021-08-12 10:36:57 +02:00
Jens Schauder
116d384bc5 Prepare next development iteration.
See #3734
2021-08-12 10:36:56 +02:00
Jens Schauder
6ce5a26dc6 Release version 3.1.12 (2020.0.12).
See #3734
2021-08-12 10:25:55 +02:00
Jens Schauder
2716d1d503 Prepare 3.1.12 (2020.0.12).
See #3734
2021-08-12 10:25:34 +02:00
Mark Paluch
32223fc00a Run unpaged query using Pageable.unpaged() through QuerydslMongoPredicateExecutor.findAll(…).
We now correctly consider unpaged queries if the Pageable is unpaged.

Closes: #3751
Original Pull Request: #3754
2021-07-26 15:23:42 +02:00
Jens Schauder
1884b7a97a After release cleanups.
See #3681
2021-07-16 10:45:19 +02:00
Jens Schauder
1f670bb5ed Prepare next development iteration.
See #3681
2021-07-16 10:45:17 +02:00
Jens Schauder
3e97d47248 Release version 3.1.11 (2020.0.11).
See #3681
2021-07-16 10:19:52 +02:00
Jens Schauder
c02840ca30 Prepare 3.1.11 (2020.0.11).
See #3681
2021-07-16 10:18:57 +02:00
Jens Schauder
975768c0d6 Updated changelog.
See #3681
2021-07-16 10:18:49 +02:00
Christoph Strobl
323ec3f1d6 Polishing.
Simplify KeyMapper current property/index setup.

Original Pull Request: #3689
2021-07-06 12:32:08 +02:00
David Julia
e480ceb2b7 Fix Regression in generating queries with nested maps with numeric keys.
Maps with numeric keys work when a query contains only one such map, but a query containing multiple maps with numeric keys fails.

Take the following example of a map called outerMap with numeric keys, holding a reference to another object with a map called inner, also with numeric keys: updates that are meant to generate {"$set": {"outerMap.1234.inner.5678": "hello"}} instead generate {"$set": {"outerMap.1234.inner.inner": "hello"}}, repeating the latter map's property name instead of using the integer key value.

This commit adds unit tests both for the UpdateMapper and QueryMapper, which check multiple consecutive maps with numeric keys, and adds a fix in the KeyMapper. Because we cannot easily change the path parsing to somehow parse path parts corresponding to map keys differently, we address the issue in the KeyMapper. We keep track of the partial path corresponding to the current property and use it to skip adding the duplicated property name for the map to the query, and instead add the key.

This is a bit redundant in that we now have both an iterator and an index-based way of accessing the path parts, but it gets the tests passing and fixes the issue without making a large change to the current approach.

Fixes: #3688
Original Pull Request: #3689
2021-07-06 12:12:46 +02:00
Mark Paluch
9fedd8d8c3 Updated changelog.
See #3650
2021-06-22 16:07:27 +02:00
Mark Paluch
baddae25da After release cleanups.
See #3649
2021-06-22 15:28:51 +02:00
Mark Paluch
5064cf3e9a Prepare next development iteration.
See #3649
2021-06-22 15:28:47 +02:00
17 changed files with 937 additions and 791 deletions

View File

@@ -5,7 +5,7 @@
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-mongodb-parent</artifactId>
-<version>3.1.10</version>
+<version>3.1.13</version>
<packaging>pom</packaging>
<name>Spring Data MongoDB</name>
@@ -15,7 +15,7 @@
<parent>
<groupId>org.springframework.data.build</groupId>
<artifactId>spring-data-parent</artifactId>
-<version>2.4.10</version>
+<version>2.4.13</version>
</parent>
<modules>
@@ -26,7 +26,7 @@
<properties>
<project.type>multi</project.type>
<dist.id>spring-data-mongodb</dist.id>
-<springdata.commons>2.4.10</springdata.commons>
+<springdata.commons>2.4.13</springdata.commons>
<mongo>4.1.2</mongo>
<mongo.reactivestreams>${mongo}</mongo.reactivestreams>
<jmh.version>1.19</jmh.version>

View File

@@ -7,7 +7,7 @@
<parent>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-mongodb-parent</artifactId>
-<version>3.1.10</version>
+<version>3.1.13</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@@ -14,7 +14,7 @@
<parent>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-mongodb-parent</artifactId>
-<version>3.1.10</version>
+<version>3.1.13</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@@ -11,7 +11,7 @@
<parent>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-mongodb-parent</artifactId>
-<version>3.1.10</version>
+<version>3.1.13</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@@ -39,13 +39,14 @@ import com.mongodb.DBRef;
* @author Thomas Darimont
* @author Christoph Strobl
* @author Mark Paluch
+ * @author Ryan Gibb
*/
public interface MongoConverter
extends EntityConverter<MongoPersistentEntity<?>, MongoPersistentProperty, Object, Bson>, MongoWriter<Object>,
EntityReader<Object, Bson> {
/**
- * Returns thw {@link TypeMapper} being used to write type information into {@link Document}s created with that
+ * Returns the {@link TypeMapper} being used to write type information into {@link Document}s created with that
* converter.
*
* @return will never be {@literal null}.

View File

@@ -66,6 +66,7 @@ import com.mongodb.DBRef;
* @author Thomas Darimont
* @author Christoph Strobl
* @author Mark Paluch
+ * @author David Julia
*/
public class QueryMapper {
@@ -695,7 +696,8 @@ public class QueryMapper {
@Nullable
private Object applyFieldTargetTypeHintToValue(Field documentField, @Nullable Object value) {
-if (value == null || documentField.getProperty() == null || !documentField.getProperty().hasExplicitWriteTarget()) {
+if (value == null || documentField.getProperty() == null || !documentField.getProperty().hasExplicitWriteTarget()
+|| value instanceof Document || value instanceof DBObject) {
return value;
}
@@ -1273,12 +1275,17 @@ public class QueryMapper {
static class KeyMapper {
private final Iterator<String> iterator;
+private int currentIndex;
+private String currentPropertyRoot;
+private final List<String> pathParts;
public KeyMapper(String key,
MappingContext<? extends MongoPersistentEntity<?>, MongoPersistentProperty> mappingContext) {
-this.iterator = Arrays.asList(key.split("\\.")).iterator();
-this.iterator.next();
+this.pathParts = Arrays.asList(key.split("\\."));
+this.iterator = pathParts.iterator();
+this.currentPropertyRoot = iterator.next();
+this.currentIndex = 0;
}
/**
@@ -1290,21 +1297,31 @@ public class QueryMapper {
protected String mapPropertyName(MongoPersistentProperty property) {
StringBuilder mappedName = new StringBuilder(PropertyToFieldNameConverter.INSTANCE.convert(property));
boolean inspect = iterator.hasNext();
while (inspect) {
String partial = iterator.next();
+currentIndex++;
-boolean isPositional = isPositionalParameter(partial) && property.isCollectionLike();
+boolean isPositional = isPositionalParameter(partial) && property.isCollectionLike() ;
+if(property.isMap() && currentPropertyRoot.equals(partial) && iterator.hasNext()){
+partial = iterator.next();
+currentIndex++;
+}
-if (isPositional || property.isMap()) {
+if (isPositional || property.isMap() && !currentPropertyRoot.equals(partial)) {
mappedName.append(".").append(partial);
}
inspect = isPositional && iterator.hasNext();
}
+if(currentIndex + 1 < pathParts.size()) {
+currentIndex++;
+currentPropertyRoot = pathParts.get(currentIndex);
+}
return mappedName.toString();
}

View File

@@ -15,6 +15,7 @@
*/
package org.springframework.data.mongodb.core.query;
+import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
@@ -136,7 +137,7 @@ public class Field {
*/
public Field slice(String field, int offset, int size) {
-slices.put(field, new Integer[] { offset, size });
+slices.put(field, Arrays.asList(offset, size));
return this;
}

View File

@@ -212,6 +212,10 @@ public class QuerydslMongoPredicateExecutor<T> extends QuerydslPredicateExecutor
*/
private SpringDataMongodbQuery<T> applyPagination(SpringDataMongodbQuery<T> query, Pageable pageable) {
+if (pageable.isUnpaged()) {
+return query;
+}
query = query.offset(pageable.getOffset()).limit(pageable.getPageSize());
return applySorting(query, pageable.getSort());
}

View File

@@ -3703,6 +3703,23 @@ public class MongoTemplateTests {
assertThat(template.find(new BasicQuery("{}").with(Sort.by("id")), WithIdAndFieldAnnotation.class)).isNotEmpty();
}
+@Test // GH-3811
+public void sliceShouldLimitCollectionValues() {
+DocumentWithCollectionOfSimpleType source = new DocumentWithCollectionOfSimpleType();
+source.id = "id-1";
+source.values = Arrays.asList("spring", "data", "mongodb");
+template.save(source);
+Criteria criteria = Criteria.where("id").is(source.id);
+Query query = Query.query(criteria);
+query.fields().slice("values", 0, 1);
+DocumentWithCollectionOfSimpleType target = template.findOne(query, DocumentWithCollectionOfSimpleType.class);
+assertThat(target.values).containsExactly("spring");
+}
private AtomicReference<ImmutableVersioned> createAfterSaveReference() {
AtomicReference<ImmutableVersioned> saved = new AtomicReference<>();

View File

@@ -33,11 +33,6 @@ import org.bson.types.Code;
import org.bson.types.ObjectId;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
-import org.junit.jupiter.api.extension.ExtendWith;
-import org.mockito.Mock;
-import org.mockito.junit.jupiter.MockitoExtension;
-import org.mockito.junit.jupiter.MockitoSettings;
-import org.mockito.quality.Strictness;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.annotation.Id;
@@ -45,7 +40,6 @@ import org.springframework.data.convert.WritingConverter;
import org.springframework.data.domain.Sort;
import org.springframework.data.domain.Sort.Direction;
import org.springframework.data.geo.Point;
-import org.springframework.data.mongodb.MongoDatabaseFactory;
import org.springframework.data.mongodb.core.DocumentTestUtils;
import org.springframework.data.mongodb.core.Person;
import org.springframework.data.mongodb.core.geo.GeoJsonPoint;
@@ -55,6 +49,7 @@ import org.springframework.data.mongodb.core.mapping.DBRef;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.mapping.Field;
import org.springframework.data.mongodb.core.mapping.FieldType;
+import org.springframework.data.mongodb.core.mapping.MongoId;
import org.springframework.data.mongodb.core.mapping.MongoMappingContext;
import org.springframework.data.mongodb.core.mapping.MongoPersistentEntity;
import org.springframework.data.mongodb.core.mapping.TextScore;
@@ -75,23 +70,23 @@ import com.mongodb.client.model.Filters;
* @author Thomas Darimont
* @author Christoph Strobl
* @author Mark Paluch
+ * @author David Julia
*/
-@ExtendWith(MockitoExtension.class)
-@MockitoSettings(strictness = Strictness.LENIENT)
public class QueryMapperUnitTests {
private QueryMapper mapper;
private MongoMappingContext context;
private MappingMongoConverter converter;
-@Mock MongoDatabaseFactory factory;
@BeforeEach
void beforeEach() {
MongoCustomConversions conversions = new MongoCustomConversions();
this.context = new MongoMappingContext();
this.context.setSimpleTypeHolder(conversions.getSimpleTypeHolder());
-this.converter = new MappingMongoConverter(new DefaultDbRefResolver(factory), context);
+this.converter = new MappingMongoConverter(NoOpDbRefResolver.INSTANCE, context);
this.converter.setCustomConversions(conversions);
this.converter.afterPropertiesSet();
this.mapper = new QueryMapper(converter);
@@ -737,6 +732,28 @@ public class QueryMapperUnitTests {
assertThat(document).containsKey("map.1.stringProperty");
}
+@Test // GH-3688
+void mappingShouldRetainNestedNumericMapKeys() {
+Query query = query(where("outerMap.1.map.2.stringProperty").is("ba'alzamon"));
+org.bson.Document document = mapper.getMappedObject(query.getQueryObject(),
+context.getPersistentEntity(EntityWithIntKeyedMapOfMap.class));
+assertThat(document).containsKey("outerMap.1.map.2.stringProperty");
+}
+@Test // GH-3688
+void mappingShouldAllowSettingEntireNestedNumericKeyedMapValue() {
+Query query = query(where("outerMap.1.map").is(null)); //newEntityWithComplexValueTypeMap()
+org.bson.Document document = mapper.getMappedObject(query.getQueryObject(),
+context.getPersistentEntity(EntityWithIntKeyedMapOfMap.class));
+assertThat(document).containsKey("outerMap.1.map");
+}
@Test // DATAMONGO-1269
void mappingShouldRetainNumericPositionInList() {
@@ -1129,6 +1146,25 @@ public class QueryMapperUnitTests {
.isEqualTo(new org.bson.Document("address.street", "1007 Mountain Drive"));
}
+@Test // GH-3783
+void retainsId$InWithStringArray() {
+org.bson.Document mappedQuery = mapper.getMappedObject(
+org.bson.Document.parse("{ _id : { $in: [\"5b8bedceb1e0bfc07b008828\"]}}"),
+context.getPersistentEntity(WithExplicitStringId.class));
+assertThat(mappedQuery.get("_id")).isEqualTo(org.bson.Document.parse("{ $in: [\"5b8bedceb1e0bfc07b008828\"]}"));
+}
+@Test // GH-3783
+void mapsId$InInToObjectIds() {
+org.bson.Document mappedQuery = mapper.getMappedObject(
+org.bson.Document.parse("{ _id : { $in: [\"5b8bedceb1e0bfc07b008828\"]}}"),
+context.getPersistentEntity(ClassWithDefaultId.class));
+assertThat(mappedQuery.get("_id"))
+.isEqualTo(org.bson.Document.parse("{ $in: [ {$oid: \"5b8bedceb1e0bfc07b008828\" } ]}"));
+}
class WithDeepArrayNesting {
List<WithNestedArray> level0;
@@ -1192,6 +1228,18 @@ public class QueryMapperUnitTests {
@Id private String foo;
}
+class WithStringId {
+@MongoId String id;
+String name;
+}
+class WithExplicitStringId {
+@MongoId(FieldType.STRING) String id;
+String name;
+}
class BigIntegerId {
@Id private BigInteger id;
@@ -1278,6 +1326,10 @@ public class QueryMapperUnitTests {
Map<Integer, SimpleEntityWithoutId> map;
}
+static class EntityWithIntKeyedMapOfMap{
+Map<Integer, EntityWithComplexValueTypeMap> outerMap;
+}
static class EntityWithComplexValueTypeList {
List<SimpleEntityWithoutId> list;
}

View File

@@ -65,6 +65,7 @@ import com.mongodb.DBRef;
* @author Thomas Darimont
* @author Mark Paluch
* @author Pavel Vodrazka
+ * @author David Julia
*/
@ExtendWith(MockitoExtension.class)
class UpdateMapperUnitTests {
@@ -1110,6 +1111,16 @@ class UpdateMapperUnitTests {
.isEqualTo("{\"$set\": {\"map.601218778970110001827396.value\": \"testing\"}}");
}
+@Test // GH-3688
+void multipleNumericKeysInNestedPath() {
+Update update = new Update().set("intKeyedMap.12345.map.0", "testing");
+Document mappedUpdate = mapper.getMappedObject(update.getUpdateObject(),
+context.getPersistentEntity(EntityWithIntKeyedMap.class));
+assertThat(mappedUpdate).isEqualTo("{\"$set\": {\"intKeyedMap.12345.map.0\": \"testing\"}}");
+}
@Test // GH-3566
void mapsObjectClassPropertyFieldInMapValueTypeAsKey() {
@@ -1357,6 +1368,10 @@ class UpdateMapperUnitTests {
Map<Object, NestedDocument> concreteMap;
}
+static class EntityWithIntKeyedMap{
+Map<Integer, EntityWithObjectMap> intKeyedMap;
+}
static class ClassWithEnum {
Allocation allocation;

View File

@@ -27,6 +27,8 @@ import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.dao.IncorrectResultSizeDataAccessException;
import org.springframework.dao.PermissionDeniedDataAccessException;
+import org.springframework.data.domain.PageRequest;
+import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;
import org.springframework.data.domain.Sort.Direction;
import org.springframework.data.mongodb.MongoDatabaseFactory;
@@ -122,6 +124,20 @@ public class QuerydslMongoPredicateExecutorIntegrationTests {
.containsExactly(dave);
}
+@Test // GH-3751
+public void findPage() {
+assertThat(repository
+.findAll(person.lastname.startsWith(oliver.getLastname()).and(person.firstname.startsWith(dave.getFirstname())),
+PageRequest.of(0, 10))
+.getContent()).containsExactly(dave);
+assertThat(repository
+.findAll(person.lastname.startsWith(oliver.getLastname()).and(person.firstname.startsWith(dave.getFirstname())),
+Pageable.unpaged())
+.getContent()).containsExactly(dave);
+}
@Test // DATAMONGO-362, DATAMONGO-1848
public void springDataMongodbQueryShouldAllowJoinOnDBref() {

View File

@@ -0,0 +1,650 @@
[[mongo.aggregation]]
== Aggregation Framework Support
Spring Data MongoDB provides support for the Aggregation Framework introduced to MongoDB in version 2.2.
For further information, see the full https://docs.mongodb.org/manual/aggregation/[reference documentation] of the aggregation framework and other data aggregation tools for MongoDB.
[[mongo.aggregation.basic-concepts]]
=== Basic Concepts
The Aggregation Framework support in Spring Data MongoDB is based on the following key abstractions: `Aggregation`, `AggregationOperation`, and `AggregationResults`.
* `Aggregation`
+
An `Aggregation` represents a MongoDB `aggregate` operation and holds the description of the aggregation pipeline instructions. Aggregations are created by invoking the appropriate `newAggregation(…)` static factory method of the `Aggregation` class, which takes a list of `AggregationOperation` and an optional input class.
+
The actual aggregate operation is run by the `aggregate` method of the `MongoTemplate`, which takes the desired output class as a parameter.
+
* `TypedAggregation`
+
A `TypedAggregation`, just like an `Aggregation`, holds the instructions of the aggregation pipeline and a reference to the input type that is used for mapping domain properties to actual document fields.
+
At runtime, field references get checked against the given input type, considering potential `@Field` annotations and raising errors when referencing nonexistent properties.
+
* `AggregationOperation`
+
An `AggregationOperation` represents a MongoDB aggregation pipeline operation and describes the processing that should be performed in this aggregation step. Although you could manually create an `AggregationOperation`, we recommend using the static factory methods provided by the `Aggregation` class to construct an `AggregationOperation`.
+
* `AggregationResults`
+
`AggregationResults` is the container for the result of an aggregate operation. It provides access to the raw aggregation result (in the form of a `Document`), to the mapped objects, and to other information about the aggregation.
+
The following listing shows the canonical example for using the Spring Data MongoDB support for the MongoDB Aggregation Framework:
+
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
Aggregation agg = newAggregation(
pipelineOP1(),
pipelineOP2(),
pipelineOPn()
);
AggregationResults<OutputType> results = mongoTemplate.aggregate(agg, "INPUT_COLLECTION_NAME", OutputType.class);
List<OutputType> mappedResult = results.getMappedResults();
----
Note that, if you provide an input class as the first parameter to the `newAggregation` method, the `MongoTemplate` derives the name of the input collection from this class. Otherwise, if you do not specify an input class, you must provide the name of the input collection explicitly. If both an input class and an input collection are provided, the latter takes precedence.
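For illustration, the two variants might look as follows. This is a sketch reusing the placeholder `pipelineOP1()` operation from the listing above, with a hypothetical `Product` input class mapped to a "products" collection:
[source,java]
----
// collection name derived from the Product input class
TypedAggregation<Product> typedAgg = newAggregation(Product.class, pipelineOP1());
AggregationResults<OutputType> derived = mongoTemplate.aggregate(typedAgg, OutputType.class);

// collection name provided explicitly (takes precedence over an input class)
Aggregation agg = newAggregation(pipelineOP1());
AggregationResults<OutputType> explicit = mongoTemplate.aggregate(agg, "products", OutputType.class);
----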
[[mongo.aggregation.supported-aggregation-operations]]
=== Supported Aggregation Operations
The MongoDB Aggregation Framework provides the following types of aggregation operations:
* Pipeline Aggregation Operators
* Group/Accumulator Aggregation Operators
* Boolean Aggregation Operators
* Comparison Aggregation Operators
* Arithmetic Aggregation Operators
* String Aggregation Operators
* Date Aggregation Operators
* Array Aggregation Operators
* Conditional Aggregation Operators
* Lookup Aggregation Operators
* Convert Aggregation Operators
* Object Aggregation Operators
* Script Aggregation Operators
At the time of this writing, we provide support for the following Aggregation Operations in Spring Data MongoDB:
.Aggregation Operations currently supported by Spring Data MongoDB
[cols="2*"]
|===
| Pipeline Aggregation Operators
| `bucket`, `bucketAuto`, `count`, `facet`, `geoNear`, `graphLookup`, `group`, `limit`, `lookup`, `match`, `project`, `replaceRoot`, `skip`, `sort`, `unwind`
| Set Aggregation Operators
| `setEquals`, `setIntersection`, `setUnion`, `setDifference`, `setIsSubset`, `anyElementTrue`, `allElementsTrue`
| Group/Accumulator Aggregation Operators
| `addToSet`, `first`, `last`, `max`, `min`, `avg`, `push`, `sum`, `count` (+++*+++), `stdDevPop`, `stdDevSamp`
| Arithmetic Aggregation Operators
| `abs`, `add` (+++*+++ via `plus`), `ceil`, `divide`, `exp`, `floor`, `ln`, `log`, `log10`, `mod`, `multiply`, `pow`, `round`, `sqrt`, `subtract` (+++*+++ via `minus`), `trunc`
| String Aggregation Operators
| `concat`, `substr`, `toLower`, `toUpper`, `strcasecmp`, `indexOfBytes`, `indexOfCP`, `split`, `strLenBytes`, `strLenCP`, `substrCP`, `trim`, `ltrim`, `rtrim`
| Comparison Aggregation Operators
| `eq` (+++*+++ via `is`), `gt`, `gte`, `lt`, `lte`, `ne`
| Array Aggregation Operators
| `arrayElementAt`, `arrayToObject`, `concatArrays`, `filter`, `in`, `indexOfArray`, `isArray`, `range`, `reverseArray`, `reduce`, `size`, `slice`, `zip`
| Literal Operators
| `literal`
| Date Aggregation Operators
| `dayOfYear`, `dayOfMonth`, `dayOfWeek`, `year`, `month`, `week`, `hour`, `minute`, `second`, `millisecond`, `dateToString`, `dateFromString`, `dateFromParts`, `dateToParts`, `isoDayOfWeek`, `isoWeek`, `isoWeekYear`
| Variable Operators
| `map`
| Conditional Aggregation Operators
| `cond`, `ifNull`, `switch`
| Type Aggregation Operators
| `type`
| Convert Aggregation Operators
| `convert`, `toBool`, `toDate`, `toDecimal`, `toDouble`, `toInt`, `toLong`, `toObjectId`, `toString`
| Object Aggregation Operators
| `objectToArray`, `mergeObjects`
| Script Aggregation Operators
| `function`, `accumulator`
|===
+++*+++ The operation is mapped or added by Spring Data MongoDB.
Note that the aggregation operations not listed here are currently not supported by Spring Data MongoDB. Comparison aggregation operators are expressed as `Criteria` expressions.
[[mongo.aggregation.projection]]
=== Projection Expressions
Projection expressions are used to define the fields that are the outcome of a particular aggregation step. Projection expressions can be defined through the `project` method of the `Aggregation` class, either by passing a list of `String` objects or an aggregation framework `Fields` object. The projection can be extended with additional fields through a fluent API by using the `and(String)` method and aliased by using the `as(String)` method.
Note that you can also define fields with aliases by using the `Fields.field` static factory method of the aggregation framework, which you can then use to construct a new `Fields` instance. References to projected fields in later aggregation stages are valid only for the field names of included fields or their aliases (including newly defined fields and their aliases). Fields not included in the projection cannot be referenced in later aggregation stages. The following listings show examples of projection expressions:
.Projection expression examples
====
[source,java]
----
// generates {$project: {name: 1, netPrice: 1}}
project("name", "netPrice")
// generates {$project: {thing2: $thing1}}
project().and("thing1").as("thing2")
// generates {$project: {a: 1, b: 1, thing2: $thing1}}
project("a","b").and("thing1").as("thing2")
----
====
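The `Fields.field` static factory mentioned above can be used to define aliased fields up front, as the following sketch shows (the `tags` and `firstname` source fields are hypothetical):
[source,java]
----
// generates {$project: {tag: "$tags", name: "$firstname"}}
project(Fields.from(
    Fields.field("tag", "tags"),        // alias "tag" for the "tags" field
    Fields.field("name", "firstname"))) // alias "name" for the "firstname" field
----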
.Multi-Stage Aggregation using Projection and Sorting
====
[source,java]
----
// generates {$project: {name: 1, netPrice: 1}}, {$sort: {name: 1}}
project("name", "netPrice"), sort(ASC, "name")
// generates {$project: {name: $firstname}}, {$sort: {name: 1}}
project().and("firstname").as("name"), sort(ASC, "name")
// does not work
project().and("firstname").as("name"), sort(ASC, "firstname")
----
====
More examples for project operations can be found in the `AggregationTests` class. Note that further details regarding the projection expressions can be found in the https://docs.mongodb.org/manual/reference/operator/aggregation/project/#pipe._S_project[corresponding section] of the MongoDB Aggregation Framework reference documentation.
[[mongo.aggregation.facet]]
=== Faceted Classification
As of Version 3.4, MongoDB supports faceted classification by using the Aggregation Framework. A faceted classification uses semantic categories (either general or subject-specific) that are combined to create the full classification entry. Documents flowing through the aggregation pipeline are classified into buckets. A multi-faceted classification enables various aggregations on the same set of input documents, without needing to retrieve the input documents multiple times.
==== Buckets
Bucket operations categorize incoming documents into groups, called buckets, based on a specified expression and bucket boundaries. Bucket operations require a grouping field or a grouping expression. You can define them by using the `bucket()` and `bucketAuto()` methods of the `Aggregation` class. `BucketOperation` and `BucketAutoOperation` can expose accumulations based on aggregation expressions for input documents. You can extend the bucket operation with additional parameters through a fluent API by using the `with…()` methods and the `andOutput(String)` method. You can alias the operation by using the `as(String)` method. Each bucket is represented as a document in the output.
`BucketOperation` takes a defined set of boundaries to group incoming documents into these categories. Boundaries are required to be sorted. The following listing shows some examples of bucket operations:
.Bucket operation examples
====
[source,java]
----
// generates {$bucket: {groupBy: $price, boundaries: [0, 100, 400]}}
bucket("price").withBoundaries(0, 100, 400);
// generates {$bucket: {groupBy: $price, default: "Other", boundaries: [0, 100]}}
bucket("price").withBoundaries(0, 100).withDefault("Other");
// generates {$bucket: {groupBy: $price, boundaries: [0, 100], output: { count: { $sum: 1}}}}
bucket("price").withBoundaries(0, 100).andOutputCount().as("count");
// generates {$bucket: {groupBy: $price, boundaries: [0, 100], output: { titles: { $push: "$title"}}}}
bucket("price").withBoundaries(0, 100).andOutput("title").push().as("titles");
----
====
`BucketAutoOperation` determines boundaries in an attempt to evenly distribute documents into a specified number of buckets. `BucketAutoOperation` optionally takes a granularity value that specifies the https://en.wikipedia.org/wiki/Preferred_number[preferred number] series to use to ensure that the calculated boundary edges end on preferred round numbers or on powers of 10. The following listing shows examples of bucket operations:
.Bucket operation examples
====
[source,java]
----
// generates {$bucketAuto: {groupBy: $price, buckets: 5}}
bucketAuto("price", 5)
// generates {$bucketAuto: {groupBy: $price, buckets: 5, granularity: "E24"}}
bucketAuto("price", 5).withGranularity(Granularities.E24).withDefault("Other");
// generates {$bucketAuto: {groupBy: $price, buckets: 5, output: { titles: { $push: "$title"}}}}
bucketAuto("price", 5).andOutput("title").push().as("titles");
----
====
To create output fields in buckets, bucket operations can use `AggregationExpression` through `andOutput()` and <<mongo.aggregation.projection.expressions, SpEL expressions>> through `andOutputExpression()`.
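As a sketch, an expression-based output field might look like the following (the `netPrice` and `surCharge` input fields are hypothetical):
[source,java]
----
// generates roughly {$bucket: {groupBy: $price, boundaries: [0, 100],
//   output: { total: { $sum: { $add: ["$netPrice", "$surCharge"]}}}}}
bucket("price").withBoundaries(0, 100)
    .andOutputExpression("netPrice + surCharge").sum().as("total");
----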
Note that further details regarding bucket expressions can be found in the https://docs.mongodb.org/manual/reference/operator/aggregation/bucket/[`$bucket` section] and
https://docs.mongodb.org/manual/reference/operator/aggregation/bucketAuto/[`$bucketAuto` section] of the MongoDB Aggregation Framework reference documentation.
==== Multi-faceted Aggregation
Multiple aggregation pipelines can be used to create multi-faceted aggregations that characterize data across multiple dimensions (or facets) within a single aggregation stage. Multi-faceted aggregations provide multiple filters and categorizations to guide data browsing and analysis. A common implementation of faceting is how many online retailers provide ways to narrow down search results by applying filters on product price, manufacturer, size, and other factors.
You can define a `FacetOperation` by using the `facet()` method of the `Aggregation` class. You can customize it with multiple aggregation pipelines by using the `and()` method. Each sub-pipeline has its own field in the output document where its results are stored as an array of documents.
Sub-pipelines can project and filter input documents prior to grouping. Common use cases include extraction of date parts or calculations before categorization. The following listing shows facet operation examples:
.Facet operation examples
====
[source,java]
----
// generates {$facet: {categorizedByPrice: [ { $match: { price: {$exists : true}}}, { $bucketAuto: {groupBy: $price, buckets: 5}}]}}
facet(match(Criteria.where("price").exists(true)), bucketAuto("price", 5)).as("categorizedByPrice"))
// generates {$facet: {categorizedByCountry: [ { $match: { country: {$exists : true}}}, { $sortByCount: "$country"}]}}
facet(match(Criteria.where("country").exists(true)), sortByCount("country")).as("categorizedByCountry"))
// generates {$facet: {categorizedByYear: [
//   { $project: { title: 1, publicationYear: { $year: "$publicationDate"}}},
//   { $bucketAuto: {groupBy: $publicationYear, buckets: 5, output: { titles: {$push:"$title"}}}}
// ]}}
facet(project("title").and("publicationDate").extractYear().as("publicationYear"),
bucketAuto("publicationYear", 5).andOutput("title").push().as("titles"))
.as("categorizedByYear"))
----
====
Note that further details regarding facet operation can be found in the https://docs.mongodb.org/manual/reference/operator/aggregation/facet/[`$facet` section] of the MongoDB Aggregation Framework reference documentation.
[[mongo.aggregation.sort-by-count]]
==== Sort By Count
Sort by count operations group incoming documents based on the value of a specified expression, compute the count of documents in each distinct group, and sort the results by count. This offers a handy shortcut for applying sorting when using <<mongo.aggregation.facet>>. Sort by count operations require a grouping field or grouping expression. The following listing shows a sort by count example:
.Sort by count example
====
[source,java]
----
// generates { $sortByCount: "$country" }
sortByCount("country");
----
====
A sort by count operation is equivalent to the following BSON (Binary JSON):
----
{ $group: { _id: <expression>, count: { $sum: 1 } } },
{ $sort: { count: -1 } }
----
[[mongo.aggregation.projection.expressions]]
==== Spring Expression Support in Projection Expressions
We support the use of SpEL expressions in projection expressions through the `andExpression` method of the `ProjectionOperation` and `BucketOperation` classes. This feature lets you define the desired expression as a SpEL expression. On running a query, the SpEL expression is translated into a corresponding MongoDB projection expression part. This arrangement makes it much easier to express complex calculations.
===== Complex Calculations with SpEL expressions
Consider the following SpEL expression:
[source,java]
----
1 + (q + 1) / (q - 1)
----
The preceding expression is translated into the following projection expression part:
[source,javascript]
----
{ "$add" : [ 1, {
"$divide" : [ {
"$add":["$q", 1]}, {
"$subtract":[ "$q", 1]}
]
}]}
----
You can see examples in more context in <<mongo.aggregation.examples.example5>> and <<mongo.aggregation.examples.example6>>. You can find more usage examples for supported SpEL expression constructs in `SpelExpressionTransformerUnitTests`. The following table shows the SpEL transformations supported by Spring Data MongoDB:
.Supported SpEL transformations
[%header,cols="2"]
|===
| SpEL Expression
| Mongo Expression Part
| a == b
| { $eq : [$a, $b] }
| a != b
| { $ne : [$a , $b] }
| a > b
| { $gt : [$a, $b] }
| a >= b
| { $gte : [$a, $b] }
| a < b
| { $lt : [$a, $b] }
| a <= b
| { $lte : [$a, $b] }
| a + b
| { $add : [$a, $b] }
| a - b
| { $subtract : [$a, $b] }
| a * b
| { $multiply : [$a, $b] }
| a / b
| { $divide : [$a, $b] }
| a^b
| { $pow : [$a, $b] }
| a % b
| { $mod : [$a, $b] }
| a && b
| { $and : [$a, $b] }
| a \|\| b
| { $or : [$a, $b] }
| !a
| { $not : [$a] }
|===
In addition to the transformations shown in the preceding table, you can use standard SpEL operations such as `new` to (for example) create arrays and reference expressions through their names (followed by the arguments to use in brackets). The following example shows how to create an array in this fashion:
[source,java]
----
// { $setEquals : [$a, [5, 8, 13] ] }
.andExpression("setEquals(a, new int[]{5, 8, 13})");
----
[[mongo.aggregation.examples]]
==== Aggregation Framework Examples
The examples in this section demonstrate the usage patterns for the MongoDB Aggregation Framework with Spring Data MongoDB.
[[mongo.aggregation.examples.example1]]
===== Aggregation Framework Example 1
In this introductory example, we want to aggregate a list of tags to get the occurrence count of a particular tag from a MongoDB collection (called `tags`) sorted by the occurrence count in descending order. This example demonstrates the usage of grouping, sorting, projections (selection), and unwinding (result splitting).
[source,java]
----
class TagCount {
String tag;
int n;
}
----
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
Aggregation agg = newAggregation(
project("tags"),
unwind("tags"),
group("tags").count().as("n"),
project("n").and("tag").previousOperation(),
sort(DESC, "n")
);
AggregationResults<TagCount> results = mongoTemplate.aggregate(agg, "tags", TagCount.class);
List<TagCount> tagCount = results.getMappedResults();
----
The preceding listing uses the following algorithm:
. Create a new aggregation by using the `newAggregation` static factory method, to which we pass a list of aggregation operations. These aggregate operations define the aggregation pipeline of our `Aggregation`.
. Use the `project` operation to select the `tags` field (which is an array of strings) from the input collection.
. Use the `unwind` operation to generate a new document for each tag within the `tags` array.
. Use the `group` operation to define a group for each `tags` value for which we aggregate the occurrence count (by using the `count` aggregation operator and collecting the result in a new field called `n`).
. Select the `n` field and create an alias for the ID field generated from the previous group operation (hence the call to `previousOperation()`) with a name of `tag`.
. Use the `sort` operation to sort the resulting list of tags by their occurrence count in descending order.
. Call the `aggregate` method on `MongoTemplate` to let MongoDB perform the actual aggregation operation, with the created `Aggregation` as an argument.
Note that the input collection is explicitly specified as the `tags` parameter to the `aggregate` method. If the name of the input collection is not specified explicitly, it is derived from the input class passed as the first parameter to the `newAggregation` method.
[[mongo.aggregation.examples.example2]]
===== Aggregation Framework Example 2
This example is based on the https://docs.mongodb.org/manual/tutorial/aggregation-examples/#largest-and-smallest-cities-by-state[Largest and Smallest Cities by State] example from the MongoDB Aggregation Framework documentation. We added additional sorting to produce stable results with different MongoDB versions. Here we want to return the smallest and largest cities by population for each state by using the aggregation framework. This example demonstrates grouping, sorting, and projections (selection).
[source,java]
----
class ZipInfo {
String id;
String city;
String state;
@Field("pop") int population;
@Field("loc") double[] location;
}
class City {
String name;
int population;
}
class ZipInfoStats {
String id;
String state;
City biggestCity;
City smallestCity;
}
----
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
TypedAggregation<ZipInfo> aggregation = newAggregation(ZipInfo.class,
group("state", "city")
.sum("population").as("pop"),
sort(ASC, "pop", "state", "city"),
group("state")
.last("city").as("biggestCity")
.last("pop").as("biggestPop")
.first("city").as("smallestCity")
.first("pop").as("smallestPop"),
project()
.and("state").previousOperation()
.and("biggestCity")
.nested(bind("name", "biggestCity").and("population", "biggestPop"))
.and("smallestCity")
.nested(bind("name", "smallestCity").and("population", "smallestPop")),
sort(ASC, "state")
);
AggregationResults<ZipInfoStats> result = mongoTemplate.aggregate(aggregation, ZipInfoStats.class);
ZipInfoStats firstZipInfoStats = result.getMappedResults().get(0);
----
Note that the `ZipInfo` class maps the structure of the given input collection. The `ZipInfoStats` class defines the structure in the desired output format.
The preceding listings use the following algorithm:
. Use the `group` operation to define a group from the input collection. The grouping criteria is the combination of the `state` and `city` fields, which forms the ID structure of the group. We aggregate the value of the `population` property from the grouped elements by using the `sum` operator and save the result in the `pop` field.
. Use the `sort` operation to sort the intermediate result by the `pop`, `state`, and `city` fields, in ascending order, such that the smallest city is at the top and the biggest city is at the bottom of the result. Note that the sorting on `state` and `city` is implicitly performed against the group ID fields (which Spring Data MongoDB handles).
. Use a `group` operation again to group the intermediate result by `state`. Note that `state` again implicitly references a group ID field. We select the name and the population count of the biggest and smallest city with calls to the `last(…)` and `first(…)` operators, respectively, within this `group` operation.
. Select the `state` field from the previous `group` operation. Note that `state` again implicitly references a group ID field. Because we do not want an implicitly generated ID to appear, we exclude the ID from the previous operation by using `and(previousOperation()).exclude()`. Because we want to populate the nested `City` structures in our output class, we have to emit appropriate sub-documents by using the nested method.
. Sort the resulting list of `ZipInfoStats` by their state name in ascending order in the `sort` operation.
Note that we derive the name of the input collection from the `ZipInfo` class passed as the first parameter to the `newAggregation` method.
[[mongo.aggregation.examples.example3]]
===== Aggregation Framework Example 3
This example is based on the https://docs.mongodb.org/manual/tutorial/aggregation-examples/#states-with-populations-over-10-million[States with Populations Over 10 Million] example from the MongoDB Aggregation Framework documentation. We added additional sorting to produce stable results with different MongoDB versions. Here we want to return all states with a population greater than 10 million, using the aggregation framework. This example demonstrates grouping, sorting, and matching (filtering).
[source,java]
----
class StateStats {
@Id String id;
String state;
@Field("totalPop") int totalPopulation;
}
----
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
TypedAggregation<ZipInfo> agg = newAggregation(ZipInfo.class,
group("state").sum("population").as("totalPop"),
sort(ASC, previousOperation(), "totalPop"),
match(where("totalPop").gte(10 * 1000 * 1000))
);
AggregationResults<StateStats> result = mongoTemplate.aggregate(agg, StateStats.class);
List<StateStats> stateStatsList = result.getMappedResults();
----
The preceding listings use the following algorithm:
. Group the input collection by the `state` field and calculate the sum of the `population` field and store the result in the new field `"totalPop"`.
. Sort the intermediate result by the ID reference of the previous group operation and by the `"totalPop"` field, in ascending order.
. Filter the intermediate result by using a `match` operation which accepts a `Criteria` query as an argument.
Note that we derive the name of the input collection from the `ZipInfo` class passed as first parameter to the `newAggregation` method.
[[mongo.aggregation.examples.example4]]
===== Aggregation Framework Example 4
This example demonstrates the use of simple arithmetic operations in the projection operation.
[source,java]
----
class Product {
String id;
String name;
double netPrice;
int spaceUnits;
}
----
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
TypedAggregation<Product> agg = newAggregation(Product.class,
project("name", "netPrice")
.and("netPrice").plus(1).as("netPricePlus1")
.and("netPrice").minus(1).as("netPriceMinus1")
.and("netPrice").multiply(1.19).as("grossPrice")
.and("netPrice").divide(2).as("netPriceDiv2")
.and("spaceUnits").mod(2).as("spaceUnitsMod2")
);
AggregationResults<Document> result = mongoTemplate.aggregate(agg, Document.class);
List<Document> resultList = result.getMappedResults();
----
Note that we derive the name of the input collection from the `Product` class passed as first parameter to the `newAggregation` method.
[[mongo.aggregation.examples.example5]]
===== Aggregation Framework Example 5
This example demonstrates the use of simple arithmetic operations derived from SpEL Expressions in the projection operation.
[source,java]
----
class Product {
String id;
String name;
double netPrice;
int spaceUnits;
}
----
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
TypedAggregation<Product> agg = newAggregation(Product.class,
project("name", "netPrice")
.andExpression("netPrice + 1").as("netPricePlus1")
.andExpression("netPrice - 1").as("netPriceMinus1")
.andExpression("netPrice / 2").as("netPriceDiv2")
.andExpression("netPrice * 1.19").as("grossPrice")
.andExpression("spaceUnits % 2").as("spaceUnitsMod2")
.andExpression("(netPrice * 0.8 + 1.2) * 1.19").as("grossPriceIncludingDiscountAndCharge")
);
AggregationResults<Document> result = mongoTemplate.aggregate(agg, Document.class);
List<Document> resultList = result.getMappedResults();
----
[[mongo.aggregation.examples.example6]]
===== Aggregation Framework Example 6
This example demonstrates the use of complex arithmetic operations derived from SpEL Expressions in the projection operation.
Note: The additional parameters passed to the `andExpression` method can be referenced with indexer expressions according to their position. In this example, we reference the first parameter of the parameters array with `[0]`. When the SpEL expression is transformed into a MongoDB aggregation framework expression, external parameter expressions are replaced with their respective values.
[source,java]
----
class Product {
String id;
String name;
double netPrice;
int spaceUnits;
}
----
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
double shippingCosts = 1.2;
TypedAggregation<Product> agg = newAggregation(Product.class,
project("name", "netPrice")
.andExpression("(netPrice * (1-discountRate) + [0]) * (1+taxRate)", shippingCosts).as("salesPrice")
);
AggregationResults<Document> result = mongoTemplate.aggregate(agg, Document.class);
List<Document> resultList = result.getMappedResults();
----
Note that we can also refer to other fields of the document within the SpEL expression.
[[mongo.aggregation.examples.example7]]
===== Aggregation Framework Example 7
This example uses conditional projection. It is derived from the https://docs.mongodb.com/manual/reference/operator/aggregation/cond/[$cond reference documentation].
[source,java]
----
public class InventoryItem {
@Id int id;
String item;
String description;
int qty;
}
public class InventoryItemProjection {
@Id int id;
String item;
String description;
int qty;
int discount;
}
----
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
TypedAggregation<InventoryItem> agg = newAggregation(InventoryItem.class,
project("item").and("discount")
.applyCondition(ConditionalOperators.when(Criteria.where("qty").gte(250))
.then(30)
.otherwise(20))
.and(ifNull("description", "Unspecified")).as("description")
);
AggregationResults<InventoryItemProjection> result = mongoTemplate.aggregate(agg, "inventory", InventoryItemProjection.class);
List<InventoryItemProjection> stateStatsList = result.getMappedResults();
----
This one-step aggregation uses a projection operation with the `inventory` collection. We project the `discount` field by using a conditional operation for all inventory items that have a `qty` greater than or equal to `250`. A second conditional projection is performed for the `description` field. We apply the `Unspecified` description to all items that either do not have a `description` field or items that have a `null` description.
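Assuming that mapping, the projection corresponds roughly to the following pipeline stage:
[source,javascript]
----
{ "$project": {
    "item": 1,
    "discount": { "$cond": { "if": { "$gte": ["$qty", 250] }, "then": 30, "else": 20 } },
    "description": { "$ifNull": ["$description", "Unspecified"] }
}}
----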
As of MongoDB 3.6, it is possible to exclude fields from the projection by using a conditional expression.
.Conditional aggregation projection
====
[source,java]
----
TypedAggregation<Book> agg = Aggregation.newAggregation(Book.class,
project("title")
.and(ConditionalOperators.when(ComparisonOperators.valueOf("author.middle") <1>
.equalToValue("")) <2>
.then("$$REMOVE") <3>
.otherwiseValueOf("author.middle") <4>
)
.as("author.middle"));
----
<1> If the value of the field `author.middle`
<2> does not contain a value,
<3> then use https://docs.mongodb.com/manual/reference/aggregation-variables/#variable.REMOVE[``$$REMOVE``] to exclude the field.
<4> Otherwise, add the field value of `author.middle`.
====

View File

@@ -0,0 +1,115 @@
[[gridfs]]
== GridFS Support
MongoDB supports storing binary files inside its filesystem, GridFS. Spring Data MongoDB provides a `GridFsOperations` interface as well as the corresponding implementation, `GridFsTemplate`, to let you interact with the filesystem. You can set up a `GridFsTemplate` instance by handing it a `MongoDatabaseFactory` as well as a `MongoConverter`, as the following example shows:
.JavaConfig setup for a GridFsTemplate
====
[source,java]
----
class GridFsConfiguration extends AbstractMongoClientConfiguration {
// … further configuration omitted
@Bean
public GridFsTemplate gridFsTemplate() {
return new GridFsTemplate(mongoDbFactory(), mappingMongoConverter());
}
}
----
====
The corresponding XML configuration follows:
.XML configuration for a GridFsTemplate
====
[source,xml]
----
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:mongo="http://www.springframework.org/schema/data/mongo"
xsi:schemaLocation="http://www.springframework.org/schema/data/mongo
https://www.springframework.org/schema/data/mongo/spring-mongo.xsd
http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd">
<mongo:db-factory id="mongoDbFactory" dbname="database" />
<mongo:mapping-converter id="converter" />
<bean class="org.springframework.data.mongodb.gridfs.GridFsTemplate">
<constructor-arg ref="mongoDbFactory" />
<constructor-arg ref="converter" />
</bean>
</beans>
----
====
The template can now be injected and used to perform storage and retrieval operations, as the following example shows:
.Using GridFsTemplate to store files
====
[source,java]
----
class GridFsClient {
@Autowired
GridFsOperations operations;
@Test
public void storeFileToGridFs() {
FileMetadata metadata = new FileMetadata();
// populate metadata
Resource file = … // lookup File or Resource
operations.store(file.getInputStream(), "filename.txt", metadata);
}
}
----
====
The `store(…)` operations take an `InputStream`, a filename, and (optionally) metadata information about the file to store. The metadata can be an arbitrary object, which will be marshaled by the `MongoConverter` configured with the `GridFsTemplate`. Alternatively, you can also provide a `Document`.
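For example, storing a file with metadata given as a plain `org.bson.Document` might look like the following sketch (the metadata field names are hypothetical):
[source,java]
----
operations.store(file.getInputStream(), "filename.txt",
    new Document("category", "text").append("owner", "spring"));
----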
You can read files from the filesystem through either the `find(…)` or the `getResources(…)` methods. Let's have a look at the `find(…)` methods first. You can either find a single file or multiple files that match a `Query`. You can use the `GridFsCriteria` helper class to define queries. It provides static factory methods to encapsulate default metadata fields (such as `whereFilename()` and `whereContentType()`) or a custom one through `whereMetaData()`. The following example shows how to use `GridFsTemplate` to query for files:
.Using GridFsTemplate to query for files
====
[source,java]
----
class GridFsClient {
@Autowired
GridFsOperations operations;
@Test
public void findFilesInGridFs() {
GridFSFindIterable result = operations.find(query(whereFilename().is("filename.txt")));
}
}
----
====
NOTE: Currently, MongoDB does not support defining sort criteria when retrieving files from GridFS. For this reason, any sort criteria defined on the `Query` instance handed into the `find(…)` method are disregarded.
The other option to read files from GridFS is to use the methods introduced by the `ResourcePatternResolver` interface. They allow handing an Ant path into the method and can thus retrieve files matching the given pattern. The following example shows how to use `GridFsTemplate` to read files:
.Using GridFsTemplate to read files
====
[source,java]
----
class GridFsClient {
@Autowired
GridFsOperations operations;
@Test
public void readFilesFromGridFs() {
GridFsResource[] txtFiles = operations.getResources("*.txt");
}
}
----
====
`GridFsOperations` extends `ResourcePatternResolver` and lets the `GridFsTemplate` (for example) be plugged into an `ApplicationContext` to read Spring Config files from a MongoDB database.
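A minimal sketch of that usage, assuming a configured `gridFsTemplate` bean and a hypothetical file pattern:
[source,java]
----
ResourcePatternResolver resolver = gridFsTemplate; // GridFsTemplate implements ResourcePatternResolver
Resource[] resources = resolver.getResources("*.xml"); // resolves matching files as GridFsResource instances
----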

View File

@@ -2338,656 +2338,7 @@ GroupByResults<XObject> results = mongoTemplate.group(where("x").gt(0),
keyFunction("classpath:keyFunction.js").initialDocument("{ count: 0 }").reduceFunction("classpath:groupReduce.js"), XObject.class);
----
[[mongo.aggregation]]
== Aggregation Framework Support
Spring Data MongoDB provides support for the Aggregation Framework introduced to MongoDB in version 2.2.
For further information, see the full https://docs.mongodb.org/manual/aggregation/[reference documentation] of the aggregation framework and other data aggregation tools for MongoDB.
[[mongo.aggregation.basic-concepts]]
=== Basic Concepts
The Aggregation Framework support in Spring Data MongoDB is based on the following key abstractions: `Aggregation`, `AggregationOperation`, and `AggregationResults`.
* `Aggregation`
+
An `Aggregation` represents a MongoDB `aggregate` operation and holds the description of the aggregation pipeline instructions. Aggregations are created by invoking the appropriate `newAggregation(…)` static factory method of the `Aggregation` class, which takes a list of `AggregateOperation` and an optional input class.
+
The actual aggregate operation is run by the `aggregate` method of the `MongoTemplate`, which takes the desired output class as a parameter.
+
* `TypedAggregation`
+
A `TypedAggregation`, just like an `Aggregation`, holds the instructions of the aggregation pipeline and a reference to the input type, that is used for mapping domain properties to actual document fields.
+
At runtime, field references get checked against the given input type, considering potential `@Field` annotations and raising errors when referencing nonexistent properties.
+
* `AggregationOperation`
+
An `AggregationOperation` represents a MongoDB aggregation pipeline operation and describes the processing that should be performed in this aggregation step. Although you could manually create an `AggregationOperation`, we recommend using the static factory methods provided by the `Aggregate` class to construct an `AggregateOperation`.
+
* `AggregationResults`
+
`AggregationResults` is the container for the result of an aggregate operation. It provides access to the raw aggregation result, in the form of a `Document` to the mapped objects and other information about the aggregation.
+
The following listing shows the canonical example for using the Spring Data MongoDB support for the MongoDB Aggregation Framework:
+
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
Aggregation agg = newAggregation(
pipelineOP1(),
pipelineOP2(),
pipelineOPn()
);
AggregationResults<OutputType> results = mongoTemplate.aggregate(agg, "INPUT_COLLECTION_NAME", OutputType.class);
List<OutputType> mappedResult = results.getMappedResults();
----
Note that, if you provide an input class as the first parameter to the `newAggregation` method, the `MongoTemplate` derives the name of the input collection from this class. Otherwise, if you do not not specify an input class, you must provide the name of the input collection explicitly. If both an input class and an input collection are provided, the latter takes precedence.
[[mongo.aggregation.supported-aggregation-operations]]
=== Supported Aggregation Operations
The MongoDB Aggregation Framework provides the following types of aggregation operations:
* Pipeline Aggregation Operators
* Group Aggregation Operators
* Boolean Aggregation Operators
* Comparison Aggregation Operators
* Arithmetic Aggregation Operators
* String Aggregation Operators
* Date Aggregation Operators
* Array Aggregation Operators
* Conditional Aggregation Operators
* Lookup Aggregation Operators
* Convert Aggregation Operators
* Object Aggregation Operators
* Script Aggregation Operators
At the time of this writing, we provide support for the following Aggregation Operations in Spring Data MongoDB:
.Aggregation Operations currently supported by Spring Data MongoDB
[cols="2*"]
|===
| Pipeline Aggregation Operators
| `bucket`, `bucketAuto`, `count`, `facet`, `geoNear`, `graphLookup`, `group`, `limit`, `lookup`, `match`, `project`, `replaceRoot`, `skip`, `sort`, `unwind`
| Set Aggregation Operators
| `setEquals`, `setIntersection`, `setUnion`, `setDifference`, `setIsSubset`, `anyElementTrue`, `allElementsTrue`
| Group Aggregation Operators
| `addToSet`, `first`, `last`, `max`, `min`, `avg`, `push`, `sum`, `(*count)`, `stdDevPop`, `stdDevSamp`
| Arithmetic Aggregation Operators
| `abs`, `add` (*via `plus`), `ceil`, `divide`, `exp`, `floor`, `ln`, `log`, `log10`, `mod`, `multiply`, `pow`, `round`, `sqrt`, `subtract` (*via `minus`), `trunc`
| String Aggregation Operators
| `concat`, `substr`, `toLower`, `toUpper`, `stcasecmp`, `indexOfBytes`, `indexOfCP`, `split`, `strLenBytes`, `strLenCP`, `substrCP`, `trim`, `ltrim`, `rtim`
| Comparison Aggregation Operators
| `eq` (*via: `is`), `gt`, `gte`, `lt`, `lte`, `ne`
| Array Aggregation Operators
| `arrayElementAt`, `arrayToObject`, `concatArrays`, `filter`, `in`, `indexOfArray`, `isArray`, `range`, `reverseArray`, `reduce`, `size`, `slice`, `zip`
| Literal Operators
| `literal`
| Date Aggregation Operators
| `dayOfYear`, `dayOfMonth`, `dayOfWeek`, `year`, `month`, `week`, `hour`, `minute`, `second`, `millisecond`, `dateToString`, `dateFromString`, `dateFromParts`, `dateToParts`, `isoDayOfWeek`, `isoWeek`, `isoWeekYear`
| Variable Operators
| `map`
| Conditional Aggregation Operators
| `cond`, `ifNull`, `switch`
| Type Aggregation Operators
| `type`
| Convert Aggregation Operators
| `convert`, `toBool`, `toDate`, `toDecimal`, `toDouble`, `toInt`, `toLong`, `toObjectId`, `toString`
| Object Aggregation Operators
| `objectToArray`, `mergeObjects`
| Script Aggregation Operators
| `function`, `accumulator`
|===
* The operation is mapped or added by Spring Data MongoDB.
Note that the aggregation operations not listed here are currently not supported by Spring Data MongoDB. Comparison aggregation operators are expressed as `Criteria` expressions.
[[mongo.aggregation.projection]]
=== Projection Expressions
Projection expressions are used to define the fields that are the outcome of a particular aggregation step. Projection expressions can be defined through the `project` method of the `Aggregation` class, either by passing a list of `String` objects or an aggregation framework `Fields` object. The projection can be extended with additional fields through a fluent API by using the `and(String)` method and aliased by using the `as(String)` method.
Note that you can also define fields with aliases by using the `Fields.field` static factory method of the aggregation framework, which you can then use to construct a new `Fields` instance (a short sketch of this variant follows the examples below). References to projected fields in later aggregation stages are valid only for the field names of included fields or their aliases (including newly defined fields and their aliases). Fields not included in the projection cannot be referenced in later aggregation stages. The following listings show examples of projection expressions:
.Projection expression examples
====
[source,java]
----
// generates {$project: {name: 1, netPrice: 1}}
project("name", "netPrice")
// generates {$project: {thing2: $thing1}}
project().and("thing1").as("thing2")
// generates {$project: {a: 1, b: 1, thing2: $thing1}}
project("a","b").and("thing1").as("thing2")
----
====
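The `Fields.field` variant mentioned above could look like the following sketch (the field names are illustrative):
[source,java]
----
// generates {$project: {netPrice: "$price"}}
project(Fields.from(Fields.field("netPrice", "price")))
----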
.Multi-Stage Aggregation using Projection and Sorting
====
[source,java]
----
// generates {$project: {name: 1, netPrice: 1}}, {$sort: {name: 1}}
project("name", "netPrice"), sort(ASC, "name")
// generates {$project: {name: $firstname}}, {$sort: {name: 1}}
project().and("firstname").as("name"), sort(ASC, "name")
// does not work
project().and("firstname").as("name"), sort(ASC, "firstname")
----
====
More examples for project operations can be found in the `AggregationTests` class. Note that further details regarding the projection expressions can be found in the https://docs.mongodb.org/manual/reference/operator/aggregation/project/#pipe._S_project[corresponding section] of the MongoDB Aggregation Framework reference documentation.
[[mongo.aggregation.facet]]
=== Faceted Classification
As of Version 3.4, MongoDB supports faceted classification by using the Aggregation Framework. A faceted classification uses semantic categories (either general or subject-specific) that are combined to create the full classification entry. Documents flowing through the aggregation pipeline are classified into buckets. A multi-faceted classification enables various aggregations on the same set of input documents, without needing to retrieve the input documents multiple times.
==== Buckets
Bucket operations categorize incoming documents into groups, called buckets, based on a specified expression and bucket boundaries. Bucket operations require a grouping field or a grouping expression. You can define them by using the `bucket()` and `bucketAuto()` methods of the `Aggregation` class. `BucketOperation` and `BucketAutoOperation` can expose accumulations based on aggregation expressions for input documents. You can extend the bucket operation with additional parameters through a fluent API by using the `with…()` methods and the `andOutput(String)` method. You can alias the operation by using the `as(String)` method. Each bucket is represented as a document in the output.
`BucketOperation` takes a defined set of boundaries to group incoming documents into these categories. Boundaries are required to be sorted. The following listing shows some examples of bucket operations:
.Bucket operation examples
====
[source,java]
----
// generates {$bucket: {groupBy: $price, boundaries: [0, 100, 400]}}
bucket("price").withBoundaries(0, 100, 400);
// generates {$bucket: {groupBy: $price, default: "Other", boundaries: [0, 100]}}
bucket("price").withBoundaries(0, 100).withDefault("Other");
// generates {$bucket: {groupBy: $price, boundaries: [0, 100], output: { count: { $sum: 1}}}}
bucket("price").withBoundaries(0, 100).andOutputCount().as("count");
// generates {$bucket: {groupBy: $price, boundaries: [0, 100], output: { titles: { $push: "$title"}}}}
bucket("price").withBoundaries(0, 100).andOutput("title").push().as("titles");
----
====
`BucketAutoOperation` determines boundaries in an attempt to evenly distribute documents into a specified number of buckets. `BucketAutoOperation` optionally takes a granularity value that specifies the https://en.wikipedia.org/wiki/Preferred_number[preferred number] series to use to ensure that the calculated boundary edges end on preferred round numbers or on powers of 10. The following listing shows examples of bucket operations:
.Bucket auto operation examples
====
[source,java]
----
// generates {$bucketAuto: {groupBy: $price, buckets: 5}}
bucketAuto("price", 5)
// generates {$bucketAuto: {groupBy: $price, buckets: 5, granularity: "E24"}}
bucketAuto("price", 5).withGranularity(Granularities.E24).withDefault("Other");
// generates {$bucketAuto: {groupBy: $price, buckets: 5, output: { titles: { $push: "$title"}}}
bucketAuto("price", 5).andOutput("title").push().as("titles");
----
====
To create output fields in buckets, bucket operations can use `AggregationExpression` through `andOutput()` and <<mongo.aggregation.projection.expressions, SpEL expressions>> through `andOutputExpression()`.
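The following sketch (with assumed field names `title` and `netPrice`) combines a plain accumulator with a SpEL-based output field:
[source,java]
----
// generates {$bucket: {groupBy: "$price", boundaries: [0, 100],
//   output: {titles: {$push: "$title"}, grossSum: {$sum: {$multiply: ["$netPrice", 1.19]}}}}}
bucket("price").withBoundaries(0, 100)
  .andOutput("title").push().as("titles")
  .andOutputExpression("netPrice * 1.19").sum().as("grossSum");
----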
Note that further details regarding bucket expressions can be found in the https://docs.mongodb.org/manual/reference/operator/aggregation/bucket/[`$bucket` section] and
https://docs.mongodb.org/manual/reference/operator/aggregation/bucketAuto/[`$bucketAuto` section] of the MongoDB Aggregation Framework reference documentation.
==== Multi-faceted Aggregation
Multiple aggregation pipelines can be used to create multi-faceted aggregations that characterize data across multiple dimensions (or facets) within a single aggregation stage. Multi-faceted aggregations provide multiple filters and categorizations to guide data browsing and analysis. A common application of faceting is the way many online retailers let customers narrow down search results by applying filters on product price, manufacturer, size, and other attributes.
You can define a `FacetOperation` by using the `facet()` method of the `Aggregation` class. You can customize it with multiple aggregation pipelines by using the `and()` method. Each sub-pipeline has its own field in the output document where its results are stored as an array of documents.
Sub-pipelines can project and filter input documents prior to grouping. Common use cases include extraction of date parts or calculations before categorization. The following listing shows facet operation examples:
.Facet operation examples
====
[source,java]
----
// generates {$facet: {categorizedByPrice: [ { $match: { price: {$exists : true}}}, { $bucketAuto: {groupBy: $price, buckets: 5}}]}}
facet(match(Criteria.where("price").exists(true)), bucketAuto("price", 5)).as("categorizedByPrice"))
// generates {$facet: {categorizedByCountry: [ { $match: { country: {$exists : true}}}, { $sortByCount: "$country"}]}}
facet(match(Criteria.where("country").exists(true)), sortByCount("country")).as("categorizedByCountry"))
// generates {$facet: {categorizedByYear: [
//   { $project: { title: 1, publicationYear: { $year: "$publicationDate"}}},
//   { $bucketAuto: {groupBy: $publicationYear, buckets: 5, output: { titles: {$push:"$title"}}}}
// ]}}
facet(project("title").and("publicationDate").extractYear().as("publicationYear"),
      bucketAuto("publicationYear", 5).andOutput("title").push().as("titles"))
  .as("categorizedByYear")
----
====
Note that further details regarding the facet operation can be found in the https://docs.mongodb.org/manual/reference/operator/aggregation/facet/[`$facet` section] of the MongoDB Aggregation Framework reference documentation.
[[mongo.aggregation.sort-by-count]]
==== Sort By Count
Sort by count operations group incoming documents based on the value of a specified expression, compute the count of documents in each distinct group, and sort the results by count. This operation offers a handy shortcut to apply sorting when using <<mongo.aggregation.facet>>. Sort by count operations require a grouping field or grouping expression. The following listing shows a sort by count example:
.Sort by count example
====
[source,java]
----
// generates { $sortByCount: "$country" }
sortByCount("country");
----
====
A sort by count operation is equivalent to the following BSON (Binary JSON):
----
{ $group: { _id: <expression>, count: { $sum: 1 } } },
{ $sort: { count: -1 } }
----
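Using the builder API shown earlier, the same two stages could be sketched as follows:
[source,java]
----
// equivalent to sortByCount("country")
group("country").count().as("count"),
sort(DESC, "count")
----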
[[mongo.aggregation.projection.expressions]]
==== Spring Expression Support in Projection Expressions
We support the use of SpEL expressions in projection expressions through the `andExpression` method of the `ProjectionOperation` and `BucketOperation` classes. This feature lets you define the desired expression as a SpEL expression. On running a query, the SpEL expression is translated into a corresponding MongoDB projection expression part. This arrangement makes it much easier to express complex calculations.
===== Complex Calculations with SpEL expressions
Consider the following SpEL expression:
[source,java]
----
1 + (q + 1) / (q - 1)
----
The preceding expression is translated into the following projection expression part:
[source,javascript]
----
{ "$add" : [ 1, {
"$divide" : [ {
"$add":["$q", 1]}, {
"$subtract":[ "$q", 1]}
]
}]}
----
You can see examples in more context in <<mongo.aggregation.examples.example5>> and <<mongo.aggregation.examples.example6>>. You can find more usage examples for supported SpEL expression constructs in `SpelExpressionTransformerUnitTests`. The following table shows the SpEL transformations supported by Spring Data MongoDB:
.Supported SpEL transformations
[%header,cols="2"]
|===
| SpEL Expression
| Mongo Expression Part
| a == b
| { $eq : [$a, $b] }
| a != b
| { $ne : [$a , $b] }
| a > b
| { $gt : [$a, $b] }
| a >= b
| { $gte : [$a, $b] }
| a < b
| { $lt : [$a, $b] }
| a <= b
| { $lte : [$a, $b] }
| a + b
| { $add : [$a, $b] }
| a - b
| { $subtract : [$a, $b] }
| a * b
| { $multiply : [$a, $b] }
| a / b
| { $divide : [$a, $b] }
| a^b
| { $pow : [$a, $b] }
| a % b
| { $mod : [$a, $b] }
| a && b
| { $and : [$a, $b] }
| a \|\| b
| { $or : [$a, $b] }
| !a
| { $not : [$a] }
|===
In addition to the transformations shown in the preceding table, you can use standard SpEL operations such as `new` to (for example) create arrays and reference expressions through their names (followed by the arguments to use in brackets). The following example shows how to create an array in this fashion:
[source,java]
----
// { $setEquals : [$a, [5, 8, 13] ] }
.andExpression("setEquals(a, new int[]{5, 8, 13})");
----
[[mongo.aggregation.examples]]
==== Aggregation Framework Examples
The examples in this section demonstrate the usage patterns for the MongoDB Aggregation Framework with Spring Data MongoDB.
[[mongo.aggregation.examples.example1]]
===== Aggregation Framework Example 1
In this introductory example, we want to aggregate a list of tags to get the occurrence count of a particular tag from a MongoDB collection (called `tags`) sorted by the occurrence count in descending order. This example demonstrates the usage of grouping, sorting, projections (selection), and unwinding (result splitting).
[source,java]
----
class TagCount {
String tag;
int n;
}
----
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
Aggregation agg = newAggregation(
project("tags"),
unwind("tags"),
group("tags").count().as("n"),
project("n").and("tag").previousOperation(),
sort(DESC, "n")
);
AggregationResults<TagCount> results = mongoTemplate.aggregate(agg, "tags", TagCount.class);
List<TagCount> tagCount = results.getMappedResults();
----
The preceding listing uses the following algorithm:
. Create a new aggregation by using the `newAggregation` static factory method, to which we pass a list of aggregation operations. These aggregate operations define the aggregation pipeline of our `Aggregation`.
. Use the `project` operation to select the `tags` field (which is an array of strings) from the input collection.
. Use the `unwind` operation to generate a new document for each tag within the `tags` array.
. Use the `group` operation to define a group for each `tags` value for which we aggregate the occurrence count (by using the `count` aggregation operator and collecting the result in a new field called `n`).
. Select the `n` field and create an alias for the ID field generated from the previous group operation (hence the call to `previousOperation()`) with a name of `tag`.
. Use the `sort` operation to sort the resulting list of tags by their occurrence count in descending order.
. Call the `aggregate` method on `MongoTemplate` to let MongoDB perform the actual aggregation operation, with the created `Aggregation` as an argument.
Note that the input collection is explicitly specified as the `tags` parameter to the `aggregate` method. If the name of the input collection is not specified explicitly, it is derived from the input class passed as the first parameter to the `newAggregation` method.
[[mongo.aggregation.examples.example2]]
===== Aggregation Framework Example 2
This example is based on the https://docs.mongodb.org/manual/tutorial/aggregation-examples/#largest-and-smallest-cities-by-state[Largest and Smallest Cities by State] example from the MongoDB Aggregation Framework documentation. We added additional sorting to produce stable results with different MongoDB versions. Here we want to return the smallest and largest cities by population for each state by using the aggregation framework. This example demonstrates grouping, sorting, and projections (selection).
[source,java]
----
class ZipInfo {
String id;
String city;
String state;
@Field("pop") int population;
@Field("loc") double[] location;
}
class City {
String name;
int population;
}
class ZipInfoStats {
String id;
String state;
City biggestCity;
City smallestCity;
}
----
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
TypedAggregation<ZipInfo> aggregation = newAggregation(ZipInfo.class,
group("state", "city")
.sum("population").as("pop"),
sort(ASC, "pop", "state", "city"),
group("state")
.last("city").as("biggestCity")
.last("pop").as("biggestPop")
.first("city").as("smallestCity")
.first("pop").as("smallestPop"),
project()
.and("state").previousOperation()
.and("biggestCity")
.nested(bind("name", "biggestCity").and("population", "biggestPop"))
.and("smallestCity")
.nested(bind("name", "smallestCity").and("population", "smallestPop")),
sort(ASC, "state")
);
AggregationResults<ZipInfoStats> result = mongoTemplate.aggregate(aggregation, ZipInfoStats.class);
ZipInfoStats firstZipInfoStats = result.getMappedResults().get(0);
----
Note that the `ZipInfo` class maps the structure of the given input collection. The `ZipInfoStats` class defines the structure in the desired output format.
The preceding listings use the following algorithm:
. Use the `group` operation to define a group from the input collection. The grouping criterion is the combination of the `state` and `city` fields, which forms the ID structure of the group. We aggregate the value of the `population` property from the grouped elements by using the `sum` operator and save the result in the `pop` field.
. Use the `sort` operation to sort the intermediate result by the `pop`, `state`, and `city` fields, in ascending order, such that the smallest city is at the top and the biggest city is at the bottom of the result. Note that the sorting on `state` and `city` is implicitly performed against the group ID fields (which Spring Data MongoDB handles).
. Use a `group` operation again to group the intermediate result by `state`. Note that `state` again implicitly references a group ID field. We select the name and the population count of the biggest and smallest city with calls to the `last(…)` and `first(…)` operators, respectively, within this `group` operation.
. Select the `state` field from the previous `group` operation. Note that `state` again implicitly references a group ID field. Because we do not want an implicitly generated ID to appear, we exclude the ID from the previous operation by using `and(previousOperation()).exclude()`. Because we want to populate the nested `City` structures in our output class, we have to emit appropriate sub-documents by using the `nested` method.
. Sort the resulting list of `ZipInfoStats` by their state name in ascending order in the `sort` operation.
Note that we derive the name of the input collection from the `ZipInfo` class passed as the first parameter to the `newAggregation` method.
[[mongo.aggregation.examples.example3]]
===== Aggregation Framework Example 3
This example is based on the https://docs.mongodb.org/manual/tutorial/aggregation-examples/#states-with-populations-over-10-million[States with Populations Over 10 Million] example from the MongoDB Aggregation Framework documentation. We added additional sorting to produce stable results with different MongoDB versions. Here we want to return all states with a population greater than 10 million, using the aggregation framework. This example demonstrates grouping, sorting, and matching (filtering).
[source,java]
----
class StateStats {
@Id String id;
String state;
@Field("totalPop") int totalPopulation;
}
----
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
TypedAggregation<ZipInfo> agg = newAggregation(ZipInfo.class,
group("state").sum("population").as("totalPop"),
sort(ASC, previousOperation(), "totalPop"),
match(where("totalPop").gte(10 * 1000 * 1000))
);
AggregationResults<StateStats> result = mongoTemplate.aggregate(agg, StateStats.class);
List<StateStats> stateStatsList = result.getMappedResults();
----
The preceding listings use the following algorithm:
. Group the input collection by the `state` field and calculate the sum of the `population` field and store the result in the new field `"totalPop"`.
. Sort the intermediate result by the id-reference of the previous group operation in addition to the `"totalPop"` field in ascending order.
. Filter the intermediate result by using a `match` operation which accepts a `Criteria` query as an argument.
Note that we derive the name of the input collection from the `ZipInfo` class passed as the first parameter to the `newAggregation` method.
[[mongo.aggregation.examples.example4]]
===== Aggregation Framework Example 4
This example demonstrates the use of simple arithmetic operations in the projection operation.
[source,java]
----
class Product {
String id;
String name;
double netPrice;
int spaceUnits;
}
----
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
TypedAggregation<Product> agg = newAggregation(Product.class,
project("name", "netPrice")
.and("netPrice").plus(1).as("netPricePlus1")
.and("netPrice").minus(1).as("netPriceMinus1")
.and("netPrice").multiply(1.19).as("grossPrice")
.and("netPrice").divide(2).as("netPriceDiv2")
.and("spaceUnits").mod(2).as("spaceUnitsMod2")
);
AggregationResults<Document> result = mongoTemplate.aggregate(agg, Document.class);
List<Document> resultList = result.getMappedResults();
----
Note that we derive the name of the input collection from the `Product` class passed as the first parameter to the `newAggregation` method.
[[mongo.aggregation.examples.example5]]
===== Aggregation Framework Example 5
This example demonstrates the use of simple arithmetic operations derived from SpEL Expressions in the projection operation.
[source,java]
----
class Product {
String id;
String name;
double netPrice;
int spaceUnits;
}
----
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
TypedAggregation<Product> agg = newAggregation(Product.class,
project("name", "netPrice")
.andExpression("netPrice + 1").as("netPricePlus1")
.andExpression("netPrice - 1").as("netPriceMinus1")
.andExpression("netPrice / 2").as("netPriceDiv2")
.andExpression("netPrice * 1.19").as("grossPrice")
.andExpression("spaceUnits % 2").as("spaceUnitsMod2")
.andExpression("(netPrice * 0.8 + 1.2) * 1.19").as("grossPriceIncludingDiscountAndCharge")
);
AggregationResults<Document> result = mongoTemplate.aggregate(agg, Document.class);
List<Document> resultList = result.getMappedResults();
----
[[mongo.aggregation.examples.example6]]
===== Aggregation Framework Example 6
This example demonstrates the use of complex arithmetic operations derived from SpEL Expressions in the projection operation.
NOTE: The additional parameters passed to the `andExpression` method can be referenced with indexer expressions according to their position. In this example, we reference the first parameter of the parameters array with `[0]`. When the SpEL expression is transformed into a MongoDB aggregation framework expression, external parameter expressions are replaced with their respective values.
[source,java]
----
class Product {
String id;
String name;
double netPrice;
int spaceUnits;
}
----
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
double shippingCosts = 1.2;
TypedAggregation<Product> agg = newAggregation(Product.class,
project("name", "netPrice")
.andExpression("(netPrice * (1-discountRate) + [0]) * (1+taxRate)", shippingCosts).as("salesPrice")
);
AggregationResults<Document> result = mongoTemplate.aggregate(agg, Document.class);
List<Document> resultList = result.getMappedResults();
----
Note that we can also refer to other fields of the document within the SpEL expression.
[[mongo.aggregation.examples.example7]]
===== Aggregation Framework Example 7
This example uses conditional projection. It is derived from the https://docs.mongodb.com/manual/reference/operator/aggregation/cond/[$cond reference documentation].
[source,java]
----
public class InventoryItem {
@Id int id;
String item;
String description;
int qty;
}
public class InventoryItemProjection {
@Id int id;
String item;
String description;
int qty;
int discount;
}
----
[source,java]
----
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
TypedAggregation<InventoryItem> agg = newAggregation(InventoryItem.class,
project("item").and("discount")
.applyCondition(ConditionalOperators.Cond.newBuilder().when(Criteria.where("qty").gte(250))
.then(30)
.otherwise(20))
.and(ifNull("description", "Unspecified")).as("description")
);
AggregationResults<InventoryItemProjection> result = mongoTemplate.aggregate(agg, "inventory", InventoryItemProjection.class);
List<InventoryItemProjection> stateStatsList = result.getMappedResults();
----
This one-step aggregation uses a projection operation with the `inventory` collection. We project the `discount` field by using a conditional operation for all inventory items that have a `qty` greater than or equal to `250`. A second conditional projection is performed for the `description` field. We apply the `Unspecified` description to all items that either do not have a `description` field or that have a `null` description.
As of MongoDB 3.6, it is possible to exclude fields from the projection by using a conditional expression.
.Conditional aggregation projection
====
[source,java]
----
TypedAggregation<Book> agg = Aggregation.newAggregation(Book.class,
project("title")
.and(ConditionalOperators.when(ComparisonOperators.valueOf("author.middle") <1>
.equalToValue("")) <2>
.then("$$REMOVE") <3>
.otherwiseValueOf("author.middle") <4>
)
.as("author.middle"));
----
<1> If the value of the field `author.middle`
<2> does not contain a value,
<3> then use https://docs.mongodb.com/manual/reference/aggregation-variables/#variable.REMOVE[``$$REMOVE``] to exclude the field.
<4> Otherwise, add the field value of `author.middle`.
====
include::aggregation-framework.adoc[]
[[mongo-template.index-and-collections]]
== Index and Collection Management
@@ -3180,121 +2531,6 @@ boolean hasIndex = template.execute("geolocation", new CollectionCallback<Boolean>() {
});
----
[[gridfs]]
== GridFS Support
MongoDB supports storing binary files inside its filesystem, GridFS. Spring Data MongoDB provides a `GridFsOperations` interface as well as the corresponding implementation, `GridFsTemplate`, to let you interact with the filesystem. You can set up a `GridFsTemplate` instance by handing it a `MongoDatabaseFactory` as well as a `MongoConverter`, as the following example shows:
.JavaConfig setup for a GridFsTemplate
====
[source,java]
----
class GridFsConfiguration extends AbstractMongoClientConfiguration {
// … further configuration omitted
@Bean
public GridFsTemplate gridFsTemplate() {
return new GridFsTemplate(mongoDbFactory(), mappingMongoConverter());
}
}
----
====
The corresponding XML configuration follows:
.XML configuration for a GridFsTemplate
====
[source,xml]
----
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:mongo="http://www.springframework.org/schema/data/mongo"
xsi:schemaLocation="http://www.springframework.org/schema/data/mongo
https://www.springframework.org/schema/data/mongo/spring-mongo.xsd
http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd">
<mongo:db-factory id="mongoDbFactory" dbname="database" />
<mongo:mapping-converter id="converter" />
<bean class="org.springframework.data.mongodb.gridfs.GridFsTemplate">
<constructor-arg ref="mongoDbFactory" />
<constructor-arg ref="converter" />
</bean>
</beans>
----
====
The template can now be injected and used to perform storage and retrieval operations, as the following example shows:
.Using GridFsTemplate to store files
====
[source,java]
----
class GridFsClient {
@Autowired
GridFsOperations operations;
@Test
public void storeFileToGridFs() {
FileMetadata metadata = new FileMetadata();
// populate metadata
Resource file = … // lookup File or Resource
operations.store(file.getInputStream(), "filename.txt", metadata);
}
}
----
====
The `store(…)` operations take an `InputStream`, a filename, and (optionally) metadata information about the file to store. The metadata can be an arbitrary object, which will be marshaled by the `MongoConverter` configured with the `GridFsTemplate`. Alternatively, you can also provide a `Document`.
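For example, a raw `Document` could be handed in directly, as the following sketch shows (the metadata keys are illustrative):
[source,java]
----
// store the file with a raw org.bson.Document as metadata instead of a mapped object
operations.store(file.getInputStream(), "filename.txt", new Document("category", "text"));
----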
You can read files from the filesystem through either the `find(…)` or the `getResources(…)` methods. Let's have a look at the `find(…)` methods first. You can either find a single file or multiple files that match a `Query`. You can use the `GridFsCriteria` helper class to define queries. It provides static factory methods to encapsulate default metadata fields (such as `whereFilename()` and `whereContentType()`) or a custom one through `whereMetaData()`. The following example shows how to use `GridFsTemplate` to query for files:
.Using GridFsTemplate to query for files
====
[source,java]
----
class GridFsClient {
@Autowired
GridFsOperations operations;
@Test
public void findFilesInGridFs() {
GridFSFindIterable result = operations.find(query(whereFilename().is("filename.txt")));
}
}
----
====
NOTE: Currently, MongoDB does not support defining sort criteria when retrieving files from GridFS. For this reason, any sort criteria defined on the `Query` instance handed into the `find(…)` method are disregarded.
The other option to read files from GridFS is to use the methods introduced by the `ResourcePatternResolver` interface. They allow handing an Ant path into the method and can thus retrieve files matching the given pattern. The following example shows how to use `GridFsTemplate` to read files:
.Using GridFsTemplate to read files
====
[source,java]
----
class GridFsClient {
@Autowired
GridFsOperations operations;
@Test
public void readFilesFromGridFs() {
GridFsResource[] txtFiles = operations.getResources("*.txt");
}
}
----
====
`GridFsOperations` extends `ResourcePatternResolver` and lets the `GridFsTemplate` (for example) be plugged into an `ApplicationContext` to read Spring Config files from a MongoDB database.
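For example, a single file could be resolved as a Spring `Resource`, as the following sketch shows:
[source,java]
----
// resolve a single GridFS file through the ResourcePatternResolver contract
GridFsResource resource = operations.getResource("filename.txt");
InputStream stream = resource.getInputStream();
----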
include::gridfs.adoc[]
include::tailable-cursors.adoc[]
include::change-streams.adoc[]


@@ -1,6 +1,23 @@
Spring Data MongoDB Changelog
=============================
Changes in version 3.1.11 (2021-07-16)
--------------------------------------
* #3689 - Fix Regression in generating queries with nested maps with numeric keys.
* #3688 - Multiple maps with numeric keys in a single update produces the wrong query (Regression).
Changes in version 3.2.2 (2021-06-22)
-------------------------------------
* #3677 - Add missing double quote to GeoJson.java JSDoc header.
* #3668 - Projection on the _id field returns wrong result when using `@MongoId` (MongoDB 4.4).
* #3666 - Documentation references outdated `Mongo` client.
* #3660 - MappingMongoConverter problem: ConversionContext#convert does not try to use custom converters first.
* #3659 - [3.2.1] Indexing Class with Custom Converter -> Couldn't find PersistentEntity for property private [...].
* #3635 - $floor isOrOrNor() return true.
* #3633 - NPE in QueryMapper when use Query with `null` as value.
Changes in version 3.1.10 (2021-06-22)
--------------------------------------
* #3677 - Add missing double quote to GeoJson.java JSDoc header.
@@ -3451,6 +3468,8 @@ Repository


@@ -1,4 +1,4 @@
Spring Data MongoDB 3.1.10 (2020.0.10)
Spring Data MongoDB 3.1.13 (2020.0.13)
Copyright (c) [2010-2019] Pivotal Software, Inc.
This product is licensed to you under the Apache License, Version 2.0 (the "License").
@@ -26,6 +26,9 @@ conditions of the subcomponent's license, as noted in the LICENSE file.