Insert explicit ids for headers

Christoph Strobl
2023-08-31 18:22:09 +02:00
parent 4f71a78302
commit 1749f9b485
9 changed files with 20 additions and 0 deletions

View File

@@ -1,3 +1,4 @@
[[spring-data-mongodb-reference-documentation]]
= Spring Data MongoDB - Reference Documentation
Mark Pollack; Thomas Risberg; Oliver Gierke; Costin Leau; Jon Brisbin; Thomas Darimont; Christoph Strobl; Mark Paluch; Jay Bryant
:revnumber: {version}

View File

@@ -56,6 +56,7 @@ Note that, if you provide an input class as the first parameter to the `newAggre
The MongoDB Aggregation Framework provides the following types of aggregation stages and operations:
[[aggregation-stages]]
==== Aggregation Stages
* addFields - `AddFieldsOperation`
@@ -102,6 +103,7 @@ Aggregation.stage("""
----
====
[[aggregation-operators]]
==== Aggregation Operators
* Group/Accumulator Aggregation Operators
@@ -213,6 +215,7 @@ More examples for project operations can be found in the `AggregationTests` clas
As of Version 3.4, MongoDB supports faceted classification by using the Aggregation Framework. A faceted classification uses semantic categories (either general or subject-specific) that are combined to create the full classification entry. Documents flowing through the aggregation pipeline are classified into buckets. A multi-faceted classification enables various aggregations on the same set of input documents, without needing to retrieve the input documents multiple times.
[[buckets]]
==== Buckets
Bucket operations categorize incoming documents into groups, called buckets, based on a specified expression and bucket boundaries. Bucket operations require a grouping field or a grouping expression. You can define them by using the `bucket()` and `bucketAuto()` methods of the `Aggregation` class. `BucketOperation` and `BucketAutoOperation` can expose accumulations based on aggregation expressions for input documents. You can extend the bucket operation with additional parameters through a fluent API by using the `with…()` methods and the `andOutput(String)` method. You can alias the operation by using the `as(String)` method. Each bucket is represented as a document in the output.
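For illustration, the following sketch (field names such as `price` and `title` are made up for this example, not taken from the shipped samples) assembles such a bucket operation with the fluent API:

[source,java]
----
// Bucket documents by a hypothetical "price" field into the boundaries
// [0, 100) and [100, 400), collecting everything else into "Other".
BucketOperation bucketByPrice = Aggregation.bucket("price")
    .withBoundaries(0, 100, 400)
    .withDefaultBucket("Other")
    .andOutputCount().as("count")
    .andOutput("title").push().as("titles");
----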
@@ -259,6 +262,7 @@ To create output fields in buckets, bucket operations can use `AggregationExpres
Note that further details regarding bucket expressions can be found in the https://docs.mongodb.org/manual/reference/operator/aggregation/bucket/[`$bucket` section] and
https://docs.mongodb.org/manual/reference/operator/aggregation/bucketAuto/[`$bucketAuto` section] of the MongoDB Aggregation Framework reference documentation.
[[multi-faceted-aggregation]]
==== Multi-faceted Aggregation
Multiple aggregation pipelines can be used to create multi-faceted aggregations that characterize data across multiple dimensions (or facets) within a single aggregation stage. Multi-faceted aggregations provide multiple filters and categorizations to guide data browsing and analysis. A common implementation of faceting is how many online retailers provide ways to narrow down search results by applying filters on product price, manufacturer, size, and other factors.
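As an illustrative sketch (the `price` and `publicationYear` fields are assumptions made for this example), two such classifications over the same input documents could be combined in a single facet stage:

[source,java]
----
// Each facet gets its own sub-pipeline; both run over the same input documents.
FacetOperation facets = Aggregation.facet()
    .and(Aggregation.bucketAuto("price", 5)).as("categorizedByPrice")
    .and(Aggregation.bucketAuto("publicationYear", 5)).as("categorizedByYear");
----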
@@ -315,6 +319,7 @@ A sort by count operation is equivalent to the following BSON (Binary JSON):
We support the use of SpEL expressions in projection expressions through the `andExpression` method of the `ProjectionOperation` and `BucketOperation` classes. This feature lets you define the desired expression as a SpEL expression. On running a query, the SpEL expression is translated into a corresponding MongoDB projection expression part. This arrangement makes it much easier to express complex calculations.
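As a small sketch (the `netPrice` field and the tax factor are assumptions made for this example), a projection derived from such an expression could look like this:

[source,java]
----
// The SpEL expression is translated into the corresponding $project arithmetic.
ProjectionOperation projection = Aggregation.project()
    .andExpression("netPrice * 1.19").as("grossPrice");
----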
[[complex-calculations-with-spel-expressions]]
===== Complex Calculations with SpEL expressions
Consider the following SpEL expression:

View File

@@ -12,6 +12,7 @@ changes from all collections within the database. When subscribing to a database
suitable type for the event type as conversion might not apply correctly across different entity types.
When in doubt, use `Document`.
[[change-streams-with-messagelistener]]
=== Change Streams with `MessageListener`
Listening to a https://docs.mongodb.com/manual/tutorial/change-streams-example/[Change Stream by using a Sync Driver] creates a long-running, blocking task that needs to be delegated to a separate component.
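A minimal sketch of such a setup (the collection name and the `User` domain type are placeholders) could look like this:

[source,java]
----
// The container owns the long-running task and must be started and stopped explicitly.
MessageListenerContainer container = new DefaultMessageListenerContainer(template);
container.start();

MessageListener<ChangeStreamDocument<Document>, User> listener =
    message -> System.out.println(message.getBody());

ChangeStreamRequest<User> request = ChangeStreamRequest.builder(listener)
    .collection("user")
    .build();

Subscription subscription = container.register(request, User.class);

// ... later, once events are no longer needed:
container.stop();
----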
@@ -49,6 +50,7 @@ Errors while processing are passed on to an `org.springframework.util.ErrorHandl
Please use `register(request, body, errorHandler)` to provide additional functionality.
====
[[reactive-change-streams]]
=== Reactive Change Streams
Subscribing to Change Streams with the reactive API is a more natural approach to working with streams. Still, the essential building blocks, such as `ChangeStreamOptions`, remain the same. The following example shows how to use Change Streams emitting ``ChangeStreamEvent``s:
@@ -67,6 +69,7 @@ Flux<ChangeStreamEvent<User>> flux = reactiveTemplate.changeStream(User.class) <
<3> Obtain a `Flux` of change stream events. The `ChangeStreamEvent#getBody()` is converted to the requested domain type from (2).
====
[[resuming-change-streams]]
=== Resuming Change Streams
Change Streams can be resumed and resume emitting events where you left off. To resume the stream, you need to supply either a resume

View File

@@ -1,6 +1,7 @@
[[introduction]]
= Introduction
[[document-structure]]
== Document Structure
This part of the reference documentation explains the core functionality offered by Spring Data MongoDB.

View File

@@ -6,6 +6,7 @@ This chapter covers major changes and outlines migration steps.
[[migrating-2.x-to-3.0]]
== 2.x to 3.0
[[dependency-changes]]
=== Dependency Changes
* `org.mongodb:mongo-java-driver` (uber jar) got replaced with:
@@ -16,6 +17,7 @@ This chapter covers major changes and outlines migration steps.
The change in dependencies allows use of the reactive support without having to pull in the synchronous driver.
NOTE: The new sync driver no longer supports `com.mongodb.DBObject`. Please use `org.bson.Document` instead.
[[signature-changes]]
=== Signature Changes
* `MongoTemplate` no longer supports `com.mongodb.MongoClient` and `com.mongodb.MongoClientOptions`.
@@ -23,6 +25,7 @@ Please use `com.mongodb.client.MongoClient` and `com.mongodb.MongoClientSettings
In case you're using `AbstractMongoConfiguration`, please switch to `AbstractMongoClientConfiguration`.
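A hedged sketch of what the updated configuration could look like (database name and connection string are placeholders):

[source,java]
----
@Configuration
class MongoConfig extends AbstractMongoClientConfiguration {

    @Override
    protected String getDatabaseName() {
        return "database";
    }

    @Override
    public MongoClient mongoClient() {
        // com.mongodb.client.MongoClient created via the driver's MongoClients factory
        return MongoClients.create("mongodb://localhost:27017");
    }
}
----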
[[namespace-changes]]
=== Namespace Changes
The switch to `com.mongodb.client.MongoClient` requires an update of your configuration XML if you have one.

View File

@@ -84,10 +84,12 @@ WARNING: Dot notation (such as `registerConverter(Person.class, "address.street"
The preceding sections outlined the purpose and overall structure of `PropertyValueConverters`.
This section focuses on MongoDB specific aspects.
[[mongovalueconverter-and-mongoconversioncontext]]
==== MongoValueConverter and MongoConversionContext
`MongoValueConverter` offers a pre-typed `PropertyValueConverter` interface that uses `MongoConversionContext`.
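As an illustrative sketch (the encryption helpers are hypothetical), an implementation could look like this:

[source,java]
----
class EncryptingValueConverter implements MongoValueConverter<String, String> {

    @Override
    public String read(String storeValue, MongoConversionContext context) {
        return decrypt(storeValue);  // hypothetical helper
    }

    @Override
    public String write(String domainValue, MongoConversionContext context) {
        return encrypt(domainValue); // hypothetical helper
    }
}
----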
[[mongocustomconversions-configuration]]
==== MongoCustomConversions configuration
By default, `MongoCustomConversions` can handle declarative value converters, depending on the configured `PropertyValueConverterFactory`.

View File

@@ -437,6 +437,7 @@ Rather, `metric` refers to the concept of a system of measurement, regardless of
NOTE: Using `@GeoSpatialIndexed(type = GeoSpatialIndexType.GEO_2DSPHERE)` on the target property forces usage of the `$nearSphere` operator.
[[geo-near-queries]]
==== Geo-near Queries
Spring Data MongoDB supports geo-near queries, as the following example shows:

View File

@@ -742,6 +742,7 @@ mongoTemplate.save(sample);
Spring Data MongoDB stores the type information as the last field for the actual root class as well as for the nested type (because it is complex and a subtype of `Contact`). So, if you now use `mongoTemplate.findAll(Object.class, "sample")`, you can find out that the document stored is a `Sample` instance. You can also find out that the value property is actually a `Person`.
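To make this concrete, the following sketch assumes a `Sample` class with a `Contact`-typed `value` property and a `Person` subtype of `Contact`:

[source,java]
----
Sample sample = new Sample();
sample.value = new Person();   // Person is a subtype of Contact
mongoTemplate.save(sample);

// The stored document carries type hints, so an untyped read still yields
// the concrete classes.
Object first = mongoTemplate.findAll(Object.class, "sample").get(0);
// first is a Sample instance whose value property is a Person
----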
[[customizing-type-mapping]]
==== Customizing Type Mapping
If you want to avoid writing the entire Java class name as type information but would rather like to use a key, you can use the `@TypeAlias` annotation on the entity class. If you need to customize the mapping even more, have a look at the `TypeInformationMapper` interface. An instance of that interface can be configured at the `DefaultMongoTypeMapper`, which can, in turn, be configured on `MappingMongoConverter`. The following example shows how to define a type alias for an entity:
@@ -780,6 +781,7 @@ class AppConfig extends AbstractMongoClientConfiguration {
----
====
[[configuring-custom-type-mapping]]
==== Configuring Custom Type Mapping
The following example shows how to configure a custom `MongoTypeMapper` in `MappingMongoConverter`:
@@ -1862,6 +1864,7 @@ The next major version (`4.0`) will register both, ``JsonDeserializer``s and ``J
Since version 2.6 of MongoDB, you can run full-text queries by using the `$text` operator. Methods and operations specific to full-text queries are available in `TextQuery` and `TextCriteria`. When doing full-text search, see the https://docs.mongodb.org/manual/reference/operator/query/text/#behavior[MongoDB reference] for its behavior and limitations.
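As an illustrative sketch (collection name and search term are placeholders), a full-text query could be issued like this:

[source,java]
----
// Requires a text index on the queried collection (see the setup notes below).
TextCriteria criteria = TextCriteria.forDefaultLanguage().matching("coffee cake");
Query query = TextQuery.queryText(criteria).sortByScore();

List<Document> results = mongoTemplate.find(query, Document.class, "posts");
----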
[[full-text-search]]
==== Full-text Search
Before you can actually use full-text search, you must set up the search index correctly. See <<mapping-usage-indexes.text-index,Text Index>> for more detail on how to create index structures. The following example shows how to set up a full-text search:

View File

@@ -193,6 +193,7 @@ Using a `Distance` with a `Metric` causes a `$nearSphere` (instead of a plain `$
NOTE: Using `@GeoSpatialIndexed(type = GeoSpatialIndexType.GEO_2DSPHERE)` on the target property forces usage of `$nearSphere` operator.
[[geo-near-queries]]
==== Geo-near Queries
Spring Data MongoDB supports geo-near queries, as the following example shows: