Details
- Type: Improvement
- Status: Open
- Priority: P3
- Resolution: Unresolved
- Affects Version: 2.12.0
Description
When a read is executed over a collection whose sort exceeds MongoDB's in-memory limit of 104857600 bytes (100 MB), an exception occurs. This behavior is documented by MongoDB, and the error can be controlled by passing AggregationOptions with allowDiskUse set to true, so that MongoDB can sort using the disk.
This should only happen when aggregations are added to the read, but it currently happens even when no aggregation is used at all.
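For reference, the allowDiskUse flag mentioned above is a field of MongoDB's underlying aggregate database command. A minimal sketch of the command shape follows (pure string construction with no driver dependency; the collection name and pipeline are hypothetical, chosen to mirror this issue):

```java
public class AggregateCommandSketch {

    // Builds the JSON body of a MongoDB "aggregate" command.
    // The allowDiskUse flag is what lets the server spill sort
    // stages to disk instead of failing at the 100 MB in-memory limit.
    static String buildAggregateCommand(String collection,
                                        String pipelineJson,
                                        boolean allowDiskUse) {
        return "{ \"aggregate\": \"" + collection + "\","
             + " \"pipeline\": " + pipelineJson + ","
             + " \"cursor\": {},"
             + " \"allowDiskUse\": " + allowDiskUse + " }";
    }

    public static void main(String[] args) {
        // Hypothetical pipeline sorting the oplog by timestamp.
        String cmd = buildAggregateCommand(
            "oplog.rs",
            "[ { \"$sort\": { \"ts\": 1 } } ]",
            true);
        System.out.println(cmd);
    }
}
```

A fix for this issue would presumably need MongoDbIO to set this flag (or accept it as a configuration option) on the reads it issues.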
Please let me know how I can help with this improvement / bug.
PCollection<KV<String, Document>> updateColls =
    p.apply("Reading Ops Collection: " + key,
        MongoDbIO.read()
            .withUri(options.getMongoDBUri())
            .withDatabase("local")
            .withCollection("oplog.rs")
            .withBucketAuto(true)
            // .withQueryFn(
            //     FindQuery.create().withFilters(
            //         Filters.and(
            //             Filters.gt("ts", ts.format(dtf)),
            //             Filters.eq("ns", options.getMongoDBDBName() + "" + key),
            //             Filters.eq("op", "u")
            //         )
            //     )
            //     // AggregationQuery.create().withMongoDbPipeline(updatedDocsOplogAggregation)
            // )
    );