public final class MongoEntityRDD<T extends IDeepType>
extends org.apache.spark.rdd.DeepMongoRDD<T>

Type Parameters: `T` - the entity type handled by this RDD.
| Constructor and Description |
|---|
| `MongoEntityRDD(org.apache.spark.SparkContext sc, IMongoDeepJobConfig<T> config)` <br> Public constructor that builds a new MongoEntityRDD given the Spark context and the configuration object. |
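The class bound `T extends IDeepType` means any entity stored in this RDD must implement the `IDeepType` marker interface. As a minimal sketch, the code below uses a simplified stand-in for `IDeepType` (an assumption: the real interface lives in the Deep commons module and is modelled here only as a `Serializable` marker) together with a hypothetical `MessageEntity` domain class:

```java
import java.io.Serializable;

// Simplified stand-in for the real IDeepType marker interface from the
// Deep commons module (assumption: the real one is a Serializable marker;
// consult the actual artifact before relying on this shape).
interface IDeepType extends Serializable {}

// Hypothetical domain entity: any class used as the T of MongoEntityRDD<T>
// must implement IDeepType so it can be serialized across the cluster.
public class MessageEntity implements IDeepType {
    private String id;
    private String message;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }
}
```

With such a class in place, `MongoEntityRDD<MessageEntity>` satisfies the generic bound shown in the class signature above.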
| Modifier and Type | Method and Description |
|---|---|
| `static <T extends IDeepType> void` | `saveEntity(org.apache.spark.api.java.JavaRDD<T> rdd, IMongoDeepJobConfig<T> config)` <br> Saves an RDD to MongoDB. |
| `T` | `transformElement(scala.Tuple2<Object,org.bson.BSONObject> tuple)` |
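Taken together, the constructor and `saveEntity` suggest a read-then-write flow. The fragment below is a sketch only, not a compilable program: it assumes the Spark and deep-mongodb jars on the classpath, an `IMongoDeepJobConfig` instance built elsewhere with host/database/collection settings, and a hypothetical `MessageEntity` entity type.

```java
// Sketch only: `config` is assumed to have been built and initialized
// elsewhere with the MongoDB host, database, and collection settings.
org.apache.spark.SparkContext sc = /* an existing Spark context */ null;
IMongoDeepJobConfig<MessageEntity> config = /* a configured instance */ null;

// Bind an RDD to the MongoDB collection described by `config`.
MongoEntityRDD<MessageEntity> rdd = new MongoEntityRDD<>(sc, config);

// Each document arrives as a Tuple2<Object, BSONObject>; transformElement
// maps it to a MessageEntity, so standard RDD operations apply.
long count = rdd.count();

// Write an RDD of entities back to MongoDB.
MongoEntityRDD.saveEntity(rdd.toJavaRDD(), config);
```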
Methods inherited from class org.apache.spark.rdd.DeepMongoRDD:
compute, getConf, getPartitions, getPreferredLocations, jobId, newJobContext, newTaskAttemptContext, newTaskAttemptID, org$apache$spark$rdd$DeepMongoRDD$$confBroadcast, org$apache$spark$rdd$DeepMongoRDD$$jobTrackerId$plus$plus

Methods inherited from class org.apache.spark.rdd.RDD:
aggregate, cache, cartesian, checkpoint, checkpointData_$eq, checkpointData, clearDependencies, coalesce, coalesce$default$2, coalesce$default$3, collect, collect, collectPartitions, computeOrReadCheckpoint, conf, context, count, countApprox, countApprox$default$2, countApproxDistinct, countApproxDistinct$default$1, countByValue, countByValue$default$1, countByValueApprox, countByValueApprox$default$2, countByValueApprox$default$3, creationSiteInfo, dependencies, distinct, distinct, distinct$default$2, doCheckpoint, elementClassTag, filter, filterWith, first, firstParent, flatMap, flatMapWith, flatMapWith$default$2, fold, foreach, foreachPartition, foreachWith, getCheckpointFile, getCreationSite, getDependencies, getNarrowAncestors, getStorageLevel, glom, groupBy, groupBy, groupBy, groupBy$default$4, id, intersection, intersection, intersection, intersection$default$3, isCheckpointed, isTraceEnabled, iterator, keyBy, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logTrace, logTrace, logWarning, logWarning, map, mapPartitions, mapPartitions$default$2, mapPartitionsWithContext, mapPartitionsWithContext$default$2, mapPartitionsWithIndex, mapPartitionsWithIndex$default$2, mapPartitionsWithSplit, mapPartitionsWithSplit$default$2, mapWith, mapWith$default$2, markCheckpointed, max, min, name_$eq, name, org$apache$spark$Logging$$log__$eq, org$apache$spark$Logging$$log_, org$apache$spark$rdd$RDD$$collectPartition$1, org$apache$spark$rdd$RDD$$countPartition$1, org$apache$spark$rdd$RDD$$debugString$1, org$apache$spark$rdd$RDD$$dependencies__$eq, org$apache$spark$rdd$RDD$$dependencies_, org$apache$spark$rdd$RDD$$distributePartition$1, org$apache$spark$rdd$RDD$$mergeMaps$1, org$apache$spark$rdd$RDD$$partitions__$eq, org$apache$spark$rdd$RDD$$partitions_, org$apache$spark$rdd$RDD$$visit$1, partitioner, partitions, persist, persist, pipe, pipe, pipe, pipe$default$2, pipe$default$3, pipe$default$4, pipe$default$5, preferredLocations, randomSplit, randomSplit$default$2, reduce, repartition, repartition$default$2, sample, sample$default$3, saveAsObjectFile, saveAsTextFile, saveAsTextFile, setName, sparkContext, subtract, subtract, subtract, subtract$default$3, take, takeOrdered, takeSample, takeSample$default$3, toArray, toDebugString, toJavaRDD, toLocalIterator, top, toString, union, unpersist, unpersist$default$1, zip, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipWithIndex, zipWithUniqueId

Constructor Detail

public MongoEntityRDD(org.apache.spark.SparkContext sc,
IMongoDeepJobConfig<T> config)
Parameters:
`sc` - the Spark context to which the RDD will be bound.
`config` - the deep configuration object.

Method Detail

public static <T extends IDeepType> void saveEntity(org.apache.spark.api.java.JavaRDD<T> rdd,
                                                    IMongoDeepJobConfig<T> config)

Saves an RDD to MongoDB.

Type Parameters:
`T` - the entity type.
Parameters:
`rdd` - the RDD of entities to persist.
`config` - the deep configuration object.

Copyright © 2014. All rights reserved.