public class CassandraCellRDD extends CassandraRDD<Cells>
An RDD whose elements are Cells objects read from Cassandra.

| Constructor and Description |
|---|
| CassandraCellRDD(org.apache.spark.SparkContext sc, IDeepJobConfig<Cells> config)
This constructor should not be called explicitly. Use DeepSparkContext instead to create an RDD. |
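The constructor note above reflects a factory pattern: callers never build the RDD directly, the context object does it for them. The toy sketch below illustrates that pattern with hypothetical names (ToyConfig, ToyCellRDD, ToyDeepContext are illustrations only, not the Deep API):

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of the factory pattern described above: the RDD-like
// class keeps a non-public constructor, and the context object is the
// only sanctioned way to create instances. Names are hypothetical.
class ToyConfig {
    final String keyspace;
    final String table;
    ToyConfig(String keyspace, String table) {
        this.keyspace = keyspace;
        this.table = table;
    }
}

class ToyCellRDD {
    final ToyConfig config;
    // Package-private: callers go through ToyDeepContext, mirroring the
    // "use DeepSparkContext instead" note in the constructor summary.
    ToyCellRDD(ToyConfig config) {
        this.config = config;
    }
}

class ToyDeepContext {
    private final List<ToyCellRDD> created = new ArrayList<>();

    ToyCellRDD cassandraRDD(ToyConfig config) {
        ToyCellRDD rdd = new ToyCellRDD(config);
        created.add(rdd); // the context can track RDDs it handed out
        return rdd;
    }

    int createdCount() {
        return created.size();
    }
}
```

Funneling construction through the context lets it validate configuration and keep a handle on every RDD it produced, which a bare public constructor could not guarantee.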
| Modifier and Type | Method and Description |
|---|---|
| protected Cells | transformElement(Pair<Map<String,ByteBuffer>,Map<String,ByteBuffer>> elem)
Transforms a row coming from Cassandra's API into an element of type Cells. |
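The transformElement signature above shows the raw shape Cassandra's API hands back: a pair of maps from column name to undecoded ByteBuffer (key columns and regular columns). The sketch below, using only the JDK, shows what such a transformation does in principle; the real method builds a Deep Cells object and uses Cassandra's type decoders, whereas this toy simply decodes every value as UTF-8 text:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch of a transformElement-style method: merge the two maps of
// column name -> raw ByteBuffer into one named-cell structure, decoding
// each buffer. Map.Entry stands in for the Pair type in the signature.
class CellTransform {
    static Map<String, String> transformElement(
            Map.Entry<Map<String, ByteBuffer>, Map<String, ByteBuffer>> elem) {
        Map<String, String> cells = new LinkedHashMap<>();
        // Key columns first, then regular columns, mirroring the Pair layout.
        elem.getKey().forEach((name, buf) -> cells.put(name, decode(buf)));
        elem.getValue().forEach((name, buf) -> cells.put(name, decode(buf)));
        return cells;
    }

    static String decode(ByteBuffer buf) {
        // duplicate() so decoding does not consume the caller's buffer
        return StandardCharsets.UTF_8.decode(buf.duplicate()).toString();
    }
}
```

Keeping the transformation in one protected method is what lets each RDD subclass decide how raw rows become elements while the parent class drives the iteration.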
Methods inherited from class CassandraRDD

compute, cql3SaveRDDToCassandra, getComputeCallback, getPartitions, getPreferredLocations, saveRDDToCassandra, saveRDDToCassandra$plus$plus

Methods inherited from class org.apache.spark.rdd.RDD

aggregate, cache, cartesian, checkpoint, checkpointData_$eq, checkpointData, clearDependencies, coalesce, coalesce$default$2, collect, collect, collectPartitions, computeOrReadCheckpoint, conf, context, count, countApprox, countApprox$default$2, countApproxDistinct, countApproxDistinct$default$1, countByValue, countByValueApprox, countByValueApprox$default$2, dependencies, distinct, distinct, doCheckpoint, elementClassTag, filter, filterWith, first, firstParent, flatMap, flatMapWith, flatMapWith$default$2, fold, foreach, foreachPartition, foreachWith, generator_$eq, generator, getCheckpointFile, getDependencies, getStorageLevel, glom, groupBy, groupBy, groupBy, id, isCheckpointed, isTraceEnabled, iterator, keyBy, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logTrace, logTrace, logWarning, logWarning, map, mapPartitions, mapPartitions$default$2, mapPartitionsWithContext, mapPartitionsWithContext$default$2, mapPartitionsWithIndex, mapPartitionsWithIndex$default$2, mapPartitionsWithSplit, mapPartitionsWithSplit$default$2, mapWith, mapWith$default$2, markCheckpointed, name_$eq, name, org$apache$spark$Logging$$log__$eq, org$apache$spark$Logging$$log_, org$apache$spark$rdd$RDD$$countPartition$1, org$apache$spark$rdd$RDD$$debugString$1, org$apache$spark$rdd$RDD$$dependencies__$eq, org$apache$spark$rdd$RDD$$dependencies_, org$apache$spark$rdd$RDD$$mergeMaps$1, org$apache$spark$rdd$RDD$$partitions__$eq, org$apache$spark$rdd$RDD$$partitions_, origin, partitioner, partitions, persist, persist, pipe, pipe, pipe, pipe$default$2, pipe$default$3, pipe$default$4, preferredLocations, reduce, repartition, sample, saveAsObjectFile, saveAsTextFile, saveAsTextFile, setGenerator, setName, sparkContext, subtract, subtract, subtract, take, takeOrdered, takeSample, toArray, toDebugString, toJavaRDD, top, toString, union, unpersist, unpersist$default$1, zip, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions

Constructor Detail

public CassandraCellRDD(org.apache.spark.SparkContext sc,
                        IDeepJobConfig<Cells> config)

This constructor should not be called explicitly. Use DeepSparkContext instead to create an RDD.

Parameters:
sc - the Spark context.
config - the Deep job configuration.

Method Detail

protected Cells transformElement(Pair<Map<String,ByteBuffer>,Map<String,ByteBuffer>> elem)

Transforms a row coming from Cassandra's API into an element of type Cells.

Overrides:
transformElement in class CassandraRDD<Cells>

Parameters:
elem - the element to transform.

Copyright © 2014. All rights reserved.