CloudBigtableIO.SourceWithKeys (Apache Beam + Cloud Bigtable Connector 1.0.0-pre3 API)

com.google.cloud.bigtable.beam

Class CloudBigtableIO.SourceWithKeys

  • java.lang.Object
    • org.apache.beam.sdk.io.Source<T>
      • org.apache.beam.sdk.io.BoundedSource<Result>
        • com.google.cloud.bigtable.beam.CloudBigtableIO.SourceWithKeys
  • All Implemented Interfaces:
    Serializable, org.apache.beam.sdk.transforms.display.HasDisplayData
    Enclosing class:
    CloudBigtableIO


    protected static class CloudBigtableIO.SourceWithKeys
    extends org.apache.beam.sdk.io.BoundedSource<Result>
    A BoundedSource for a Cloud Bigtable table with a start/stop key range, along with an optional filter applied via a Scan.
    See Also:
    Serialized Form
    • Nested Class Summary

      • Nested classes/interfaces inherited from class org.apache.beam.sdk.io.BoundedSource

        org.apache.beam.sdk.io.BoundedSource.BoundedReader<T>
      • Nested classes/interfaces inherited from class org.apache.beam.sdk.io.Source

        org.apache.beam.sdk.io.Source.Reader<T>
    • Field Detail

      • SOURCE_LOG

        protected static final org.slf4j.Logger SOURCE_LOG
      • SIZED_BASED_MAX_SPLIT_COUNT

        protected static final long SIZED_BASED_MAX_SPLIT_COUNT
        See Also:
        Constant Field Values
    • Method Detail

      • getEstimatedSizeBytes

        public long getEstimatedSizeBytes(org.apache.beam.sdk.options.PipelineOptions options)
        Gets an estimate of the size of the source.

        NOTE: This value is a rough estimate. It could be significantly off, especially if a Scan is set in the configuration. It will also be off if the start and stop keys were calculated via CloudBigtableIO.Source.split(long, PipelineOptions).

        Parameters:
        options - The pipeline options.
        Returns:
        The estimated size of the source, in bytes.
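        The estimate is derived from sampled row keys, where each sample carries a cumulative byte offset, so the difference between consecutive offsets approximates one tablet's size. The following is a minimal, self-contained sketch of that idea; the `Sample` record and `estimate` method are hypothetical stand-ins, not the connector's actual implementation.

        ```java
        import java.util.Arrays;
        import java.util.List;

        public class SizeEstimate {
            // Hypothetical stand-in for SampleRowKeysResponse: a row key plus
            // the cumulative byte offset up to (and including) that key.
            record Sample(byte[] key, long offsetBytes) {}

            // Sum the per-tablet byte deltas for tablets whose end key falls
            // inside the source's key range (empty stopKey = unbounded).
            static long estimate(List<Sample> samples, byte[] startKey, byte[] stopKey) {
                long total = 0, prevOffset = 0;
                for (Sample s : samples) {
                    long delta = s.offsetBytes() - prevOffset;
                    prevOffset = s.offsetBytes();
                    if (Arrays.compareUnsigned(s.key(), startKey) > 0
                            && (stopKey.length == 0
                                || Arrays.compareUnsigned(s.key(), stopKey) <= 0)) {
                        total += delta;
                    }
                }
                return total;
            }

            public static void main(String[] args) {
                List<Sample> samples = List.of(
                        new Sample("c".getBytes(), 100),
                        new Sample("f".getBytes(), 250),
                        new Sample("j".getBytes(), 400));
                // Whole table: the last cumulative offset, 400 bytes.
                System.out.println(estimate(samples, new byte[0], new byte[0]));
                // Range ("c", "j"]: tablets ending at "f" and "j", 150 + 150 = 300.
                System.out.println(estimate(samples, "c".getBytes(), "j".getBytes()));
            }
        }
        ```

        This also illustrates why the estimate is rough: a Scan filter reduces the bytes actually read, but the sampled offsets only reflect total tablet sizes.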
      • getEstimatedSize

        public long getEstimatedSize()
      • split

        public List<? extends org.apache.beam.sdk.io.BoundedSource<Result>> split(long desiredBundleSizeBytes,
                                                                                  org.apache.beam.sdk.options.PipelineOptions options)
                                                                           throws Exception
        Splits the bundle based on the assumption that the data is distributed evenly between startKey and stopKey. That assumption may not be correct for any specific start/stop key combination.

        This method is called internally by Beam. Do not call it directly.

        Specified by:
        split in class org.apache.beam.sdk.io.BoundedSource<Result>
        Parameters:
        desiredBundleSizeBytes - The desired size for each bundle, in bytes.
        options - The pipeline options.
        Returns:
        A list of sources split into groups.
        Throws:
        Exception
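        Under the even-distribution assumption, the number of bundles is essentially the estimated size divided by the desired bundle size, capped by a maximum split count. A minimal sketch of that arithmetic, where `MAX_SPLIT_COUNT` is an assumed stand-in for `SIZED_BASED_MAX_SPLIT_COUNT` (whose actual value is in Constant Field Values):

        ```java
        public class SplitMath {
            // Assumed cap; stand-in for SIZED_BASED_MAX_SPLIT_COUNT.
            static final long MAX_SPLIT_COUNT = 4_000L;

            // At least one bundle; otherwise size / desired bundle size, capped.
            static long splitCount(long estimatedSizeBytes, long desiredBundleSizeBytes) {
                long count = Math.max(1, estimatedSizeBytes / desiredBundleSizeBytes);
                return Math.min(count, MAX_SPLIT_COUNT);
            }

            public static void main(String[] args) {
                // ~10 GB of data in 64 MiB bundles -> 149 splits.
                System.out.println(splitCount(10_000_000_000L, 64 * 1024 * 1024));
            }
        }
        ```

        If the data is skewed toward one part of the key range, some of the resulting bundles will be much larger than others, which is exactly the caveat stated above.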
      • getDefaultOutputCoder

        public org.apache.beam.sdk.coders.Coder<Result> getDefaultOutputCoder()
      • isWithinRange

        protected static boolean isWithinRange(byte[] scanStartKey,
                                               byte[] scanEndKey,
                                               byte[] startKey,
                                               byte[] endKey)
        Checks if the range of the region is within the range of the scan.
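        The containment check boils down to unsigned lexicographic comparison of row keys, with an empty key meaning "unbounded". The following is a hypothetical re-implementation sketch of that logic, not the connector's actual code:

        ```java
        import java.util.Arrays;

        public class RangeCheck {
            // Sketch: is the region [startKey, endKey) within the scan range
            // [scanStartKey, scanEndKey)? Empty keys mean unbounded on that side.
            static boolean isWithinRange(byte[] scanStartKey, byte[] scanEndKey,
                                         byte[] startKey, byte[] endKey) {
                boolean startOk = scanStartKey.length == 0
                        || Arrays.compareUnsigned(startKey, scanStartKey) >= 0;
                boolean endOk = scanEndKey.length == 0
                        || (endKey.length != 0
                            && Arrays.compareUnsigned(endKey, scanEndKey) <= 0);
                return startOk && endOk;
            }

            public static void main(String[] args) {
                byte[] empty = new byte[0];
                // Unbounded scan contains everything -> true.
                System.out.println(isWithinRange(empty, empty, "a".getBytes(), "m".getBytes()));
                // ["c", "j") is inside ["b", "k") -> true.
                System.out.println(isWithinRange("b".getBytes(), "k".getBytes(),
                                                 "c".getBytes(), "j".getBytes()));
                // ["a", "j") starts before "b" -> false.
                System.out.println(isWithinRange("b".getBytes(), "k".getBytes(),
                                                 "a".getBytes(), "j".getBytes()));
            }
        }
        ```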
      • getSampleRowKeys

        public List<com.google.bigtable.repackaged.com.google.bigtable.v2.SampleRowKeysResponse> getSampleRowKeys()
                                                                                                           throws IOException
        Performs a call to get sample row keys from BigtableDataClient.sampleRowKeys(com.google.bigtable.repackaged.com.google.bigtable.v2.SampleRowKeysRequest) if they are not yet cached. The sample row keys give information about tablet key boundaries and estimated sizes.
        Throws:
        IOException
      • validate

        public void validate()
        Validates the existence of the table in the configuration.
        Specified by:
        validate in class org.apache.beam.sdk.io.Source<Result>
      • createReader

        public org.apache.beam.sdk.io.BoundedSource.BoundedReader<Result> createReader(org.apache.beam.sdk.options.PipelineOptions options)
        Creates a reader that will scan the entire table based on the Scan in the configuration.
        Specified by:
        createReader in class org.apache.beam.sdk.io.BoundedSource<Result>
        Returns:
        A reader for the table.
      • populateDisplayData

        public void populateDisplayData(org.apache.beam.sdk.transforms.display.DisplayData.Builder builder)
        Specified by:
        populateDisplayData in interface org.apache.beam.sdk.transforms.display.HasDisplayData
        Overrides:
        populateDisplayData in class org.apache.beam.sdk.io.Source<Result>

