Class TracDataApiGrpc.TracDataApiFutureStub

java.lang.Object
io.grpc.stub.AbstractStub<TracDataApiGrpc.TracDataApiFutureStub>
io.grpc.stub.AbstractFutureStub<TracDataApiGrpc.TracDataApiFutureStub>
org.finos.tracdap.api.TracDataApiGrpc.TracDataApiFutureStub
Enclosing class:
TracDataApiGrpc

public static final class TracDataApiGrpc.TracDataApiFutureStub extends io.grpc.stub.AbstractFutureStub<TracDataApiGrpc.TracDataApiFutureStub>
A stub to allow clients to do ListenableFuture-style rpc calls to service TracDataApi.

 Public API for creating, updating, reading and querying primary data stored in the TRAC platform.

 The TRAC data API provides a standard mechanism for client applications to store and access data
 in the TRAC platform. Calls are translated into the underlying storage mechanisms, using push-down
 operations where possible for efficient queries on large datasets.

 The data API includes format translation, so data can be uploaded and retrieved in any supported
 format. The back-end storage format is controlled by the platform. For example, if a user uploads
 a CSV file, TRAC will convert it to the default storage format (by default, the Arrow IPC file
 format). Later, a web application might ask for that data in JSON format and TRAC would again
 perform the conversion. The platform uses Apache Arrow as an intermediate representation that
 other formats are converted to and from.

 The data API uses streaming operations to support transfer of large datasets and can be used for
 both user-facing client applications and system-to-system integration. For regular, high-volume
 data transfers there are other options for integration, including data import jobs and direct
 back-end integration of the underlying storage technologies. These options can be faster and
 reduce storage requirements, at the expense of tighter coupling between systems. A compromise is
 to use direct integration and import jobs for a small number of critical feeds containing a
 high volume of data, and to use the data API for client access and for integration of secondary
 and/or low-volume systems.
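
A future stub is obtained from the enclosing `TracDataApiGrpc` class. The sketch below shows the usual grpc-java pattern; the endpoint address is a placeholder and real deployments would configure credentials and TLS:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.concurrent.TimeUnit;
import org.finos.tracdap.api.TracDataApiGrpc;

// Build a channel to the TRAC platform (host and port are hypothetical)
ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 8080)
        .usePlaintext()   // illustration only; production clients would use TLS
        .build();

// Create a ListenableFuture-style stub for async unary calls
TracDataApiGrpc.TracDataApiFutureStub stub =
        TracDataApiGrpc.newFutureStub(channel);

// Stubs are immutable: the inherited with* methods return configured copies
TracDataApiGrpc.TracDataApiFutureStub stubWithDeadline =
        stub.withDeadlineAfter(30, TimeUnit.SECONDS);
```

Because the stub is immutable, per-call options such as deadlines can be applied without affecting other users of the same channel.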
 
  • Nested Class Summary

    Nested classes/interfaces inherited from class io.grpc.stub.AbstractStub

    io.grpc.stub.AbstractStub.StubFactory<T extends io.grpc.stub.AbstractStub<T>>
  • Method Summary

    Modifier and Type / Method / Description

    protected TracDataApiGrpc.TracDataApiFutureStub
    build(io.grpc.Channel channel, io.grpc.CallOptions callOptions)

    com.google.common.util.concurrent.ListenableFuture<org.finos.tracdap.metadata.TagHeader>
    createSmallDataset(DataWriteRequest request)
    Create a new dataset, supplying the schema and content as a single blob. This method creates a new dataset and a corresponding DATA object in the TRAC metadata store.

    com.google.common.util.concurrent.ListenableFuture<org.finos.tracdap.metadata.TagHeader>
    createSmallFile(FileWriteRequest request)
    Upload a new file into TRAC, sending the content as a single blob. Calling this method will create a new FILE object in the metadata store.

    com.google.common.util.concurrent.ListenableFuture<DataReadResponse>
    readSmallDataset(DataReadRequest request)
    Read an existing dataset, returning the content as a single blob. This method reads the contents of an existing dataset and returns it in the requested format, along with a copy of the data schema.

    com.google.common.util.concurrent.ListenableFuture<FileReadResponse>
    readSmallFile(FileReadRequest request)
    Download a file that has been stored in TRAC and return it as a single blob. The request uses a regular TagSelector to indicate which file to read.

    com.google.common.util.concurrent.ListenableFuture<org.finos.tracdap.metadata.TagHeader>
    updateSmallDataset(DataWriteRequest request)
    Update an existing dataset, supplying the schema and content as a single blob. This method updates an existing dataset and the corresponding DATA object in the TRAC metadata store.

    com.google.common.util.concurrent.ListenableFuture<org.finos.tracdap.metadata.TagHeader>
    updateSmallFile(FileWriteRequest request)
    Upload a new version of an existing file into TRAC, sending the content as a single blob. Calling this method will update the relevant FILE object in the metadata store.

    Methods inherited from class io.grpc.stub.AbstractFutureStub

    newStub, newStub

    Methods inherited from class io.grpc.stub.AbstractStub

    getCallOptions, getChannel, withCallCredentials, withChannel, withCompression, withDeadline, withDeadlineAfter, withExecutor, withInterceptors, withMaxInboundMessageSize, withMaxOutboundMessageSize, withOnReadyThreshold, withOption, withWaitForReady

    Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
  • Method Details

    • build

      protected TracDataApiGrpc.TracDataApiFutureStub build(io.grpc.Channel channel, io.grpc.CallOptions callOptions)
      Specified by:
      build in class io.grpc.stub.AbstractStub<TracDataApiGrpc.TracDataApiFutureStub>
    • createSmallDataset

      public com.google.common.util.concurrent.ListenableFuture<org.finos.tracdap.metadata.TagHeader> createSmallDataset(DataWriteRequest request)
      
       Create a new dataset, supplying the schema and content as a single blob
       This method creates a new dataset and a corresponding DATA object
       in the TRAC metadata store. Once a dataset is created it can be used as an
       input into a model run, and it can also be read and queried using the data API.
       Data can be supplied in any format supported by the platform.
       The request must specify a schema for the dataset, incoming data will be
       verified against the schema. Schemas can be specified using either:
          * A full schema definition - if a full schema is supplied, it will be embedded
            with the dataset and used for this dataset only
          * A schema ID - a tag selector for an existing SCHEMA object, which may be
            shared by multiple datasets
       The "format" parameter describes the format used to upload data. For example,
       to upload a CSV file the format would be set to "text/csv" and the file content
       can be uploaded directly, or to upload the output of an editor grid in a web
       client the format can be set to "text/json" to upload a JSON representation of
       the editor contents. TRAC will apply format conversion before the data is
       processed and stored.
       Tag updates can be supplied to tag the newly created dataset; they behave exactly
       the same as tag updates in the createObject() call of TracMetadataApi.
       This is a unary call, all the request fields and metadata (including schema specifier)
       and dataset content encoded as per the "format" field are supplied in a single message.
       It is intended for working with small datasets and for use in environments where client
       streaming is not available (particularly in gRPC-Web clients).
       This method returns the header of the newly created DATA object. Error conditions
       include: Invalid request, unknown tenant, schema not found (if an external schema
       ID is used), format not supported, data does not match schema, corrupt or invalid
       data stream. Storage errors may also be reported if there is a problem communicating
       with the underlying storage technology. In the event of an error, TRAC will do its
       best to clean up any partially-written data in the storage layer.
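
As a sketch of how a client might call this method (field names on DataWriteRequest — tenant, format, content — are assumptions based on the description above; a schema or external schema ID would also be required, and should be checked against the real proto definitions):

```java
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.MoreExecutors;
import com.google.protobuf.ByteString;
import org.finos.tracdap.api.DataWriteRequest;
import org.finos.tracdap.metadata.TagHeader;

String csv = "customer_id,balance\nC001,1000.00\nC002,2500.50\n";

// Whole dataset supplied as a single blob, encoded as per the format field
DataWriteRequest request = DataWriteRequest.newBuilder()
        .setTenant("ACME_CORP")                   // hypothetical tenant code
        .setFormat("text/csv")                    // format of the uploaded content
        .setContent(ByteString.copyFromUtf8(csv))
        .build();

ListenableFuture<TagHeader> future = stub.createSmallDataset(request);

Futures.addCallback(future, new FutureCallback<TagHeader>() {
    @Override public void onSuccess(TagHeader header) {
        // Header of the newly created DATA object
        System.out.println("Created DATA object: " + header.getObjectId());
    }
    @Override public void onFailure(Throwable error) {
        // e.g. data does not match schema, format not supported
        System.err.println("Create failed: " + error.getMessage());
    }
}, MoreExecutors.directExecutor());
```

Guava's Futures.addCallback is the idiomatic way to consume a ListenableFuture without blocking the calling thread.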
       
    • updateSmallDataset

      public com.google.common.util.concurrent.ListenableFuture<org.finos.tracdap.metadata.TagHeader> updateSmallDataset(DataWriteRequest request)
      
       Update an existing dataset, supplying the schema and content as a single blob
       This method updates an existing dataset and the corresponding DATA object
       in the TRAC metadata store. As per the TRAC immutability guarantee, the original
       version of the dataset is not altered. After an update, both the original version
       and the new version are available to use as inputs into model runs and to read
       and query using the data API. Data can be supplied in any format supported by the
       platform.
       To update a dataset, the priorVersion field must indicate the dataset being updated.
       Only the latest version of a dataset can be updated.
       The request must specify a schema for the new version of the dataset, incoming data
       will be verified against the schema. The new schema must be compatible with the schema
       of the previous version. Schemas can be specified using either:
          * A full schema definition - Datasets created using an embedded schema must supply
            a full schema for all subsequent versions and each schema version must be compatible
            with the version before. Fields may be added, but not removed or altered.
          * A schema ID - Datasets created using an external schema must use the same external
            schema ID for all subsequent versions. It is permitted for later versions of a
            dataset to use later versions of the external schema, but not earlier versions.
       The "format" parameter describes the format used to upload data. For example,
       to upload a CSV file the format would be set to "text/csv" and the file content
       can be uploaded directly, or to upload the output of an editor grid in a web
       client the format can be set to "text/json" to upload a JSON representation of
       the editor contents. It is not necessary for different versions of the same dataset
       to be uploaded using the same format. TRAC will apply format conversion before the
       data is processed and stored.
       Tag updates can be supplied to tag the new version of the dataset; they behave exactly
       the same as tag updates in the updateObject() call of TracMetadataApi.
       This is a unary call, all the request fields and metadata (including schema specifier)
       and dataset content encoded as per the "format" field are supplied in a single message.
       It is intended for working with small datasets and for use in environments where client
       streaming is not available (particularly in gRPC-Web clients).
       This method returns the header of the new version of the DATA object. Error conditions
       include: Invalid request, unknown tenant, schema not found (if an external schema
       ID is used), schema version not compatible, format not supported, data does not match
       schema, corrupt or invalid data stream. Storage errors may also be reported if there is
       a problem communicating with the underlying storage technology. In the event of an error,
       TRAC will do its best to clean up any partially-written data in the storage layer.
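
An update call might be sketched as follows, with the priorVersion selector pointing at the latest existing version. Message and field names (TagSelector, priorVersion, and the variables existingDatasetId and newContent) are assumptions for illustration:

```java
import com.google.common.util.concurrent.ListenableFuture;
import org.finos.tracdap.api.DataWriteRequest;
import org.finos.tracdap.metadata.ObjectType;
import org.finos.tracdap.metadata.TagHeader;
import org.finos.tracdap.metadata.TagSelector;

// Select the latest version of the dataset being updated
TagSelector priorVersion = TagSelector.newBuilder()
        .setObjectType(ObjectType.DATA)
        .setObjectId(existingDatasetId)  // hypothetical: ID of the dataset to update
        .setLatestObject(true)           // only the latest version can be updated
        .setLatestTag(true)
        .build();

DataWriteRequest update = DataWriteRequest.newBuilder()
        .setTenant("ACME_CORP")          // hypothetical tenant code
        .setPriorVersion(priorVersion)
        .setFormat("text/json")          // versions need not share an upload format
        .setContent(newContent)          // new version's data as a single blob
        .build();

// Resolves to the header of the new version of the DATA object
ListenableFuture<TagHeader> future = stub.updateSmallDataset(update);
```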
       
    • readSmallDataset

      public com.google.common.util.concurrent.ListenableFuture<DataReadResponse> readSmallDataset(DataReadRequest request)
      
       Read an existing dataset, returning the content as a single blob
       This method reads the contents of an existing dataset and returns it in the
       requested format, along with a copy of the data schema. Data can be requested
       in any format supported by the platform.
       The request uses a regular TagSelector to indicate which dataset and version to read.
       The format parameter is a mime type and must be a supported data format.
       This is a unary call, both the schema and the content of the dataset are returned
       in a single response message. The content of the dataset will be encoded in the
       requested format. Errors may occur if the content of the dataset is too large to
       fit in a single message frame.
       Error conditions include: Invalid request, unknown tenant, object not found, format
       not supported. Storage errors may also be reported if there is a problem communicating
       with the underlying storage technology.
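
A read call might look like the sketch below; DataReadRequest field names (tenant, selector, format) are assumptions, and the blocking get() is shown only for brevity:

```java
import com.google.common.util.concurrent.ListenableFuture;
import org.finos.tracdap.api.DataReadRequest;
import org.finos.tracdap.api.DataReadResponse;
import org.finos.tracdap.metadata.ObjectType;
import org.finos.tracdap.metadata.TagSelector;

DataReadRequest read = DataReadRequest.newBuilder()
        .setTenant("ACME_CORP")                // hypothetical tenant code
        .setSelector(TagSelector.newBuilder()
                .setObjectType(ObjectType.DATA)
                .setObjectId(datasetId)        // hypothetical dataset ID
                .setLatestObject(true)
                .setLatestTag(true))
        .setFormat("text/csv")                 // requested output format (mime type)
        .build();

ListenableFuture<DataReadResponse> future = stub.readSmallDataset(read);

// Blocks for the single response, which carries both the schema and the
// content encoded as text/csv; prefer a callback in non-blocking code
DataReadResponse response = future.get();
```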
       
    • createSmallFile

      public com.google.common.util.concurrent.ListenableFuture<org.finos.tracdap.metadata.TagHeader> createSmallFile(FileWriteRequest request)
      
       Upload a new file into TRAC, sending the content as a single blob
       Calling this method will create a new FILE object in the metadata store.
       Tag updates can be supplied when creating a FILE; they will be passed on to the
       metadata service. The semantics for tag updates are identical to the createObject()
       method in TracMetadataApi.
       This is a unary method. The request must contain all the relevant fields and the
       entire content of the file in a single message. Errors may occur if the file is
       too large to fit in a single message frame.
       Clients may specify the size of the file being created. When a size is supplied, TRAC
       will check the size against the number of bytes stored. If the stored file size does not
       match the supplied value, the error will be reported with an error status of DATA_LOSS.
       When no size is supplied the check cannot be performed.
       The method returns the header of the newly created FILE object. Error conditions
       include: Invalid request, unknown tenant, validation failure, file too large and
       data loss (if the number of bytes stored does not match the number specified in the
       request). Storage errors may also be reported if there is a problem communicating with
       the underlying storage technology. In the event of an error, TRAC will do its best to
       clean up any partially-written data in the storage layer.
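
A file upload might be sketched as below. FileWriteRequest field names (tenant, name, mimeType, size, content) are assumptions based on the description above; supplying the size enables the DATA_LOSS check:

```java
import com.google.common.util.concurrent.ListenableFuture;
import com.google.protobuf.ByteString;
import java.nio.file.Files;
import java.nio.file.Path;
import org.finos.tracdap.api.FileWriteRequest;
import org.finos.tracdap.metadata.TagHeader;

byte[] fileBytes = Files.readAllBytes(Path.of("report.pdf"));  // hypothetical file

FileWriteRequest request = FileWriteRequest.newBuilder()
        .setTenant("ACME_CORP")               // hypothetical tenant code
        .setName("report.pdf")
        .setMimeType("application/pdf")
        .setSize(fileBytes.length)            // optional; enables the size check
        .setContent(ByteString.copyFrom(fileBytes))
        .build();

// Resolves to the header of the newly created FILE object
ListenableFuture<TagHeader> future = stub.createSmallFile(request);
```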
       
    • updateSmallFile

      public com.google.common.util.concurrent.ListenableFuture<org.finos.tracdap.metadata.TagHeader> updateSmallFile(FileWriteRequest request)
      
       Upload a new version of an existing file into TRAC, sending the content as a single blob
       Calling this method will update the relevant FILE object in the metadata store.
       The latest version of the FILE must be supplied in the priorVersion field
       of the request. For example if the latest version of a FILE object is version 2,
       the priorVersion field should refer to version 2 and TRAC will create version 3
       as a result of the update call. The metadata and content of prior versions
       remain unaltered. The file name may be changed between versions, but the extension
       and mime type must stay the same. Tag updates can be supplied when updating a FILE;
       they will be passed on to the metadata service. The semantics for tag updates are
       identical to the updateObject() method in TracMetadataApi.
       This is a unary call. The request must contain all the relevant fields and the
       entire content of the file in a single message. Errors may occur if the file is
       too large to fit in a single message frame.
       Clients may specify the size of the file being updated. When a size is supplied, TRAC
       will check the size against the number of bytes stored. If the stored file size does not
       match the supplied value, the error will be reported with an error status of DATA_LOSS.
       When no size is supplied the check cannot be performed.
       The call returns the header for the new version of the FILE object. Error conditions
       include: Invalid request, unknown tenant, validation failure, failed preconditions
       (e.g. extension or mime type changes), file too large and data loss (if the number of
       bytes stored does not match the number specified in the request). Storage errors may also
       be reported if there is a problem communicating with the underlying storage technology.
       In the event of an error, TRAC will do its best to clean up any partially-written data in
       the storage layer.
       
    • readSmallFile

      public com.google.common.util.concurrent.ListenableFuture<FileReadResponse> readSmallFile(FileReadRequest request)
      
       Download a file that has been stored in TRAC and return it as a single blob
       The request uses a regular TagSelector to indicate which file to read. The
       semantics of the request are identical to the readObject() method in
       TracMetadataApi.
       This is a unary method; the response will contain the file definition and the
       whole content of the file in a single message. Errors may occur if the file is
       too large to fit in a single message frame.
       Error conditions include: Invalid request, unknown tenant, unknown object ID,
       object type does not match ID, unknown object version, unknown tag version,
       file too large. Storage errors may also be reported if there is a problem
       communicating with the underlying storage technology.
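
A file download might be sketched as follows; FileReadRequest field names (tenant, selector) are assumptions for illustration:

```java
import com.google.common.util.concurrent.ListenableFuture;
import org.finos.tracdap.api.FileReadRequest;
import org.finos.tracdap.api.FileReadResponse;
import org.finos.tracdap.metadata.ObjectType;
import org.finos.tracdap.metadata.TagSelector;

FileReadRequest read = FileReadRequest.newBuilder()
        .setTenant("ACME_CORP")               // hypothetical tenant code
        .setSelector(TagSelector.newBuilder()
                .setObjectType(ObjectType.FILE)
                .setObjectId(fileId)          // hypothetical FILE object ID
                .setLatestObject(true)
                .setLatestTag(true))
        .build();

// The single response carries the file definition and the whole file content;
// a file too large for one message frame will produce an error instead
ListenableFuture<FileReadResponse> future = stub.readSmallFile(read);
```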