
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.datalabeling/v1beta1.EvaluationJob


    Creates an evaluation job. Auto-naming is currently not supported for this resource.

    Create EvaluationJob Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new EvaluationJob(name: string, args: EvaluationJobArgs, opts?: CustomResourceOptions);
    @overload
    def EvaluationJob(resource_name: str,
                      args: EvaluationJobArgs,
                      opts: Optional[ResourceOptions] = None)
    
    @overload
    def EvaluationJob(resource_name: str,
                      opts: Optional[ResourceOptions] = None,
                      annotation_spec_set: Optional[str] = None,
                      description: Optional[str] = None,
                      evaluation_job_config: Optional[GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs] = None,
                      label_missing_ground_truth: Optional[bool] = None,
                      model_version: Optional[str] = None,
                      schedule: Optional[str] = None,
                      project: Optional[str] = None)
    func NewEvaluationJob(ctx *Context, name string, args EvaluationJobArgs, opts ...ResourceOption) (*EvaluationJob, error)
    public EvaluationJob(string name, EvaluationJobArgs args, CustomResourceOptions? opts = null)
    public EvaluationJob(String name, EvaluationJobArgs args)
    public EvaluationJob(String name, EvaluationJobArgs args, CustomResourceOptions options)
    
    type: google-native:datalabeling/v1beta1:EvaluationJob
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    name string
    The unique name of the resource.
    args EvaluationJobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args EvaluationJobArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args EvaluationJobArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args EvaluationJobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args EvaluationJobArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    Constructor example

    The following reference example uses placeholder values for all input properties.

    var evaluationJobResource = new GoogleNative.DataLabeling.V1Beta1.EvaluationJob("evaluationJobResource", new()
    {
        AnnotationSpecSet = "string",
        Description = "string",
        EvaluationJobConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs
        {
            BigqueryImportKeys = 
            {
                { "string", "string" },
            },
            EvaluationConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationConfigArgs
            {
                BoundingBoxEvaluationOptions = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsArgs
                {
                    IouThreshold = 0,
                },
            },
            ExampleCount = 0,
            ExampleSamplePercentage = 0,
            BoundingPolyConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BoundingPolyConfigArgs
            {
                AnnotationSpecSet = "string",
                InstructionMessage = "string",
            },
            EvaluationJobAlertConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigArgs
            {
                Email = "string",
                MinAcceptableMeanAveragePrecision = 0,
            },
            HumanAnnotationConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1HumanAnnotationConfigArgs
            {
                AnnotatedDatasetDisplayName = "string",
                Instruction = "string",
                AnnotatedDatasetDescription = "string",
                ContributorEmails = new[]
                {
                    "string",
                },
                LabelGroup = "string",
                LanguageCode = "string",
                QuestionDuration = "string",
                ReplicaCount = 0,
                UserEmailAddress = "string",
            },
            ImageClassificationConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ImageClassificationConfigArgs
            {
                AnnotationSpecSet = "string",
                AllowMultiLabel = false,
                AnswerAggregationType = GoogleNative.DataLabeling.V1Beta1.GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType.StringAggregationTypeUnspecified,
            },
            InputConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1InputConfigArgs
            {
                DataType = GoogleNative.DataLabeling.V1Beta1.GoogleCloudDatalabelingV1beta1InputConfigDataType.DataTypeUnspecified,
                AnnotationType = GoogleNative.DataLabeling.V1Beta1.GoogleCloudDatalabelingV1beta1InputConfigAnnotationType.AnnotationTypeUnspecified,
                BigquerySource = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BigQuerySourceArgs
                {
                    InputUri = "string",
                },
                ClassificationMetadata = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ClassificationMetadataArgs
                {
                    IsMultiLabel = false,
                },
                GcsSource = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1GcsSourceArgs
                {
                    InputUri = "string",
                    MimeType = "string",
                },
                TextMetadata = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextMetadataArgs
                {
                    LanguageCode = "string",
                },
            },
            TextClassificationConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextClassificationConfigArgs
            {
                AnnotationSpecSet = "string",
                AllowMultiLabel = false,
                SentimentConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1SentimentConfigArgs
                {
                    EnableLabelSentimentSelection = false,
                },
            },
        },
        LabelMissingGroundTruth = false,
        ModelVersion = "string",
        Schedule = "string",
        Project = "string",
    });
    
    example, err := datalabeling.NewEvaluationJob(ctx, "evaluationJobResource", &datalabeling.EvaluationJobArgs{
        AnnotationSpecSet: pulumi.String("string"),
        Description:       pulumi.String("string"),
        EvaluationJobConfig: &datalabeling.GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs{
            BigqueryImportKeys: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
            EvaluationConfig: &datalabeling.GoogleCloudDatalabelingV1beta1EvaluationConfigArgs{
                BoundingBoxEvaluationOptions: &datalabeling.GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsArgs{
                    IouThreshold: pulumi.Float64(0),
                },
            },
            ExampleCount:            pulumi.Int(0),
            ExampleSamplePercentage: pulumi.Float64(0),
            BoundingPolyConfig: &datalabeling.GoogleCloudDatalabelingV1beta1BoundingPolyConfigArgs{
                AnnotationSpecSet:  pulumi.String("string"),
                InstructionMessage: pulumi.String("string"),
            },
            EvaluationJobAlertConfig: &datalabeling.GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigArgs{
                Email:                             pulumi.String("string"),
                MinAcceptableMeanAveragePrecision: pulumi.Float64(0),
            },
            HumanAnnotationConfig: &datalabeling.GoogleCloudDatalabelingV1beta1HumanAnnotationConfigArgs{
                AnnotatedDatasetDisplayName: pulumi.String("string"),
                Instruction:                 pulumi.String("string"),
                AnnotatedDatasetDescription: pulumi.String("string"),
                ContributorEmails: pulumi.StringArray{
                    pulumi.String("string"),
                },
                LabelGroup:       pulumi.String("string"),
                LanguageCode:     pulumi.String("string"),
                QuestionDuration: pulumi.String("string"),
                ReplicaCount:     pulumi.Int(0),
                UserEmailAddress: pulumi.String("string"),
            },
            ImageClassificationConfig: &datalabeling.GoogleCloudDatalabelingV1beta1ImageClassificationConfigArgs{
                AnnotationSpecSet:     pulumi.String("string"),
                AllowMultiLabel:       pulumi.Bool(false),
                AnswerAggregationType: datalabeling.GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationTypeStringAggregationTypeUnspecified,
            },
            InputConfig: &datalabeling.GoogleCloudDatalabelingV1beta1InputConfigArgs{
                DataType:       datalabeling.GoogleCloudDatalabelingV1beta1InputConfigDataTypeDataTypeUnspecified,
                AnnotationType: datalabeling.GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeAnnotationTypeUnspecified,
                BigquerySource: &datalabeling.GoogleCloudDatalabelingV1beta1BigQuerySourceArgs{
                    InputUri: pulumi.String("string"),
                },
                ClassificationMetadata: &datalabeling.GoogleCloudDatalabelingV1beta1ClassificationMetadataArgs{
                    IsMultiLabel: pulumi.Bool(false),
                },
                GcsSource: &datalabeling.GoogleCloudDatalabelingV1beta1GcsSourceArgs{
                    InputUri: pulumi.String("string"),
                    MimeType: pulumi.String("string"),
                },
                TextMetadata: &datalabeling.GoogleCloudDatalabelingV1beta1TextMetadataArgs{
                    LanguageCode: pulumi.String("string"),
                },
            },
            TextClassificationConfig: &datalabeling.GoogleCloudDatalabelingV1beta1TextClassificationConfigArgs{
                AnnotationSpecSet: pulumi.String("string"),
                AllowMultiLabel:   pulumi.Bool(false),
                SentimentConfig: &datalabeling.GoogleCloudDatalabelingV1beta1SentimentConfigArgs{
                    EnableLabelSentimentSelection: pulumi.Bool(false),
                },
            },
        },
        LabelMissingGroundTruth: pulumi.Bool(false),
        ModelVersion:            pulumi.String("string"),
        Schedule:                pulumi.String("string"),
        Project:                 pulumi.String("string"),
    })
    
    var evaluationJobResource = new EvaluationJob("evaluationJobResource", EvaluationJobArgs.builder()
        .annotationSpecSet("string")
        .description("string")
        .evaluationJobConfig(GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs.builder()
            .bigqueryImportKeys(Map.of("string", "string"))
            .evaluationConfig(GoogleCloudDatalabelingV1beta1EvaluationConfigArgs.builder()
                .boundingBoxEvaluationOptions(GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsArgs.builder()
                    .iouThreshold(0)
                    .build())
                .build())
            .exampleCount(0)
            .exampleSamplePercentage(0)
            .boundingPolyConfig(GoogleCloudDatalabelingV1beta1BoundingPolyConfigArgs.builder()
                .annotationSpecSet("string")
                .instructionMessage("string")
                .build())
            .evaluationJobAlertConfig(GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigArgs.builder()
                .email("string")
                .minAcceptableMeanAveragePrecision(0)
                .build())
            .humanAnnotationConfig(GoogleCloudDatalabelingV1beta1HumanAnnotationConfigArgs.builder()
                .annotatedDatasetDisplayName("string")
                .instruction("string")
                .annotatedDatasetDescription("string")
                .contributorEmails("string")
                .labelGroup("string")
                .languageCode("string")
                .questionDuration("string")
                .replicaCount(0)
                .userEmailAddress("string")
                .build())
            .imageClassificationConfig(GoogleCloudDatalabelingV1beta1ImageClassificationConfigArgs.builder()
                .annotationSpecSet("string")
                .allowMultiLabel(false)
                .answerAggregationType("STRING_AGGREGATION_TYPE_UNSPECIFIED")
                .build())
            .inputConfig(GoogleCloudDatalabelingV1beta1InputConfigArgs.builder()
                .dataType("DATA_TYPE_UNSPECIFIED")
                .annotationType("ANNOTATION_TYPE_UNSPECIFIED")
                .bigquerySource(GoogleCloudDatalabelingV1beta1BigQuerySourceArgs.builder()
                    .inputUri("string")
                    .build())
                .classificationMetadata(GoogleCloudDatalabelingV1beta1ClassificationMetadataArgs.builder()
                    .isMultiLabel(false)
                    .build())
                .gcsSource(GoogleCloudDatalabelingV1beta1GcsSourceArgs.builder()
                    .inputUri("string")
                    .mimeType("string")
                    .build())
                .textMetadata(GoogleCloudDatalabelingV1beta1TextMetadataArgs.builder()
                    .languageCode("string")
                    .build())
                .build())
            .textClassificationConfig(GoogleCloudDatalabelingV1beta1TextClassificationConfigArgs.builder()
                .annotationSpecSet("string")
                .allowMultiLabel(false)
                .sentimentConfig(GoogleCloudDatalabelingV1beta1SentimentConfigArgs.builder()
                    .enableLabelSentimentSelection(false)
                    .build())
                .build())
            .build())
        .labelMissingGroundTruth(false)
        .modelVersion("string")
        .schedule("string")
        .project("string")
        .build());
    
    evaluation_job_resource = google_native.datalabeling.v1beta1.EvaluationJob("evaluationJobResource",
        annotation_spec_set="string",
        description="string",
        evaluation_job_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs(
            bigquery_import_keys={
                "string": "string",
            },
            evaluation_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1EvaluationConfigArgs(
                bounding_box_evaluation_options=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsArgs(
                    iou_threshold=0,
                ),
            ),
            example_count=0,
            example_sample_percentage=0,
            bounding_poly_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1BoundingPolyConfigArgs(
                annotation_spec_set="string",
                instruction_message="string",
            ),
            evaluation_job_alert_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigArgs(
                email="string",
                min_acceptable_mean_average_precision=0,
            ),
            human_annotation_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1HumanAnnotationConfigArgs(
                annotated_dataset_display_name="string",
                instruction="string",
                annotated_dataset_description="string",
                contributor_emails=["string"],
                label_group="string",
                language_code="string",
                question_duration="string",
                replica_count=0,
                user_email_address="string",
            ),
            image_classification_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1ImageClassificationConfigArgs(
                annotation_spec_set="string",
                allow_multi_label=False,
                answer_aggregation_type=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType.STRING_AGGREGATION_TYPE_UNSPECIFIED,
            ),
            input_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1InputConfigArgs(
                data_type=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1InputConfigDataType.DATA_TYPE_UNSPECIFIED,
                annotation_type=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1InputConfigAnnotationType.ANNOTATION_TYPE_UNSPECIFIED,
                bigquery_source=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1BigQuerySourceArgs(
                    input_uri="string",
                ),
                classification_metadata=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1ClassificationMetadataArgs(
                    is_multi_label=False,
                ),
                gcs_source=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1GcsSourceArgs(
                    input_uri="string",
                    mime_type="string",
                ),
                text_metadata=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1TextMetadataArgs(
                    language_code="string",
                ),
            ),
            text_classification_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1TextClassificationConfigArgs(
                annotation_spec_set="string",
                allow_multi_label=False,
                sentiment_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1SentimentConfigArgs(
                    enable_label_sentiment_selection=False,
                ),
            ),
        ),
        label_missing_ground_truth=False,
        model_version="string",
        schedule="string",
        project="string")
    
    const evaluationJobResource = new google_native.datalabeling.v1beta1.EvaluationJob("evaluationJobResource", {
        annotationSpecSet: "string",
        description: "string",
        evaluationJobConfig: {
            bigqueryImportKeys: {
                string: "string",
            },
            evaluationConfig: {
                boundingBoxEvaluationOptions: {
                    iouThreshold: 0,
                },
            },
            exampleCount: 0,
            exampleSamplePercentage: 0,
            boundingPolyConfig: {
                annotationSpecSet: "string",
                instructionMessage: "string",
            },
            evaluationJobAlertConfig: {
                email: "string",
                minAcceptableMeanAveragePrecision: 0,
            },
            humanAnnotationConfig: {
                annotatedDatasetDisplayName: "string",
                instruction: "string",
                annotatedDatasetDescription: "string",
                contributorEmails: ["string"],
                labelGroup: "string",
                languageCode: "string",
                questionDuration: "string",
                replicaCount: 0,
                userEmailAddress: "string",
            },
            imageClassificationConfig: {
                annotationSpecSet: "string",
                allowMultiLabel: false,
                answerAggregationType: google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType.StringAggregationTypeUnspecified,
            },
            inputConfig: {
                dataType: google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1InputConfigDataType.DataTypeUnspecified,
                annotationType: google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1InputConfigAnnotationType.AnnotationTypeUnspecified,
                bigquerySource: {
                    inputUri: "string",
                },
                classificationMetadata: {
                    isMultiLabel: false,
                },
                gcsSource: {
                    inputUri: "string",
                    mimeType: "string",
                },
                textMetadata: {
                    languageCode: "string",
                },
            },
            textClassificationConfig: {
                annotationSpecSet: "string",
                allowMultiLabel: false,
                sentimentConfig: {
                    enableLabelSentimentSelection: false,
                },
            },
        },
        labelMissingGroundTruth: false,
        modelVersion: "string",
        schedule: "string",
        project: "string",
    });
    
    type: google-native:datalabeling/v1beta1:EvaluationJob
    properties:
        annotationSpecSet: string
        description: string
        evaluationJobConfig:
            bigqueryImportKeys:
                string: string
            boundingPolyConfig:
                annotationSpecSet: string
                instructionMessage: string
            evaluationConfig:
                boundingBoxEvaluationOptions:
                    iouThreshold: 0
            evaluationJobAlertConfig:
                email: string
                minAcceptableMeanAveragePrecision: 0
            exampleCount: 0
            exampleSamplePercentage: 0
            humanAnnotationConfig:
                annotatedDatasetDescription: string
                annotatedDatasetDisplayName: string
                contributorEmails:
                    - string
                instruction: string
                labelGroup: string
                languageCode: string
                questionDuration: string
                replicaCount: 0
                userEmailAddress: string
            imageClassificationConfig:
                allowMultiLabel: false
                annotationSpecSet: string
                answerAggregationType: STRING_AGGREGATION_TYPE_UNSPECIFIED
            inputConfig:
                annotationType: ANNOTATION_TYPE_UNSPECIFIED
                bigquerySource:
                    inputUri: string
                classificationMetadata:
                    isMultiLabel: false
                dataType: DATA_TYPE_UNSPECIFIED
                gcsSource:
                    inputUri: string
                    mimeType: string
                textMetadata:
                    languageCode: string
            textClassificationConfig:
                allowMultiLabel: false
                annotationSpecSet: string
                sentimentConfig:
                    enableLabelSentimentSelection: false
        labelMissingGroundTruth: false
        modelVersion: string
        project: string
        schedule: string
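
    To complement the placeholder reference above, here is a minimal TypeScript sketch with more realistic values. Everything specific in it is hypothetical (project ID, annotation spec set, model version, BigQuery table, and the JSON keys), and the enum member names are assumed to follow the pattern shown in the reference example above (the underlying API values are IMAGE and IMAGE_CLASSIFICATION_ANNOTATION).

    import * as google_native from "@pulumi/google-native";

    const project = "my-project"; // hypothetical project ID

    // Evaluate an image-classification model version on an interval, providing
    // your own ground truth in the sampled BigQuery table.
    const evalJob = new google_native.datalabeling.v1beta1.EvaluationJob("imageClassificationEval", {
        project: project,
        description: "Continuous evaluation for the image classifier",
        annotationSpecSet: `projects/${project}/annotationSpecSets/my-spec-set`,
        modelVersion: `projects/${project}/models/my_model/versions/v1`,
        schedule: "0 10 * * *",         // crontab form; only the (daily) interval is honored
        labelMissingGroundTruth: false, // ground truth comes from the BigQuery table
        evaluationJobConfig: {
            inputConfig: {
                dataType: google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1InputConfigDataType.Image,
                annotationType: google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1InputConfigAnnotationType.ImageClassificationAnnotation,
                bigquerySource: {
                    inputUri: `bq://${project}/eval_dataset/eval_table`,
                },
            },
            evaluationConfig: {}, // empty unless the model performs object detection
            bigqueryImportKeys: {
                data_json_key: "image_url",   // hypothetical JSON keys in the prediction rows
                label_json_key: "label",
                label_score_json_key: "score",
            },
            exampleSamplePercentage: 0.1, // sample 10% of served predictions...
            exampleCount: 1000,           // ...but never more than 1000 per interval
        },
    });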
    

    EvaluationJob Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    The EvaluationJob resource accepts the following input properties:

    AnnotationSpecSet string
    Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
    Description string
    Description of the job. The description can be up to 25,000 characters long.
    EvaluationJobConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationJobConfig
    Configuration details for the evaluation job.
    LabelMissingGroundTruth bool
    Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
    ModelVersion string
    The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
    Schedule string
    Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
    Project string
    AnnotationSpecSet string
    Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
    Description string
    Description of the job. The description can be up to 25,000 characters long.
    EvaluationJobConfig GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs
    Configuration details for the evaluation job.
    LabelMissingGroundTruth bool
    Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
    ModelVersion string
    The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
    Schedule string
    Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
    Project string
    annotationSpecSet String
    Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
    description String
    Description of the job. The description can be up to 25,000 characters long.
    evaluationJobConfig GoogleCloudDatalabelingV1beta1EvaluationJobConfig
    Configuration details for the evaluation job.
    labelMissingGroundTruth Boolean
    Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
    modelVersion String
    The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
    schedule String
    Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
    project String
    annotationSpecSet string
    Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
    description string
    Description of the job. The description can be up to 25,000 characters long.
    evaluationJobConfig GoogleCloudDatalabelingV1beta1EvaluationJobConfig
    Configuration details for the evaluation job.
    labelMissingGroundTruth boolean
    Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
    modelVersion string
    The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
    schedule string
    Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
    project string
    annotation_spec_set str
    Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
    description str
    Description of the job. The description can be up to 25,000 characters long.
    evaluation_job_config GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs
    Configuration details for the evaluation job.
    label_missing_ground_truth bool
    Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
    model_version str
    The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
    schedule str
    Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
    project str
    annotationSpecSet String
    Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
    description String
    Description of the job. The description can be up to 25,000 characters long.
    evaluationJobConfig Property Map
    Configuration details for the evaluation job.
    labelMissingGroundTruth Boolean
    Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
    modelVersion String
    The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
    schedule String
    Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
    project String
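
    The labelMissingGroundTruth flag above also determines what the job config must carry: when it is true, the EvaluationJobConfig (described under Supporting Types below) must include a humanAnnotationConfig whose instruction field names an Instruction resource created beforehand. A hedged TypeScript fragment for that variant, with purely hypothetical names and the imageClassificationConfig assumed for an image-classification model:

    // Hypothetical fragment: the service assigns human labelers to produce ground truth.
    const humanLabeledArgs = {
        labelMissingGroundTruth: true,
        evaluationJobConfig: {
            humanAnnotationConfig: {
                instruction: "projects/my-project/instructions/my-instruction", // pre-created Instruction resource
                annotatedDatasetDisplayName: "eval-ground-truth",
            },
            imageClassificationConfig: {
                annotationSpecSet: "projects/my-project/annotationSpecSets/my-spec-set",
                allowMultiLabel: false,
            },
            // inputConfig, evaluationConfig, and bigqueryImportKeys as in the earlier sketch.
        },
    };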

    Outputs

    All input properties are implicitly available as output properties. Additionally, the EvaluationJob resource produces the following output properties:

    Attempts List<Pulumi.GoogleNative.DataLabeling.V1Beta1.Outputs.GoogleCloudDatalabelingV1beta1AttemptResponse>
    Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
    CreateTime string
    Timestamp of when this evaluation job was created.
    Id string
    The provider-assigned unique ID for this managed resource.
    Name string
    After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"
    State string
    Describes the current state of the job.
    Attempts []GoogleCloudDatalabelingV1beta1AttemptResponse
    Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
    CreateTime string
    Timestamp of when this evaluation job was created.
    Id string
    The provider-assigned unique ID for this managed resource.
    Name string
    After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"
    State string
    Describes the current state of the job.
    attempts List<GoogleCloudDatalabelingV1beta1AttemptResponse>
    Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
    createTime String
    Timestamp of when this evaluation job was created.
    id String
    The provider-assigned unique ID for this managed resource.
    name String
    After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"
    state String
    Describes the current state of the job.
    attempts GoogleCloudDatalabelingV1beta1AttemptResponse[]
    Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
    createTime string
    Timestamp of when this evaluation job was created.
    id string
    The provider-assigned unique ID for this managed resource.
    name string
    After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"
    state string
    Describes the current state of the job.
    attempts Sequence[GoogleCloudDatalabelingV1beta1AttemptResponse]
    Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
    create_time str
    Timestamp of when this evaluation job was created.
    id str
    The provider-assigned unique ID for this managed resource.
    name str
    After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"
    state str
    Describes the current state of the job.
    attempts List<Property Map>
    Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
    createTime String
    Timestamp of when this evaluation job was created.
    id String
    The provider-assigned unique ID for this managed resource.
    name String
    After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"
    state String
    Describes the current state of the job.
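
    These output properties can be read straight off the resource; continuing the hypothetical evalJob from the earlier TypeScript sketch:

    // Export the service-assigned job name, its current state, and its creation time.
    export const evaluationJobName = evalJob.name;
    export const evaluationJobState = evalJob.state;
    export const evaluationJobCreateTime = evalJob.createTime;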

    Supporting Types

    GoogleCloudDatalabelingV1beta1AttemptResponse, GoogleCloudDatalabelingV1beta1AttemptResponseArgs

    AttemptTime string
    PartialFailures []GoogleRpcStatusResponse
    Details of errors that occurred.
    attemptTime String
    partialFailures List<GoogleRpcStatusResponse>
    Details of errors that occurred.
    attemptTime string
    partialFailures GoogleRpcStatusResponse[]
    Details of errors that occurred.
    attemptTime String
    partialFailures List<Property Map>
    Details of errors that occurred.

    GoogleCloudDatalabelingV1beta1BigQuerySource, GoogleCloudDatalabelingV1beta1BigQuerySourceArgs

    InputUri string
    BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
    InputUri string
    BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
    inputUri String
    BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
    inputUri string
    BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
    input_uri str
    BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
    inputUri String
    BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.

    GoogleCloudDatalabelingV1beta1BigQuerySourceResponse, GoogleCloudDatalabelingV1beta1BigQuerySourceResponseArgs

    InputUri string
    BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
    InputUri string
    BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
    inputUri String
    BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
    inputUri string
    BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
    input_uri str
    BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
    inputUri String
    BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.

    GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptions, GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsArgs

    IouThreshold double
    Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
    IouThreshold float64
    Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
    iouThreshold Double
    Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
    iouThreshold number
    Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
    iou_threshold float
    Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
    iouThreshold Number
    Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
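
    As a concrete illustration (the threshold value is assumed, not taken from the reference), this is how the option plugs into an evaluation config for an object-detection model:

    // Hypothetical: a predicted box counts as a match only if it overlaps the
    // ground-truth box by at least 50% intersection-over-union.
    const detectionEvaluationConfig = {
        boundingBoxEvaluationOptions: {
            iouThreshold: 0.5,
        },
    };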

    GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse, GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponseArgs

    IouThreshold double
    Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
    IouThreshold float64
    Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
    iouThreshold Double
    Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
    iouThreshold number
    Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
    iou_threshold float
    Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
    iouThreshold Number
    Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.

    GoogleCloudDatalabelingV1beta1BoundingPolyConfig, GoogleCloudDatalabelingV1beta1BoundingPolyConfigArgs

    AnnotationSpecSet string
    Annotation spec set resource name.
    InstructionMessage string
    Optional. Instruction message shown on the contributors' UI.
    AnnotationSpecSet string
    Annotation spec set resource name.
    InstructionMessage string
    Optional. Instruction message shown on the contributors' UI.
    annotationSpecSet String
    Annotation spec set resource name.
    instructionMessage String
    Optional. Instruction message shown on the contributors' UI.
    annotationSpecSet string
    Annotation spec set resource name.
    instructionMessage string
    Optional. Instruction message shown on the contributors' UI.
    annotation_spec_set str
    Annotation spec set resource name.
    instruction_message str
    Optional. Instruction message shown on the contributors' UI.
    annotationSpecSet String
    Annotation spec set resource name.
    instructionMessage String
    Optional. Instruction message shown on the contributors' UI.

    GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse, GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponseArgs

    AnnotationSpecSet string
    Annotation spec set resource name.
    InstructionMessage string
    Optional. Instruction message shown on the contributors' UI.
    AnnotationSpecSet string
    Annotation spec set resource name.
    InstructionMessage string
    Optional. Instruction message shown on the contributors' UI.
    annotationSpecSet String
    Annotation spec set resource name.
    instructionMessage String
    Optional. Instruction message shown on the contributors' UI.
    annotationSpecSet string
    Annotation spec set resource name.
    instructionMessage string
    Optional. Instruction message shown on the contributors' UI.
    annotation_spec_set str
    Annotation spec set resource name.
    instruction_message str
    Optional. Instruction message shown on the contributors' UI.
    annotationSpecSet String
    Annotation spec set resource name.
    instructionMessage String
    Optional. Instruction message shown on the contributors' UI.

    GoogleCloudDatalabelingV1beta1ClassificationMetadata, GoogleCloudDatalabelingV1beta1ClassificationMetadataArgs

    IsMultiLabel bool
    Whether the classification task is multi-label or not.
    IsMultiLabel bool
    Whether the classification task is multi-label or not.
    isMultiLabel Boolean
    Whether the classification task is multi-label or not.
    isMultiLabel boolean
    Whether the classification task is multi-label or not.
    is_multi_label bool
    Whether the classification task is multi-label or not.
    isMultiLabel Boolean
    Whether the classification task is multi-label or not.

    GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse, GoogleCloudDatalabelingV1beta1ClassificationMetadataResponseArgs

    IsMultiLabel bool
    Whether the classification task is multi-label or not.
    IsMultiLabel bool
    Whether the classification task is multi-label or not.
    isMultiLabel Boolean
    Whether the classification task is multi-label or not.
    isMultiLabel boolean
    Whether the classification task is multi-label or not.
    is_multi_label bool
    Whether the classification task is multi-label or not.
    isMultiLabel Boolean
    Whether the classification task is multi-label or not.

    GoogleCloudDatalabelingV1beta1EvaluationConfig, GoogleCloudDatalabelingV1beta1EvaluationConfigArgs

    BoundingBoxEvaluationOptions Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptions
    Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
    BoundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptions
    Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
    boundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptions
    Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
    boundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptions
    Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
    bounding_box_evaluation_options GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptions
    Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
    boundingBoxEvaluationOptions Property Map
    Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.

    GoogleCloudDatalabelingV1beta1EvaluationConfigResponse, GoogleCloudDatalabelingV1beta1EvaluationConfigResponseArgs

    BoundingBoxEvaluationOptions Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse
    Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
    BoundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse
    Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
    boundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse
    Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
    boundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse
    Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
    bounding_box_evaluation_options GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse
    Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
    boundingBoxEvaluationOptions Property Map
    Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.

    GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfig, GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigArgs

    Email string
    An email address to send alerts to.
    MinAcceptableMeanAveragePrecision double
    A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
    Email string
    An email address to send alerts to.
    MinAcceptableMeanAveragePrecision float64
    A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
    email String
    An email address to send alerts to.
    minAcceptableMeanAveragePrecision Double
    A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
    email string
    An email address to send alerts to.
    minAcceptableMeanAveragePrecision number
    A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
    email str
    An email address to send alerts to.
    min_acceptable_mean_average_precision float
    A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
    email String
    An email address to send alerts to.
    minAcceptableMeanAveragePrecision Number
    A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
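
    A short hypothetical TypeScript fragment showing plausible values for this alert config (the address and threshold are illustrative only):

    // Email an alert whenever a run's mean average precision drops below 0.7.
    const alertConfig = {
        email: "ml-alerts@example.com",
        minAcceptableMeanAveragePrecision: 0.7,
    };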

    GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse, GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponseArgs

    Email string
    An email address to send alerts to.
    MinAcceptableMeanAveragePrecision double
    A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
    Email string
    An email address to send alerts to.
    MinAcceptableMeanAveragePrecision float64
    A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
    email String
    An email address to send alerts to.
    minAcceptableMeanAveragePrecision Double
    A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
    email string
    An email address to send alerts to.
    minAcceptableMeanAveragePrecision number
    A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
    email str
    An email address to send alerts to.
    min_acceptable_mean_average_precision float
    A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
    email String
    An email address to send alerts to.
    minAcceptableMeanAveragePrecision Number
    A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.

    GoogleCloudDatalabelingV1beta1EvaluationJobConfig, GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs

    BigqueryImportKeys Dictionary<string, string>
    Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
    EvaluationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationConfig
    Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
    ExampleCount int
    The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
    ExampleSamplePercentage double
    Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    BoundingPolyConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BoundingPolyConfig
    Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
    EvaluationJobAlertConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfig
    Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
    HumanAnnotationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1HumanAnnotationConfig
    Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
    ImageClassificationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ImageClassificationConfig
    Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    InputConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1InputConfig
    Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
    TextClassificationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextClassificationConfig
    Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    BigqueryImportKeys map[string]string
    Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
    EvaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfig
    Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
    ExampleCount int
    The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
    ExampleSamplePercentage float64
    Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    BoundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfig
    Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
    EvaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfig
    Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
    HumanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfig
    Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
    ImageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfig
    Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    InputConfig GoogleCloudDatalabelingV1beta1InputConfig
    Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
    TextClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfig
    Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    bigqueryImportKeys Map<String,String>
    Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
    evaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfig
    Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
    exampleCount Integer
    The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
    exampleSamplePercentage Double
    Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    boundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfig
    Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
    evaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfig
    Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
    humanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfig
    Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
    imageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfig
    Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    inputConfig GoogleCloudDatalabelingV1beta1InputConfig
    Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
    textClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfig
    Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    bigqueryImportKeys {[key: string]: string}
    Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
    evaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfig
    Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
    exampleCount number
    The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
    exampleSamplePercentage number
    Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    boundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfig
    Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
    evaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfig
    Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
    humanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfig
    Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
    imageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfig
    Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    inputConfig GoogleCloudDatalabelingV1beta1InputConfig
    Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
    textClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfig
    Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    bigquery_import_keys Mapping[str, str]
    Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
    evaluation_config GoogleCloudDatalabelingV1beta1EvaluationConfig
    Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
    example_count int
    The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
    example_sample_percentage float
    Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    bounding_poly_config GoogleCloudDatalabelingV1beta1BoundingPolyConfig
    Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
    evaluation_job_alert_config GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfig
    Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
    human_annotation_config GoogleCloudDatalabelingV1beta1HumanAnnotationConfig
    Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
    image_classification_config GoogleCloudDatalabelingV1beta1ImageClassificationConfig
    Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    input_config GoogleCloudDatalabelingV1beta1InputConfig
    Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
    text_classification_config GoogleCloudDatalabelingV1beta1TextClassificationConfig
    Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    bigqueryImportKeys Map<String>
    Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
    evaluationConfig Property Map
    Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
    exampleCount Number
    The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
    exampleSamplePercentage Number
    Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    boundingPolyConfig Property Map
    Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
    evaluationJobAlertConfig Property Map
    Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
    humanAnnotationConfig Property Map
    Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
    imageClassificationConfig Property Map
    Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    inputConfig Property Map
    Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
    textClassificationConfig Property Map
    Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
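
    As a sketch of how these pieces fit together, the following TypeScript program creates an evaluation job for an image-classification model version. Every identifier (project, model, annotation spec set, BigQuery table) and every numeric value is a placeholder, and the nested field shapes should be checked against the reference entries above rather than taken as authoritative.

    import * as google_native from "@pulumi/google-native";

    // Illustrative sketch: a continuously scheduled evaluation job for an image classifier.
    const evalJob = new google_native.datalabeling.v1beta1.EvaluationJob("imageClassificationEval", {
        description: "Daily evaluation of the image classifier",
        annotationSpecSet: "projects/my-project/annotationSpecSets/my-spec-set",
        modelVersion: "projects/my-project/models/my-model/versions/v1",
        schedule: "every 24 hours",
        labelMissingGroundTruth: false,
        evaluationJobConfig: {
            // Where sampled predictions are read from and how they are annotated.
            inputConfig: {
                dataType: "IMAGE",
                annotationType: "IMAGE_CLASSIFICATION_ANNOTATION",
                classificationMetadata: { isMultiLabel: false },
                bigquerySource: { inputUri: "bq://my-project.eval_dataset.predictions" },
            },
            // How to parse the JSON rows that the service writes to BigQuery.
            bigqueryImportKeys: {
                data_json_key: "image_uri",
                label_json_key: "label",
                label_score_json_key: "score",
            },
            // Not object detection, so an empty evaluationConfig is sufficient.
            evaluationConfig: {},
            imageClassificationConfig: {
                annotationSpecSet: "projects/my-project/annotationSpecSets/my-spec-set",
                allowMultiLabel: false,
            },
            exampleSamplePercentage: 0.1, // sample 10% of served predictions ...
            exampleCount: 1000,           // ... but never more than 1000 per interval
            evaluationJobAlertConfig: {
                email: "ml-alerts@example.com",
                minAcceptableMeanAveragePrecision: 0.7,
            },
        },
    });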

    GoogleCloudDatalabelingV1beta1EvaluationJobConfigResponse, GoogleCloudDatalabelingV1beta1EvaluationJobConfigResponseArgs

    BigqueryImportKeys Dictionary<string, string>
    Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
    BoundingPolyConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse
    Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
    EvaluationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationConfigResponse
    Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
    EvaluationJobAlertConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse
    Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
    ExampleCount int
    The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
    ExampleSamplePercentage double
    Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    HumanAnnotationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse
    Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
    ImageClassificationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse
    Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    InputConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1InputConfigResponse
    Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
    TextClassificationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse
    Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    BigqueryImportKeys map[string]string
    Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
    BoundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse
    Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
    EvaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfigResponse
    Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
    EvaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse
    Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
    ExampleCount int
    The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
    ExampleSamplePercentage float64
    Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    HumanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse
    Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
    ImageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse
    Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    InputConfig GoogleCloudDatalabelingV1beta1InputConfigResponse
    Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
    TextClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse
    Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    bigqueryImportKeys Map<String,String>
    Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
    boundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse
    Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
    evaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfigResponse
    Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
    evaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse
    Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
    exampleCount Integer
    The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
    exampleSamplePercentage Double
    Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    humanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse
    Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
    imageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse
    Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    inputConfig GoogleCloudDatalabelingV1beta1InputConfigResponse
    Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
    textClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse
    Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    bigqueryImportKeys {[key: string]: string}
    Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
    boundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse
    Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
    evaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfigResponse
    Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
    evaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse
    Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
    exampleCount number
    The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
    exampleSamplePercentage number
    Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    humanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse
    Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
    imageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse
    Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    inputConfig GoogleCloudDatalabelingV1beta1InputConfigResponse
    Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
    textClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse
    Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    bigquery_import_keys Mapping[str, str]
    Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
    bounding_poly_config GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse
    Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
    evaluation_config GoogleCloudDatalabelingV1beta1EvaluationConfigResponse
    Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
    evaluation_job_alert_config GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse
    Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
    example_count int
    The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
    example_sample_percentage float
    Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    human_annotation_config GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse
    Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
    image_classification_config GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse
    Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    input_config GoogleCloudDatalabelingV1beta1InputConfigResponse
    Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
    text_classification_config GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse
    Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    bigqueryImportKeys Map<String>
    Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
    boundingPolyConfig Property Map
    Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
    evaluationConfig Property Map
    Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
    evaluationJobAlertConfig Property Map
    Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
    exampleCount Number
    The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
    exampleSamplePercentage Number
    Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    humanAnnotationConfig Property Map
    Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
    imageClassificationConfig Property Map
    Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
    inputConfig Property Map
    Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
    textClassificationConfig Property Map
    Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
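
    Because these Response types are output shapes, they surface on the resource after deployment. Assuming the evalJob resource from the earlier sketch, the resolved configuration can be exported like this (a hedged example relying on Pulumi's lifted property access):

    // Export the BigQuery import keys the service will use, read back from the resource outputs.
    export const importKeys = evalJob.evaluationJobConfig.bigqueryImportKeys;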

    GoogleCloudDatalabelingV1beta1GcsSource, GoogleCloudDatalabelingV1beta1GcsSourceArgs

    InputUri string
    The input URI of the source file. This must be a Cloud Storage path (gs://...).
    MimeType string
    The format of the source file. Only "text/csv" is supported.
    InputUri string
    The input URI of the source file. This must be a Cloud Storage path (gs://...).
    MimeType string
    The format of the source file. Only "text/csv" is supported.
    inputUri String
    The input URI of the source file. This must be a Cloud Storage path (gs://...).
    mimeType String
    The format of the source file. Only "text/csv" is supported.
    inputUri string
    The input URI of the source file. This must be a Cloud Storage path (gs://...).
    mimeType string
    The format of the source file. Only "text/csv" is supported.
    input_uri str
    The input URI of the source file. This must be a Cloud Storage path (gs://...).
    mime_type str
    The format of the source file. Only "text/csv" is supported.
    inputUri String
    The input URI of the source file. This must be a Cloud Storage path (gs://...).
    mimeType String
    The format of the source file. Only "text/csv" is supported.
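
    For reference, a GcsSource is only a pointer to a CSV in Cloud Storage; the bucket and object in this TypeScript fragment are placeholders.

    // GcsSource fragment: a CSV of input items in Cloud Storage.
    const gcsSource = {
        inputUri: "gs://my-bucket/datasets/items.csv", // must be a gs:// path
        mimeType: "text/csv",                          // only "text/csv" is supported
    };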

    GoogleCloudDatalabelingV1beta1GcsSourceResponse, GoogleCloudDatalabelingV1beta1GcsSourceResponseArgs

    InputUri string
    The input URI of the source file. This must be a Cloud Storage path (gs://...).
    MimeType string
    The format of the source file. Only "text/csv" is supported.
    InputUri string
    The input URI of the source file. This must be a Cloud Storage path (gs://...).
    MimeType string
    The format of the source file. Only "text/csv" is supported.
    inputUri String
    The input URI of the source file. This must be a Cloud Storage path (gs://...).
    mimeType String
    The format of the source file. Only "text/csv" is supported.
    inputUri string
    The input URI of the source file. This must be a Cloud Storage path (gs://...).
    mimeType string
    The format of the source file. Only "text/csv" is supported.
    input_uri str
    The input URI of the source file. This must be a Cloud Storage path (gs://...).
    mime_type str
    The format of the source file. Only "text/csv" is supported.
    inputUri String
    The input URI of the source file. This must be a Cloud Storage path (gs://...).
    mimeType String
    The format of the source file. Only "text/csv" is supported.

    GoogleCloudDatalabelingV1beta1HumanAnnotationConfig, GoogleCloudDatalabelingV1beta1HumanAnnotationConfigArgs

    AnnotatedDatasetDisplayName string
    A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
    Instruction string
    Instruction resource name.
    AnnotatedDatasetDescription string
    Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
    ContributorEmails List<string>
    Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
    LabelGroup string
    Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
    LanguageCode string
    Optional. The language of this question, as a BCP-47 language code. The default value is en-US. Set this only when the task is language-related, for example, French text classification.
    QuestionDuration string
    Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
    ReplicaCount int
    Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image-related labeling, valid values are 1, 3, and 5.
    UserEmailAddress string
    Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
    AnnotatedDatasetDisplayName string
    A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
    Instruction string
    Instruction resource name.
    AnnotatedDatasetDescription string
    Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
    ContributorEmails []string
    Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
    LabelGroup string
    Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
    LanguageCode string
    Optional. The language of this question, as a BCP-47 language code. The default value is en-US. Set this only when the task is language-related, for example, French text classification.
    QuestionDuration string
    Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
    ReplicaCount int
    Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
    UserEmailAddress string
    Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
    annotatedDatasetDisplayName String
    A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
    instruction String
    Instruction resource name.
    annotatedDatasetDescription String
    Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
    contributorEmails List<String>
    Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
    labelGroup String
    Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
    languageCode String
    Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related, for example, French text classification.
    questionDuration String
    Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
    replicaCount Integer
    Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
    userEmailAddress String
    Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
    annotatedDatasetDisplayName string
    A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
    instruction string
    Instruction resource name.
    annotatedDatasetDescription string
    Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
    contributorEmails string[]
    Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
    labelGroup string
    Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
    languageCode string
    Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related, for example, French text classification.
    questionDuration string
    Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
    replicaCount number
    Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
    userEmailAddress string
    Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
    annotated_dataset_display_name str
    A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
    instruction str
    Instruction resource name.
    annotated_dataset_description str
    Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
    contributor_emails Sequence[str]
    Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
    label_group str
    Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
    language_code str
    Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related, for example, French text classification.
    question_duration str
    Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
    replica_count int
    Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
    user_email_address str
    Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
    annotatedDatasetDisplayName String
    A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
    instruction String
    Instruction resource name.
    annotatedDatasetDescription String
    Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
    contributorEmails List<String>
    Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
    labelGroup String
    Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
    languageCode String
    Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related, for example, French text classification.
    questionDuration String
    Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
    replicaCount Number
    Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
    userEmailAddress String
    Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
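    As a hedged sketch only, the TypeScript object below shows how these fields fit together; the instruction resource name, e-mail addresses, and label group are placeholders, and the "1800s" duration string is an assumption about how QuestionDuration is serialized.

    // Sketch of a HumanAnnotationConfig; names, e-mails, and the duration format are assumptions.
    const humanAnnotationConfig = {
        annotatedDatasetDisplayName: "weekly-eval-feedback",            // at most 64 characters
        annotatedDatasetDescription: "Ground truth collected for continuous evaluation.",
        instruction: "projects/my-project/instructions/my-instruction", // placeholder Instruction resource name
        contributorEmails: ["labeler@example.com"],                     // must be registered crowdcompute workers
        labelGroup: "weekly_eval",                                      // must match [a-zA-Z\d_-]{0,128}
        languageCode: "en-US",                                          // BCP-47 language code
        questionDuration: "1800s",                                      // assumed "Ns" form; maximum is 3600 seconds
        replicaCount: 3,                                                // 1, 3, or 5 for image-related labeling
        userEmailAddress: "owner@example.com",                          // receives e-mail notifications
    };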

    GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse, GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponseArgs

    AnnotatedDatasetDescription string
    Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
    AnnotatedDatasetDisplayName string
    A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
    ContributorEmails List<string>
    Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
    Instruction string
    Instruction resource name.
    LabelGroup string
    Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
    LanguageCode string
    Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related, for example, French text classification.
    QuestionDuration string
    Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
    ReplicaCount int
    Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
    UserEmailAddress string
    Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
    AnnotatedDatasetDescription string
    Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
    AnnotatedDatasetDisplayName string
    A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
    ContributorEmails []string
    Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
    Instruction string
    Instruction resource name.
    LabelGroup string
    Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
    LanguageCode string
    Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related, for example, French text classification.
    QuestionDuration string
    Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
    ReplicaCount int
    Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
    UserEmailAddress string
    Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
    annotatedDatasetDescription String
    Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
    annotatedDatasetDisplayName String
    A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
    contributorEmails List<String>
    Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
    instruction String
    Instruction resource name.
    labelGroup String
    Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
    languageCode String
    Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related, for example, French text classification.
    questionDuration String
    Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
    replicaCount Integer
    Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
    userEmailAddress String
    Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
    annotatedDatasetDescription string
    Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
    annotatedDatasetDisplayName string
    A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
    contributorEmails string[]
    Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
    instruction string
    Instruction resource name.
    labelGroup string
    Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
    languageCode string
    Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related, for example, French text classification.
    questionDuration string
    Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
    replicaCount number
    Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
    userEmailAddress string
    Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
    annotated_dataset_description str
    Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
    annotated_dataset_display_name str
    A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
    contributor_emails Sequence[str]
    Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
    instruction str
    Instruction resource name.
    label_group str
    Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
    language_code str
    Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related, for example, French text classification.
    question_duration str
    Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
    replica_count int
    Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
    user_email_address str
    Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
    annotatedDatasetDescription String
    Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
    annotatedDatasetDisplayName String
    A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
    contributorEmails List<String>
    Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
    instruction String
    Instruction resource name.
    labelGroup String
    Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
    languageCode String
    Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related, for example, French text classification.
    questionDuration String
    Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
    replicaCount Number
    Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
    userEmailAddress String
    Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.

    GoogleCloudDatalabelingV1beta1ImageClassificationConfig, GoogleCloudDatalabelingV1beta1ImageClassificationConfigArgs

    AnnotationSpecSet string
    Annotation spec set resource name.
    AllowMultiLabel bool
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
    AnswerAggregationType Pulumi.GoogleNative.DataLabeling.V1Beta1.GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType
    Optional. How answers should be aggregated.
    AnnotationSpecSet string
    Annotation spec set resource name.
    AllowMultiLabel bool
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
    AnswerAggregationType GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType
    Optional. How answers should be aggregated.
    annotationSpecSet String
    Annotation spec set resource name.
    allowMultiLabel Boolean
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
    answerAggregationType GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType
    Optional. How answers should be aggregated.
    annotationSpecSet string
    Annotation spec set resource name.
    allowMultiLabel boolean
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
    answerAggregationType GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType
    Optional. How answers should be aggregated.
    annotation_spec_set str
    Annotation spec set resource name.
    allow_multi_label bool
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
    answer_aggregation_type GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType
    Optional. How answers should be aggregated.
    annotationSpecSet String
    Annotation spec set resource name.
    allowMultiLabel Boolean
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
    answerAggregationType "STRING_AGGREGATION_TYPE_UNSPECIFIED" | "MAJORITY_VOTE" | "UNANIMOUS_VOTE" | "NO_AGGREGATION"
    Optional. How answers should be aggregated.
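    A short, hedged sketch in TypeScript; the annotation spec set name is a placeholder, and answerAggregationType uses one of the string values listed in the next section.

    // Placeholder annotation spec set; aggregation uses the enum's string form.
    const imageClassificationConfig = {
        annotationSpecSet: "projects/my-project/annotationSpecSets/my-image-specs", // placeholder resource name
        allowMultiLabel: false,                 // contributors choose exactly one label per image
        answerAggregationType: "MAJORITY_VOTE", // see the aggregation types below
    };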

    GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType, GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationTypeArgs

    StringAggregationTypeUnspecified
    STRING_AGGREGATION_TYPE_UNSPECIFIED
    MajorityVote
    MAJORITY_VOTE - Majority vote to aggregate answers.
    UnanimousVote
    UNANIMOUS_VOTE - Unanimous answers will be adopted.
    NoAggregation
    NO_AGGREGATION - Preserve all answers by crowd compute.
    GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationTypeStringAggregationTypeUnspecified
    STRING_AGGREGATION_TYPE_UNSPECIFIED
    GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationTypeMajorityVote
    MAJORITY_VOTE - Majority vote to aggregate answers.
    GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationTypeUnanimousVote
    UNANIMOUS_VOTE - Unanimous answers will be adopted.
    GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationTypeNoAggregation
    NO_AGGREGATION - Preserve all answers by crowd compute.
    StringAggregationTypeUnspecified
    STRING_AGGREGATION_TYPE_UNSPECIFIED
    MajorityVote
    MAJORITY_VOTE - Majority vote to aggregate answers.
    UnanimousVote
    UNANIMOUS_VOTE - Unanimous answers will be adopted.
    NoAggregation
    NO_AGGREGATION - Preserve all answers by crowd compute.
    StringAggregationTypeUnspecified
    STRING_AGGREGATION_TYPE_UNSPECIFIED
    MajorityVote
    MAJORITY_VOTE - Majority vote to aggregate answers.
    UnanimousVote
    UNANIMOUS_VOTE - Unanimous answers will be adopted.
    NoAggregation
    NO_AGGREGATION - Preserve all answers by crowd compute.
    STRING_AGGREGATION_TYPE_UNSPECIFIED
    STRING_AGGREGATION_TYPE_UNSPECIFIED
    MAJORITY_VOTE
    MAJORITY_VOTE - Majority vote to aggregate answers.
    UNANIMOUS_VOTE
    UNANIMOUS_VOTE - Unanimous answers will be adopted.
    NO_AGGREGATION
    NO_AGGREGATION - Preserve all answers by crowd compute.
    "STRING_AGGREGATION_TYPE_UNSPECIFIED"
    STRING_AGGREGATION_TYPE_UNSPECIFIED
    "MAJORITY_VOTE"
    MAJORITY_VOTE - Majority vote to aggregate answers.
    "UNANIMOUS_VOTE"
    UNANIMOUS_VOTE - Unanimous answers will be adopted.
    "NO_AGGREGATION"
    NO_AGGREGATION - Preserve all answers by crowd compute.

    GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse, GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponseArgs

    AllowMultiLabel bool
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
    AnnotationSpecSet string
    Annotation spec set resource name.
    AnswerAggregationType string
    Optional. How answers should be aggregated.
    AllowMultiLabel bool
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
    AnnotationSpecSet string
    Annotation spec set resource name.
    AnswerAggregationType string
    Optional. How answers should be aggregated.
    allowMultiLabel Boolean
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
    annotationSpecSet String
    Annotation spec set resource name.
    answerAggregationType String
    Optional. How answers should be aggregated.
    allowMultiLabel boolean
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
    annotationSpecSet string
    Annotation spec set resource name.
    answerAggregationType string
    Optional. How answers should be aggregated.
    allow_multi_label bool
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
    annotation_spec_set str
    Annotation spec set resource name.
    answer_aggregation_type str
    Optional. How answers should be aggregated.
    allowMultiLabel Boolean
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
    annotationSpecSet String
    Annotation spec set resource name.
    answerAggregationType String
    Optional. How answers should be aggregated.

    GoogleCloudDatalabelingV1beta1InputConfig, GoogleCloudDatalabelingV1beta1InputConfigArgs

    DataType Pulumi.GoogleNative.DataLabeling.V1Beta1.GoogleCloudDatalabelingV1beta1InputConfigDataType
    Data type must be specified when the user tries to import data.
    AnnotationType Pulumi.GoogleNative.DataLabeling.V1Beta1.GoogleCloudDatalabelingV1beta1InputConfigAnnotationType
    Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
    BigquerySource Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BigQuerySource
    Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
    ClassificationMetadata Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ClassificationMetadata
    Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
    GcsSource Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1GcsSource
    Source located in Cloud Storage.
    TextMetadata Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextMetadata
    Required for text import, as language code must be specified.
    DataType GoogleCloudDatalabelingV1beta1InputConfigDataType
    Data type must be specified when the user tries to import data.
    AnnotationType GoogleCloudDatalabelingV1beta1InputConfigAnnotationType
    Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
    BigquerySource GoogleCloudDatalabelingV1beta1BigQuerySource
    Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
    ClassificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadata
    Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
    GcsSource GoogleCloudDatalabelingV1beta1GcsSource
    Source located in Cloud Storage.
    TextMetadata GoogleCloudDatalabelingV1beta1TextMetadata
    Required for text import, as language code must be specified.
    dataType GoogleCloudDatalabelingV1beta1InputConfigDataType
    Data type must be specified when the user tries to import data.
    annotationType GoogleCloudDatalabelingV1beta1InputConfigAnnotationType
    Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
    bigquerySource GoogleCloudDatalabelingV1beta1BigQuerySource
    Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
    classificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadata
    Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
    gcsSource GoogleCloudDatalabelingV1beta1GcsSource
    Source located in Cloud Storage.
    textMetadata GoogleCloudDatalabelingV1beta1TextMetadata
    Required for text import, as language code must be specified.
    dataType GoogleCloudDatalabelingV1beta1InputConfigDataType
    Data type must be specified when the user tries to import data.
    annotationType GoogleCloudDatalabelingV1beta1InputConfigAnnotationType
    Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
    bigquerySource GoogleCloudDatalabelingV1beta1BigQuerySource
    Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
    classificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadata
    Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
    gcsSource GoogleCloudDatalabelingV1beta1GcsSource
    Source located in Cloud Storage.
    textMetadata GoogleCloudDatalabelingV1beta1TextMetadata
    Required for text import, as language code must be specified.
    data_type GoogleCloudDatalabelingV1beta1InputConfigDataType
    Data type must be specified when the user tries to import data.
    annotation_type GoogleCloudDatalabelingV1beta1InputConfigAnnotationType
    Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
    bigquery_source GoogleCloudDatalabelingV1beta1BigQuerySource
    Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
    classification_metadata GoogleCloudDatalabelingV1beta1ClassificationMetadata
    Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
    gcs_source GoogleCloudDatalabelingV1beta1GcsSource
    Source located in Cloud Storage.
    text_metadata GoogleCloudDatalabelingV1beta1TextMetadata
    Required for text import, as language code must be specified.
    dataType "DATA_TYPE_UNSPECIFIED" | "IMAGE" | "VIDEO" | "TEXT" | "GENERAL_DATA"
    Data type must be specified when the user tries to import data.
    annotationType "ANNOTATION_TYPE_UNSPECIFIED" | "IMAGE_CLASSIFICATION_ANNOTATION" | "IMAGE_BOUNDING_BOX_ANNOTATION" | "IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION" | "IMAGE_BOUNDING_POLY_ANNOTATION" | "IMAGE_POLYLINE_ANNOTATION" | "IMAGE_SEGMENTATION_ANNOTATION" | "VIDEO_SHOTS_CLASSIFICATION_ANNOTATION" | "VIDEO_OBJECT_TRACKING_ANNOTATION" | "VIDEO_OBJECT_DETECTION_ANNOTATION" | "VIDEO_EVENT_ANNOTATION" | "TEXT_CLASSIFICATION_ANNOTATION" | "TEXT_ENTITY_EXTRACTION_ANNOTATION" | "GENERAL_CLASSIFICATION_ANNOTATION"
    Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
    bigquerySource Property Map
    Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
    classificationMetadata Property Map
    Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
    gcsSource Property Map
    Source located in Cloud Storage.
    textMetadata Property Map
    Required for text import, as language code must be specified.
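    Tying the pieces together, here is a hedged TypeScript sketch of an InputConfig for importing text from Cloud Storage; the bucket path is a placeholder. As the field descriptions above note, an InputConfig used in an EvaluationJob must set bigquerySource and annotationType rather than rely on a Cloud Storage import.

    // Sketch of an InputConfig for a text import; bucket and object names are placeholders.
    const inputConfig = {
        dataType: "TEXT",                                 // must be specified when importing data
        annotationType: "TEXT_CLASSIFICATION_ANNOTATION", // allowed for continuous evaluation
        gcsSource: {
            inputUri: "gs://my-datalabeling-bucket/texts.csv",
            mimeType: "text/csv",
        },
        textMetadata: {
            languageCode: "en-US",                        // required for text import
        },
    };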

    GoogleCloudDatalabelingV1beta1InputConfigAnnotationType, GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeArgs

    AnnotationTypeUnspecified
    ANNOTATION_TYPE_UNSPECIFIED
    ImageClassificationAnnotation
    IMAGE_CLASSIFICATION_ANNOTATION - Classification annotations in an image. Allowed for continuous evaluation.
    ImageBoundingBoxAnnotation
    IMAGE_BOUNDING_BOX_ANNOTATION - Bounding box annotations in an image. A form of image object detection. Allowed for continuous evaluation.
    ImageOrientedBoundingBoxAnnotation
    IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION - Oriented bounding box. The box does not have to be parallel to the horizontal line.
    ImageBoundingPolyAnnotation
    IMAGE_BOUNDING_POLY_ANNOTATION - Bounding poly annotations in an image.
    ImagePolylineAnnotation
    IMAGE_POLYLINE_ANNOTATION - Polyline annotations in an image.
    ImageSegmentationAnnotation
    IMAGE_SEGMENTATION_ANNOTATION - Segmentation annotations in an image.
    VideoShotsClassificationAnnotation
    VIDEO_SHOTS_CLASSIFICATION_ANNOTATION - Classification annotations in video shots.
    VideoObjectTrackingAnnotation
    VIDEO_OBJECT_TRACKING_ANNOTATION - Video object tracking annotation.
    VideoObjectDetectionAnnotation
    VIDEO_OBJECT_DETECTION_ANNOTATION - Video object detection annotation.
    VideoEventAnnotation
    VIDEO_EVENT_ANNOTATION - Video event annotation.
    TextClassificationAnnotation
    TEXT_CLASSIFICATION_ANNOTATION - Classification for text. Allowed for continuous evaluation.
    TextEntityExtractionAnnotation
    TEXT_ENTITY_EXTRACTION_ANNOTATION - Entity extraction for text.
    GeneralClassificationAnnotation
    GENERAL_CLASSIFICATION_ANNOTATION - General classification. Allowed for continuous evaluation.
    GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeAnnotationTypeUnspecified
    ANNOTATION_TYPE_UNSPECIFIED
    GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeImageClassificationAnnotation
    IMAGE_CLASSIFICATION_ANNOTATION - Classification annotations in an image. Allowed for continuous evaluation.
    GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeImageBoundingBoxAnnotation
    IMAGE_BOUNDING_BOX_ANNOTATION - Bounding box annotations in an image. A form of image object detection. Allowed for continuous evaluation.
    GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeImageOrientedBoundingBoxAnnotation
    IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION - Oriented bounding box. The box does not have to be parallel to the horizontal line.
    GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeImageBoundingPolyAnnotation
    IMAGE_BOUNDING_POLY_ANNOTATION - Bounding poly annotations in an image.
    GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeImagePolylineAnnotation
    IMAGE_POLYLINE_ANNOTATION - Polyline annotations in an image.
    GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeImageSegmentationAnnotation
    IMAGE_SEGMENTATION_ANNOTATION - Segmentation annotations in an image.
    GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeVideoShotsClassificationAnnotation
    VIDEO_SHOTS_CLASSIFICATION_ANNOTATION - Classification annotations in video shots.
    GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeVideoObjectTrackingAnnotation
    VIDEO_OBJECT_TRACKING_ANNOTATION - Video object tracking annotation.
    GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeVideoObjectDetectionAnnotation
    VIDEO_OBJECT_DETECTION_ANNOTATION - Video object detection annotation.
    GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeVideoEventAnnotation
    VIDEO_EVENT_ANNOTATION - Video event annotation.
    GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeTextClassificationAnnotation
    TEXT_CLASSIFICATION_ANNOTATION - Classification for text. Allowed for continuous evaluation.
    GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeTextEntityExtractionAnnotation
    TEXT_ENTITY_EXTRACTION_ANNOTATION - Entity extraction for text.
    GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeGeneralClassificationAnnotation
    GENERAL_CLASSIFICATION_ANNOTATION - General classification. Allowed for continuous evaluation.
    AnnotationTypeUnspecified
    ANNOTATION_TYPE_UNSPECIFIED
    ImageClassificationAnnotation
    IMAGE_CLASSIFICATION_ANNOTATION - Classification annotations in an image. Allowed for continuous evaluation.
    ImageBoundingBoxAnnotation
    IMAGE_BOUNDING_BOX_ANNOTATION - Bounding box annotations in an image. A form of image object detection. Allowed for continuous evaluation.
    ImageOrientedBoundingBoxAnnotation
    IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION - Oriented bounding box. The box does not have to be parallel to the horizontal line.
    ImageBoundingPolyAnnotation
    IMAGE_BOUNDING_POLY_ANNOTATION - Bounding poly annotations in an image.
    ImagePolylineAnnotation
    IMAGE_POLYLINE_ANNOTATION - Polyline annotations in an image.
    ImageSegmentationAnnotation
    IMAGE_SEGMENTATION_ANNOTATION - Segmentation annotations in an image.
    VideoShotsClassificationAnnotation
    VIDEO_SHOTS_CLASSIFICATION_ANNOTATION - Classification annotations in video shots.
    VideoObjectTrackingAnnotation
    VIDEO_OBJECT_TRACKING_ANNOTATION - Video object tracking annotation.
    VideoObjectDetectionAnnotation
    VIDEO_OBJECT_DETECTION_ANNOTATION - Video object detection annotation.
    VideoEventAnnotation
    VIDEO_EVENT_ANNOTATION - Video event annotation.
    TextClassificationAnnotation
    TEXT_CLASSIFICATION_ANNOTATION - Classification for text. Allowed for continuous evaluation.
    TextEntityExtractionAnnotation
    TEXT_ENTITY_EXTRACTION_ANNOTATION - Entity extraction for text.
    GeneralClassificationAnnotation
    GENERAL_CLASSIFICATION_ANNOTATION - General classification. Allowed for continuous evaluation.
    AnnotationTypeUnspecified
    ANNOTATION_TYPE_UNSPECIFIED
    ImageClassificationAnnotation
    IMAGE_CLASSIFICATION_ANNOTATION - Classification annotations in an image. Allowed for continuous evaluation.
    ImageBoundingBoxAnnotation
    IMAGE_BOUNDING_BOX_ANNOTATION - Bounding box annotations in an image. A form of image object detection. Allowed for continuous evaluation.
    ImageOrientedBoundingBoxAnnotation
    IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION - Oriented bounding box. The box does not have to be parallel to the horizontal line.
    ImageBoundingPolyAnnotation
    IMAGE_BOUNDING_POLY_ANNOTATION - Bounding poly annotations in an image.
    ImagePolylineAnnotation
    IMAGE_POLYLINE_ANNOTATION - Polyline annotations in an image.
    ImageSegmentationAnnotation
    IMAGE_SEGMENTATION_ANNOTATION - Segmentation annotations in an image.
    VideoShotsClassificationAnnotation
    VIDEO_SHOTS_CLASSIFICATION_ANNOTATION - Classification annotations in video shots.
    VideoObjectTrackingAnnotation
    VIDEO_OBJECT_TRACKING_ANNOTATION - Video object tracking annotation.
    VideoObjectDetectionAnnotation
    VIDEO_OBJECT_DETECTION_ANNOTATION - Video object detection annotation.
    VideoEventAnnotation
    VIDEO_EVENT_ANNOTATION - Video event annotation.
    TextClassificationAnnotation
    TEXT_CLASSIFICATION_ANNOTATION - Classification for text. Allowed for continuous evaluation.
    TextEntityExtractionAnnotation
    TEXT_ENTITY_EXTRACTION_ANNOTATION - Entity extraction for text.
    GeneralClassificationAnnotation
    GENERAL_CLASSIFICATION_ANNOTATION - General classification. Allowed for continuous evaluation.
    ANNOTATION_TYPE_UNSPECIFIED
    ANNOTATION_TYPE_UNSPECIFIED
    IMAGE_CLASSIFICATION_ANNOTATION
    IMAGE_CLASSIFICATION_ANNOTATION - Classification annotations in an image. Allowed for continuous evaluation.
    IMAGE_BOUNDING_BOX_ANNOTATION
    IMAGE_BOUNDING_BOX_ANNOTATION - Bounding box annotations in an image. A form of image object detection. Allowed for continuous evaluation.
    IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION
    IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION - Oriented bounding box. The box does not have to be parallel to the horizontal line.
    IMAGE_BOUNDING_POLY_ANNOTATION
    IMAGE_BOUNDING_POLY_ANNOTATION - Bounding poly annotations in an image.
    IMAGE_POLYLINE_ANNOTATION
    IMAGE_POLYLINE_ANNOTATION - Polyline annotations in an image.
    IMAGE_SEGMENTATION_ANNOTATION
    IMAGE_SEGMENTATION_ANNOTATION - Segmentation annotations in an image.
    VIDEO_SHOTS_CLASSIFICATION_ANNOTATION
    VIDEO_SHOTS_CLASSIFICATION_ANNOTATION - Classification annotations in video shots.
    VIDEO_OBJECT_TRACKING_ANNOTATION
    VIDEO_OBJECT_TRACKING_ANNOTATION - Video object tracking annotation.
    VIDEO_OBJECT_DETECTION_ANNOTATION
    VIDEO_OBJECT_DETECTION_ANNOTATION - Video object detection annotation.
    VIDEO_EVENT_ANNOTATION
    VIDEO_EVENT_ANNOTATION - Video event annotation.
    TEXT_CLASSIFICATION_ANNOTATION
    TEXT_CLASSIFICATION_ANNOTATION - Classification for text. Allowed for continuous evaluation.
    TEXT_ENTITY_EXTRACTION_ANNOTATION
    TEXT_ENTITY_EXTRACTION_ANNOTATION - Entity extraction for text.
    GENERAL_CLASSIFICATION_ANNOTATION
    GENERAL_CLASSIFICATION_ANNOTATION - General classification. Allowed for continuous evaluation.
    "ANNOTATION_TYPE_UNSPECIFIED"
    ANNOTATION_TYPE_UNSPECIFIED
    "IMAGE_CLASSIFICATION_ANNOTATION"
    IMAGE_CLASSIFICATION_ANNOTATION - Classification annotations in an image. Allowed for continuous evaluation.
    "IMAGE_BOUNDING_BOX_ANNOTATION"
    IMAGE_BOUNDING_BOX_ANNOTATION - Bounding box annotations in an image. A form of image object detection. Allowed for continuous evaluation.
    "IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION"
    IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION - Oriented bounding box. The box does not have to be parallel to the horizontal line.
    "IMAGE_BOUNDING_POLY_ANNOTATION"
    IMAGE_BOUNDING_POLY_ANNOTATION - Bounding poly annotations in an image.
    "IMAGE_POLYLINE_ANNOTATION"
    IMAGE_POLYLINE_ANNOTATION - Polyline annotations in an image.
    "IMAGE_SEGMENTATION_ANNOTATION"
    IMAGE_SEGMENTATION_ANNOTATION - Segmentation annotations in an image.
    "VIDEO_SHOTS_CLASSIFICATION_ANNOTATION"
    VIDEO_SHOTS_CLASSIFICATION_ANNOTATION - Classification annotations in video shots.
    "VIDEO_OBJECT_TRACKING_ANNOTATION"
    VIDEO_OBJECT_TRACKING_ANNOTATION - Video object tracking annotation.
    "VIDEO_OBJECT_DETECTION_ANNOTATION"
    VIDEO_OBJECT_DETECTION_ANNOTATION - Video object detection annotation.
    "VIDEO_EVENT_ANNOTATION"
    VIDEO_EVENT_ANNOTATION - Video event annotation.
    "TEXT_CLASSIFICATION_ANNOTATION"
    TEXT_CLASSIFICATION_ANNOTATION - Classification for text. Allowed for continuous evaluation.
    "TEXT_ENTITY_EXTRACTION_ANNOTATION"
    TEXT_ENTITY_EXTRACTION_ANNOTATION - Entity extraction for text.
    "GENERAL_CLASSIFICATION_ANNOTATION"
    GENERAL_CLASSIFICATION_ANNOTATION - General classification. Allowed for continuous evaluation.

    GoogleCloudDatalabelingV1beta1InputConfigDataType, GoogleCloudDatalabelingV1beta1InputConfigDataTypeArgs

    DataTypeUnspecified
    DATA_TYPE_UNSPECIFIED - Data type is unspecified.
    Image
    IMAGE - Allowed for continuous evaluation.
    Video
    VIDEO - Video data type.
    Text
    TEXT - Allowed for continuous evaluation.
    GeneralData
    GENERAL_DATA - Allowed for continuous evaluation.
    GoogleCloudDatalabelingV1beta1InputConfigDataTypeDataTypeUnspecified
    DATA_TYPE_UNSPECIFIED - Data type is unspecified.
    GoogleCloudDatalabelingV1beta1InputConfigDataTypeImage
    IMAGE - Allowed for continuous evaluation.
    GoogleCloudDatalabelingV1beta1InputConfigDataTypeVideo
    VIDEO - Video data type.
    GoogleCloudDatalabelingV1beta1InputConfigDataTypeText
    TEXT - Allowed for continuous evaluation.
    GoogleCloudDatalabelingV1beta1InputConfigDataTypeGeneralData
    GENERAL_DATA - Allowed for continuous evaluation.
    DataTypeUnspecified
    DATA_TYPE_UNSPECIFIED - Data type is unspecified.
    Image
    IMAGE - Allowed for continuous evaluation.
    Video
    VIDEO - Video data type.
    Text
    TEXT - Allowed for continuous evaluation.
    GeneralData
    GENERAL_DATA - Allowed for continuous evaluation.
    DataTypeUnspecified
    DATA_TYPE_UNSPECIFIED - Data type is unspecified.
    Image
    IMAGE - Allowed for continuous evaluation.
    Video
    VIDEO - Video data type.
    Text
    TEXT - Allowed for continuous evaluation.
    GeneralData
    GENERAL_DATA - Allowed for continuous evaluation.
    DATA_TYPE_UNSPECIFIED
    DATA_TYPE_UNSPECIFIED - Data type is unspecified.
    IMAGE
    IMAGE - Allowed for continuous evaluation.
    VIDEO
    VIDEO - Video data type.
    TEXT
    TEXT - Allowed for continuous evaluation.
    GENERAL_DATA
    GENERAL_DATA - Allowed for continuous evaluation.
    "DATA_TYPE_UNSPECIFIED"
    DATA_TYPE_UNSPECIFIED - Data type is unspecified.
    "IMAGE"
    IMAGE - Allowed for continuous evaluation.
    "VIDEO"
    VIDEO - Video data type.
    "TEXT"
    TEXT - Allowed for continuous evaluation.
    "GENERAL_DATA"
    GENERAL_DATA - Allowed for continuous evaluation.

    GoogleCloudDatalabelingV1beta1InputConfigResponse, GoogleCloudDatalabelingV1beta1InputConfigResponseArgs

    AnnotationType string
    Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
    BigquerySource Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BigQuerySourceResponse
    Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
    ClassificationMetadata Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse
    Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
    DataType string
    Data type must be specified when the user tries to import data.
    GcsSource Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1GcsSourceResponse
    Source located in Cloud Storage.
    TextMetadata Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextMetadataResponse
    Required for text import, as language code must be specified.
    AnnotationType string
    Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
    BigquerySource GoogleCloudDatalabelingV1beta1BigQuerySourceResponse
    Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
    ClassificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse
    Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
    DataType string
    Data type must be specified when the user tries to import data.
    GcsSource GoogleCloudDatalabelingV1beta1GcsSourceResponse
    Source located in Cloud Storage.
    TextMetadata GoogleCloudDatalabelingV1beta1TextMetadataResponse
    Required for text import, as language code must be specified.
    annotationType String
    Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
    bigquerySource GoogleCloudDatalabelingV1beta1BigQuerySourceResponse
    Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
    classificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse
    Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
    dataType String
    Data type must be specified when the user tries to import data.
    gcsSource GoogleCloudDatalabelingV1beta1GcsSourceResponse
    Source located in Cloud Storage.
    textMetadata GoogleCloudDatalabelingV1beta1TextMetadataResponse
    Required for text import, as language code must be specified.
    annotationType string
    Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
    bigquerySource GoogleCloudDatalabelingV1beta1BigQuerySourceResponse
    Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
    classificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse
    Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
    dataType string
    Data type must be specified when the user tries to import data.
    gcsSource GoogleCloudDatalabelingV1beta1GcsSourceResponse
    Source located in Cloud Storage.
    textMetadata GoogleCloudDatalabelingV1beta1TextMetadataResponse
    Required for text import, as language code must be specified.
    annotation_type str
    Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
    bigquery_source GoogleCloudDatalabelingV1beta1BigQuerySourceResponse
    Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
    classification_metadata GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse
    Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
    data_type str
    Data type must be specified when the user tries to import data.
    gcs_source GoogleCloudDatalabelingV1beta1GcsSourceResponse
    Source located in Cloud Storage.
    text_metadata GoogleCloudDatalabelingV1beta1TextMetadataResponse
    Required for text import, as language code must be specified.
    annotationType String
    Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
    bigquerySource Property Map
    Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
    classificationMetadata Property Map
    Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
    dataType String
    Data type must be specified when the user tries to import data.
    gcsSource Property Map
    Source located in Cloud Storage.
    textMetadata Property Map
    Required for text import, as language code must be specified.

    GoogleCloudDatalabelingV1beta1SentimentConfig, GoogleCloudDatalabelingV1beta1SentimentConfigArgs

    EnableLabelSentimentSelection bool
    If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
    EnableLabelSentimentSelection bool
    If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
    enableLabelSentimentSelection Boolean
    If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
    enableLabelSentimentSelection boolean
    If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
    enable_label_sentiment_selection bool
    If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
    enableLabelSentimentSelection Boolean
    If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
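    The whole config is a single flag; a trivial TypeScript sketch:

    // Enables per-label sentiment selection for contributors (the default is false).
    const sentimentConfig = {
        enableLabelSentimentSelection: true,
    };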

    GoogleCloudDatalabelingV1beta1SentimentConfigResponse, GoogleCloudDatalabelingV1beta1SentimentConfigResponseArgs

    EnableLabelSentimentSelection bool
    If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
    EnableLabelSentimentSelection bool
    If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
    enableLabelSentimentSelection Boolean
    If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
    enableLabelSentimentSelection boolean
    If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
    enable_label_sentiment_selection bool
    If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
    enableLabelSentimentSelection Boolean
    If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.

    GoogleCloudDatalabelingV1beta1TextClassificationConfig, GoogleCloudDatalabelingV1beta1TextClassificationConfigArgs

    AnnotationSpecSet string
    Annotation spec set resource name.
    AllowMultiLabel bool
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
    SentimentConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1SentimentConfig
    Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
    AnnotationSpecSet string
    Annotation spec set resource name.
    AllowMultiLabel bool
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
    SentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfig
    Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
    annotationSpecSet String
    Annotation spec set resource name.
    allowMultiLabel Boolean
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
    sentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfig
    Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
    annotationSpecSet string
    Annotation spec set resource name.
    allowMultiLabel boolean
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
    sentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfig
    Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
    annotation_spec_set str
    Annotation spec set resource name.
    allow_multi_label bool
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
    sentiment_config GoogleCloudDatalabelingV1beta1SentimentConfig
    Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
    annotationSpecSet String
    Annotation spec set resource name.
    allowMultiLabel Boolean
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
    sentimentConfig Property Map
    Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
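    A hedged TypeScript sketch; the annotation spec set name is a placeholder, and sentimentConfig is omitted because, per the description above, the sentiment path is deprecated.

    // Placeholder annotation spec set; sentimentConfig intentionally left out (deprecated).
    const textClassificationConfig = {
        annotationSpecSet: "projects/my-project/annotationSpecSets/my-text-specs", // placeholder resource name
        allowMultiLabel: true, // contributors may choose several labels per text segment
    };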

    GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse, GoogleCloudDatalabelingV1beta1TextClassificationConfigResponseArgs

    AllowMultiLabel bool
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
    AnnotationSpecSet string
    Annotation spec set resource name.
    SentimentConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1SentimentConfigResponse
    Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
    AllowMultiLabel bool
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
    AnnotationSpecSet string
    Annotation spec set resource name.
    SentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfigResponse
    Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
    allowMultiLabel Boolean
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
    annotationSpecSet String
    Annotation spec set resource name.
    sentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfigResponse
    Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
    allowMultiLabel boolean
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
    annotationSpecSet string
    Annotation spec set resource name.
    sentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfigResponse
    Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
    allow_multi_label bool
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
    annotation_spec_set str
    Annotation spec set resource name.
    sentiment_config GoogleCloudDatalabelingV1beta1SentimentConfigResponse
    Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
    allowMultiLabel Boolean
    Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
    annotationSpecSet String
    Annotation spec set resource name.
    sentimentConfig Property Map
    Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.

    GoogleCloudDatalabelingV1beta1TextMetadata, GoogleCloudDatalabelingV1beta1TextMetadataArgs

    LanguageCode string
    The language of this text, as a BCP-47 language code. Default value is en-US.
    LanguageCode string
    The language of this text, as a BCP-47 language code. Default value is en-US.
    languageCode String
    The language of this text, as a BCP-47 language code. Default value is en-US.
    languageCode string
    The language of this text, as a BCP-47 language code. Default value is en-US.
    language_code str
    The language of this text, as a BCP-47 language code. Default value is en-US.
    languageCode String
    The language of this text, as a BCP-47 language code. Default value is en-US.

    GoogleCloudDatalabelingV1beta1TextMetadataResponse, GoogleCloudDatalabelingV1beta1TextMetadataResponseArgs

    LanguageCode string
    The language of this text, as a BCP-47 language code. Default value is en-US.
    LanguageCode string
    The language of this text, as a BCP-47 language code. Default value is en-US.
    languageCode String
    The language of this text, as a BCP-47 language code. Default value is en-US.
    languageCode string
    The language of this text, as a BCP-47 language code. Default value is en-US.
    language_code str
    The language of this text, as a BCP-47 language code. Default value is en-US.
    languageCode String
    The language of this text, as a BCP-47 language code. Default value is en-US.
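
    The Response variant appears in the resource's outputs rather than its inputs. Continuing the TypeScript sketch from the TextClassificationConfigResponse section above (textEvalJob is that hypothetical resource, and the inputConfig.textMetadata nesting is an assumption):

    // Reads the resolved language code back out of the job's outputs once the
    // resource is created; undefined if the job carries no text metadata.
    export const evalLanguageCode = textEvalJob.evaluationJobConfig.apply(
        cfg => cfg?.inputConfig?.textMetadata?.languageCode,
    );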

    GoogleRpcStatusResponse, GoogleRpcStatusResponseArgs

    Code int
    The status code, which should be an enum value of google.rpc.Code.
    Details List<ImmutableDictionary<string, string>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    Message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    Code int
    The status code, which should be an enum value of google.rpc.Code.
    Details []map[string]string
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    Message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code Integer
    The status code, which should be an enum value of google.rpc.Code.
    details List<Map<String,String>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message String
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code number
    The status code, which should be an enum value of google.rpc.Code.
    details {[key: string]: string}[]
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code int
    The status code, which should be an enum value of google.rpc.Code.
    details Sequence[Mapping[str, str]]
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message str
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code Number
    The status code, which should be an enum value of google.rpc.Code.
    details List<Map<String>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message String
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
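
    These status values also surface in the EvaluationJob's own outputs, for example in the attempts recorded for past runs. The sketch below continues the earlier textEvalJob example and assumes the attempts/partialFailures nesting from the broader Data Labeling API, which this page does not document:

    // Collects developer-facing messages from any partial failures reported for
    // past runs of the job; each inner array corresponds to one attempt.
    export const attemptErrorMessages = textEvalJob.attempts.apply(attempts =>
        attempts.map(a => (a.partialFailures ?? []).map(s => `${s.code}: ${s.message}`)),
    );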

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0