
google-native.aiplatform/v1.IndexEndpoint

Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

    Creates an IndexEndpoint. Auto-naming is currently not supported for this resource.

    Create IndexEndpoint Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new IndexEndpoint(name: string, args: IndexEndpointArgs, opts?: CustomResourceOptions);
    @overload
    def IndexEndpoint(resource_name: str,
                      args: IndexEndpointArgs,
                      opts: Optional[ResourceOptions] = None)
    
    @overload
    def IndexEndpoint(resource_name: str,
                      opts: Optional[ResourceOptions] = None,
                      display_name: Optional[str] = None,
                      description: Optional[str] = None,
                      enable_private_service_connect: Optional[bool] = None,
                      encryption_spec: Optional[GoogleCloudAiplatformV1EncryptionSpecArgs] = None,
                      etag: Optional[str] = None,
                      labels: Optional[Mapping[str, str]] = None,
                      location: Optional[str] = None,
                      network: Optional[str] = None,
                      private_service_connect_config: Optional[GoogleCloudAiplatformV1PrivateServiceConnectConfigArgs] = None,
                      project: Optional[str] = None,
                      public_endpoint_enabled: Optional[bool] = None)
    func NewIndexEndpoint(ctx *Context, name string, args IndexEndpointArgs, opts ...ResourceOption) (*IndexEndpoint, error)
    public IndexEndpoint(string name, IndexEndpointArgs args, CustomResourceOptions? opts = null)
    public IndexEndpoint(String name, IndexEndpointArgs args)
    public IndexEndpoint(String name, IndexEndpointArgs args, CustomResourceOptions options)
    
    type: google-native:aiplatform/v1:IndexEndpoint
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    name string
    The unique name of the resource.
    args IndexEndpointArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args IndexEndpointArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args IndexEndpointArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args IndexEndpointArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args IndexEndpointArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

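    The opts argument accepts the standard Pulumi resource options. As a minimal sketch (Python SDK; the resource and display names below are placeholders, not values taken from this page), protecting an endpoint from accidental deletion looks like this:

    import pulumi
    import pulumi_google_native as google_native

    # Placeholder names for illustration; protect=True guards the endpoint
    # against accidental deletion or replacement.
    index_endpoint = google_native.aiplatform.v1.IndexEndpoint("indexEndpointResource",
        display_name="my-index-endpoint",
        opts=pulumi.ResourceOptions(protect=True))
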
    Constructor example

    The following reference example uses placeholder values for all input properties.

    var indexEndpointResource = new GoogleNative.Aiplatform.V1.IndexEndpoint("indexEndpointResource", new()
    {
        DisplayName = "string",
        Description = "string",
        EncryptionSpec = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1EncryptionSpecArgs
        {
            KmsKeyName = "string",
        },
        Etag = "string",
        Labels = 
        {
            { "string", "string" },
        },
        Location = "string",
        Network = "string",
        PrivateServiceConnectConfig = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1PrivateServiceConnectConfigArgs
        {
            EnablePrivateServiceConnect = false,
            ProjectAllowlist = new[]
            {
                "string",
            },
        },
        Project = "string",
        PublicEndpointEnabled = false,
    });
    
    example, err := aiplatform.NewIndexEndpoint(ctx, "indexEndpointResource", &aiplatform.IndexEndpointArgs{
        DisplayName: pulumi.String("string"),
        Description: pulumi.String("string"),
        EncryptionSpec: &aiplatform.GoogleCloudAiplatformV1EncryptionSpecArgs{
            KmsKeyName: pulumi.String("string"),
        },
        Etag: pulumi.String("string"),
        Labels: pulumi.StringMap{
            "string": pulumi.String("string"),
        },
        Location: pulumi.String("string"),
        Network:  pulumi.String("string"),
        PrivateServiceConnectConfig: &aiplatform.GoogleCloudAiplatformV1PrivateServiceConnectConfigArgs{
            EnablePrivateServiceConnect: pulumi.Bool(false),
            ProjectAllowlist: pulumi.StringArray{
                pulumi.String("string"),
            },
        },
        Project:               pulumi.String("string"),
        PublicEndpointEnabled: pulumi.Bool(false),
    })
    
    
    var indexEndpointResource = new IndexEndpoint("indexEndpointResource", IndexEndpointArgs.builder()
        .displayName("string")
        .description("string")
        .encryptionSpec(GoogleCloudAiplatformV1EncryptionSpecArgs.builder()
            .kmsKeyName("string")
            .build())
        .etag("string")
        .labels(Map.of("string", "string"))
        .location("string")
        .network("string")
        .privateServiceConnectConfig(GoogleCloudAiplatformV1PrivateServiceConnectConfigArgs.builder()
            .enablePrivateServiceConnect(false)
            .projectAllowlist("string")
            .build())
        .project("string")
        .publicEndpointEnabled(false)
        .build());
    
    index_endpoint_resource = google_native.aiplatform.v1.IndexEndpoint("indexEndpointResource",
        display_name="string",
        description="string",
        encryption_spec=google_native.aiplatform.v1.GoogleCloudAiplatformV1EncryptionSpecArgs(
            kms_key_name="string",
        ),
        etag="string",
        labels={
            "string": "string",
        },
        location="string",
        network="string",
        private_service_connect_config=google_native.aiplatform.v1.GoogleCloudAiplatformV1PrivateServiceConnectConfigArgs(
            enable_private_service_connect=False,
            project_allowlist=["string"],
        ),
        project="string",
        public_endpoint_enabled=False)
    
    const indexEndpointResource = new google_native.aiplatform.v1.IndexEndpoint("indexEndpointResource", {
        displayName: "string",
        description: "string",
        encryptionSpec: {
            kmsKeyName: "string",
        },
        etag: "string",
        labels: {
            string: "string",
        },
        location: "string",
        network: "string",
        privateServiceConnectConfig: {
            enablePrivateServiceConnect: false,
            projectAllowlist: ["string"],
        },
        project: "string",
        publicEndpointEnabled: false,
    });
    
    type: google-native:aiplatform/v1:IndexEndpoint
    properties:
        description: string
        displayName: string
        encryptionSpec:
            kmsKeyName: string
        etag: string
        labels:
            string: string
        location: string
        network: string
        privateServiceConnectConfig:
            enablePrivateServiceConnect: false
            projectAllowlist:
                - string
        project: string
        publicEndpointEnabled: false
    

    IndexEndpoint Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    The IndexEndpoint resource accepts the following input properties:

    DisplayName string
    The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    Description string
    The description of the IndexEndpoint.
    EnablePrivateServiceConnect bool
    Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
    EncryptionSpec Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1EncryptionSpec
    Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    Etag string
    Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    Labels Dictionary<string, string>
    The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    Location string
    Network string
    Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. Format: projects/{project}/global/networks/{network}. Where {project} is a project number, as in '12345', and {network} is network name.
    PrivateServiceConnectConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1PrivateServiceConnectConfig
    Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    Project string
    PublicEndpointEnabled bool
    Optional. If true, the deployed index will be accessible through a public endpoint.
    DisplayName string
    The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    Description string
    The description of the IndexEndpoint.
    EnablePrivateServiceConnect bool
    Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
    EncryptionSpec GoogleCloudAiplatformV1EncryptionSpecArgs
    Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    Etag string
    Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    Labels map[string]string
    The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    Location string
    Network string
    Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. Format: projects/{project}/global/networks/{network}. Where {project} is a project number, as in '12345', and {network} is network name.
    PrivateServiceConnectConfig GoogleCloudAiplatformV1PrivateServiceConnectConfigArgs
    Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    Project string
    PublicEndpointEnabled bool
    Optional. If true, the deployed index will be accessible through a public endpoint.
    displayName String
    The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    description String
    The description of the IndexEndpoint.
    enablePrivateServiceConnect Boolean
    Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
    encryptionSpec GoogleCloudAiplatformV1EncryptionSpec
    Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    etag String
    Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels Map<String,String>
    The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location String
    network String
    Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. Format: projects/{project}/global/networks/{network}. Where {project} is a project number, as in '12345', and {network} is network name.
    privateServiceConnectConfig GoogleCloudAiplatformV1PrivateServiceConnectConfig
    Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    project String
    publicEndpointEnabled Boolean
    Optional. If true, the deployed index will be accessible through a public endpoint.
    displayName string
    The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    description string
    The description of the IndexEndpoint.
    enablePrivateServiceConnect boolean
    Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
    encryptionSpec GoogleCloudAiplatformV1EncryptionSpec
    Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    etag string
    Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels {[key: string]: string}
    The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location string
    network string
    Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. Format: projects/{project}/global/networks/{network}. Where {project} is a project number, as in '12345', and {network} is network name.
    privateServiceConnectConfig GoogleCloudAiplatformV1PrivateServiceConnectConfig
    Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    project string
    publicEndpointEnabled boolean
    Optional. If true, the deployed index will be accessible through a public endpoint.
    display_name str
    The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    description str
    The description of the IndexEndpoint.
    enable_private_service_connect bool
    Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
    encryption_spec GoogleCloudAiplatformV1EncryptionSpecArgs
    Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    etag str
    Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels Mapping[str, str]
    The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location str
    network str
    Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. Format: projects/{project}/global/networks/{network}. Where {project} is a project number, as in '12345', and {network} is network name.
    private_service_connect_config GoogleCloudAiplatformV1PrivateServiceConnectConfigArgs
    Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    project str
    public_endpoint_enabled bool
    Optional. If true, the deployed index will be accessible through a public endpoint.
    displayName String
    The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    description String
    The description of the IndexEndpoint.
    enablePrivateServiceConnect Boolean
    Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
    encryptionSpec Property Map
    Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    etag String
    Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels Map<String>
    The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location String
    network String
    Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. Format: projects/{project}/global/networks/{network}. Where {project} is a project number, as in '12345', and {network} is network name.
    privateServiceConnectConfig Property Map
    Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    project String
    publicEndpointEnabled Boolean
    Optional. If true, the deployed index will be accessible through a public endpoint.

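    Because network and private_service_connect_config are mutually exclusive, only one of them should be set on a given endpoint. As a minimal sketch (Python SDK; the project number and network name are placeholders, not values taken from this page), peering an endpoint with an existing VPC looks like this:

    import pulumi_google_native as google_native

    # Placeholder project number and network name; private services access
    # must already be configured for the VPC before peering will succeed.
    peered_endpoint = google_native.aiplatform.v1.IndexEndpoint("peeredIndexEndpoint",
        display_name="my-index-endpoint",
        location="us-central1",
        network="projects/1234567890/global/networks/my-vpc")
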
    Outputs

    All input properties are implicitly available as output properties. Additionally, the IndexEndpoint resource produces the following output properties:

    CreateTime string
    Timestamp when this IndexEndpoint was created.
    DeployedIndexes List<Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleCloudAiplatformV1DeployedIndexResponse>
    The indexes deployed in this endpoint.
    Id string
    The provider-assigned unique ID for this managed resource.
    Name string
    The resource name of the IndexEndpoint.
    PublicEndpointDomainName string
    If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
    UpdateTime string
    Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
    CreateTime string
    Timestamp when this IndexEndpoint was created.
    DeployedIndexes []GoogleCloudAiplatformV1DeployedIndexResponse
    The indexes deployed in this endpoint.
    Id string
    The provider-assigned unique ID for this managed resource.
    Name string
    The resource name of the IndexEndpoint.
    PublicEndpointDomainName string
    If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
    UpdateTime string
    Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
    createTime String
    Timestamp when this IndexEndpoint was created.
    deployedIndexes List<GoogleCloudAiplatformV1DeployedIndexResponse>
    The indexes deployed in this endpoint.
    id String
    The provider-assigned unique ID for this managed resource.
    name String
    The resource name of the IndexEndpoint.
    publicEndpointDomainName String
    If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
    updateTime String
    Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
    createTime string
    Timestamp when this IndexEndpoint was created.
    deployedIndexes GoogleCloudAiplatformV1DeployedIndexResponse[]
    The indexes deployed in this endpoint.
    id string
    The provider-assigned unique ID for this managed resource.
    name string
    The resource name of the IndexEndpoint.
    publicEndpointDomainName string
    If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
    updateTime string
    Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
    create_time str
    Timestamp when this IndexEndpoint was created.
    deployed_indexes Sequence[GoogleCloudAiplatformV1DeployedIndexResponse]
    The indexes deployed in this endpoint.
    id str
    The provider-assigned unique ID for this managed resource.
    name str
    The resource name of the IndexEndpoint.
    public_endpoint_domain_name str
    If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
    update_time str
    Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
    createTime String
    Timestamp when this IndexEndpoint was created.
    deployedIndexes List<Property Map>
    The indexes deployed in this endpoint.
    id String
    The provider-assigned unique ID for this managed resource.
    name String
    The resource name of the IndexEndpoint.
    publicEndpointDomainName String
    If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
    updateTime String
    Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.

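    Output properties such as name and publicEndpointDomainName become available on the resource object after creation. As a minimal sketch (Python SDK; names are placeholders), exporting the generated public domain name looks like this:

    import pulumi
    import pulumi_google_native as google_native

    # Placeholder names; public_endpoint_domain_name is only populated when
    # public_endpoint_enabled is true.
    public_endpoint = google_native.aiplatform.v1.IndexEndpoint("publicIndexEndpoint",
        display_name="my-index-endpoint",
        location="us-central1",
        public_endpoint_enabled=True)

    pulumi.export("indexEndpointName", public_endpoint.name)
    pulumi.export("publicEndpointDomainName", public_endpoint.public_endpoint_domain_name)
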
    Supporting Types

    GoogleCloudAiplatformV1AutomaticResourcesResponse, GoogleCloudAiplatformV1AutomaticResourcesResponseArgs

    MaxReplicaCount int
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic will be assumed, though Vertex AI may be unable to scale beyond a certain number of replicas.
    MinReplicaCount int
    Immutable. The minimum number of replicas this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
    MaxReplicaCount int
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic will be assumed, though Vertex AI may be unable to scale beyond a certain number of replicas.
    MinReplicaCount int
    Immutable. The minimum number of replicas this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
    maxReplicaCount Integer
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic will be assumed, though Vertex AI may be unable to scale beyond a certain number of replicas.
    minReplicaCount Integer
    Immutable. The minimum number of replicas this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
    maxReplicaCount number
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic will be assumed, though Vertex AI may be unable to scale beyond a certain number of replicas.
    minReplicaCount number
    Immutable. The minimum number of replicas this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
    max_replica_count int
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic will be assumed, though Vertex AI may be unable to scale beyond a certain number of replicas.
    min_replica_count int
    Immutable. The minimum number of replicas this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
    maxReplicaCount Number
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic will be assumed, though Vertex AI may be unable to scale beyond a certain number of replicas.
    minReplicaCount Number
    Immutable. The minimum number of replicas this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.

    GoogleCloudAiplatformV1AutoscalingMetricSpecResponse, GoogleCloudAiplatformV1AutoscalingMetricSpecResponseArgs

    MetricName string
    The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
    Target int
    The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
    MetricName string
    The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
    Target int
    The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
    metricName String
    The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
    target Integer
    The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
    metricName string
    The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
    target number
    The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
    metric_name str
    The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
    target int
    The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
    metricName String
    The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
    target Number
    The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.

    GoogleCloudAiplatformV1DedicatedResourcesResponse, GoogleCloudAiplatformV1DedicatedResourcesResponseArgs

    AutoscalingMetricSpecs List<Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1AutoscalingMetricSpecResponse>
    Immutable. The metric specifications that override a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (which defaults to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
    MachineSpec Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1MachineSpecResponse
    Immutable. The specification of a single machine used by the prediction.
    MaxReplicaCount int
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
    MinReplicaCount int
    Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
    AutoscalingMetricSpecs []GoogleCloudAiplatformV1AutoscalingMetricSpecResponse
    Immutable. The metric specifications that override a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (which defaults to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
    MachineSpec GoogleCloudAiplatformV1MachineSpecResponse
    Immutable. The specification of a single machine used by the prediction.
    MaxReplicaCount int
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
    MinReplicaCount int
    Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
    autoscalingMetricSpecs List<GoogleCloudAiplatformV1AutoscalingMetricSpecResponse>
    Immutable. The metric specifications that override a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (which defaults to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
    machineSpec GoogleCloudAiplatformV1MachineSpecResponse
    Immutable. The specification of a single machine used by the prediction.
    maxReplicaCount Integer
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
    minReplicaCount Integer
    Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
    autoscalingMetricSpecs GoogleCloudAiplatformV1AutoscalingMetricSpecResponse[]
    Immutable. The metric specifications that override a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (which defaults to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
    machineSpec GoogleCloudAiplatformV1MachineSpecResponse
    Immutable. The specification of a single machine used by the prediction.
    maxReplicaCount number
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
    minReplicaCount number
    Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
    autoscaling_metric_specs Sequence[GoogleCloudAiplatformV1AutoscalingMetricSpecResponse]
    Immutable. The metric specifications that override a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (which defaults to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
    machine_spec GoogleCloudAiplatformV1MachineSpecResponse
    Immutable. The specification of a single machine used by the prediction.
    max_replica_count int
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
    min_replica_count int
    Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
    autoscalingMetricSpecs List<Property Map>
    Immutable. The metric specifications that override a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (which defaults to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
    machineSpec Property Map
    Immutable. The specification of a single machine used by the prediction.
    maxReplicaCount Number
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
    minReplicaCount Number
    Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.

    GoogleCloudAiplatformV1DeployedIndexAuthConfigAuthProviderResponse, GoogleCloudAiplatformV1DeployedIndexAuthConfigAuthProviderResponseArgs

    AllowedIssuers List<string>
    A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: service-account-name@project-id.iam.gserviceaccount.com
    Audiences List<string>
    The list of JWT audiences that are allowed to access. A JWT containing any of these audiences will be accepted.
    AllowedIssuers []string
    A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: service-account-name@project-id.iam.gserviceaccount.com
    Audiences []string
    The list of JWT audiences that are allowed to access. A JWT containing any of these audiences will be accepted.
    allowedIssuers List<String>
    A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: service-account-name@project-id.iam.gserviceaccount.com
    audiences List<String>
    The list of JWT audiences that are allowed to access. A JWT containing any of these audiences will be accepted.
    allowedIssuers string[]
    A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: service-account-name@project-id.iam.gserviceaccount.com
    audiences string[]
    The list of JWT audiences that are allowed to access. A JWT containing any of these audiences will be accepted.
    allowed_issuers Sequence[str]
    A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: service-account-name@project-id.iam.gserviceaccount.com
    audiences Sequence[str]
    The list of JWT audiences that are allowed to access. A JWT containing any of these audiences will be accepted.
    allowedIssuers List<String>
    A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: service-account-name@project-id.iam.gserviceaccount.com
    audiences List<String>
    The list of JWT audiences that are allowed to access. A JWT containing any of these audiences will be accepted.

    GoogleCloudAiplatformV1DeployedIndexAuthConfigResponse, GoogleCloudAiplatformV1DeployedIndexAuthConfigResponseArgs

    AuthProvider GoogleCloudAiplatformV1DeployedIndexAuthConfigAuthProviderResponse
    Defines the authentication provider that the DeployedIndex uses.
    authProvider GoogleCloudAiplatformV1DeployedIndexAuthConfigAuthProviderResponse
    Defines the authentication provider that the DeployedIndex uses.
    authProvider GoogleCloudAiplatformV1DeployedIndexAuthConfigAuthProviderResponse
    Defines the authentication provider that the DeployedIndex uses.
    auth_provider GoogleCloudAiplatformV1DeployedIndexAuthConfigAuthProviderResponse
    Defines the authentication provider that the DeployedIndex uses.
    authProvider Property Map
    Defines the authentication provider that the DeployedIndex uses.

    GoogleCloudAiplatformV1DeployedIndexResponse, GoogleCloudAiplatformV1DeployedIndexResponseArgs

    AutomaticResources Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1AutomaticResourcesResponse
    Optional. A description of resources that the DeployedIndex uses, which to large degree are decided by Vertex AI, and optionally allows only a modest additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
    CreateTime string
    Timestamp when the DeployedIndex was created.
    DedicatedResources Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1DedicatedResourcesResponse
    Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
    DeployedIndexAuthConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1DeployedIndexAuthConfigResponse
    Optional. If set, the authentication is enabled for the private endpoint.
    DeploymentGroup string
    Optional. The deployment group can be no longer than 64 characters (e.g. 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating deployment_groups with reserved_ip_ranges is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups (not including 'default').
    DisplayName string
    The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
    EnableAccessLogging bool
    Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
    Index string
    The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
    IndexSyncTime string
    The DeployedIndex may depend on various data on its original Index. Additionally when certain changes to the original Index are being done (e.g. when what the Index contains is being changed) the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, it means that this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal or before this sync time are contained in this DeployedIndex.
    PrivateEndpoints Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1IndexPrivateEndpointsResponse
    Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
    ReservedIpRanges List<string>
    Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
    AutomaticResources GoogleCloudAiplatformV1AutomaticResourcesResponse
    Optional. A description of resources that the DeployedIndex uses, which to large degree are decided by Vertex AI, and optionally allows only a modest additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
    CreateTime string
    Timestamp when the DeployedIndex was created.
    DedicatedResources GoogleCloudAiplatformV1DedicatedResourcesResponse
    Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
    DeployedIndexAuthConfig GoogleCloudAiplatformV1DeployedIndexAuthConfigResponse
    Optional. If set, the authentication is enabled for the private endpoint.
    DeploymentGroup string
    Optional. The deployment group can be no longer than 64 characters (e.g. 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating deployment_groups with reserved_ip_ranges is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means that if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups (not including 'default').
    DisplayName string
    The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
    EnableAccessLogging bool
    Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
    Index string
    The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
    IndexSyncTime string
    The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes are made to the original Index (e.g. when the contents of the Index change), the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal to or before this sync time are contained in this DeployedIndex.
    PrivateEndpoints GoogleCloudAiplatformV1IndexPrivateEndpointsResponse
    Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
    ReservedIpRanges []string
    Optional. A list of reserved IP ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided IP ranges. Otherwise, the index might be deployed to any IP ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses). Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
    automaticResources GoogleCloudAiplatformV1AutomaticResourcesResponse
    Optional. A description of resources that the DeployedIndex uses, which to a large degree are decided by Vertex AI and which allow only a modest amount of additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide an SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
    createTime String
    Timestamp when the DeployedIndex was created.
    dedicatedResources GoogleCloudAiplatformV1DedicatedResourcesResponse
    Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
    deployedIndexAuthConfig GoogleCloudAiplatformV1DeployedIndexAuthConfigResponse
    Optional. If set, the authentication is enabled for the private endpoint.
    deploymentGroup String
    Optional. The deployment group can be no longer than 64 characters (e.g. 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating deployment_groups with reserved_ip_ranges is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means that if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups (not including 'default').
    displayName String
    The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
    enableAccessLogging Boolean
    Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
    index String
    The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
    indexSyncTime String
    The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes are made to the original Index (e.g. when the contents of the Index change), the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal to or before this sync time are contained in this DeployedIndex.
    privateEndpoints GoogleCloudAiplatformV1IndexPrivateEndpointsResponse
    Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
    reservedIpRanges List<String>
    Optional. A list of reserved IP ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided IP ranges. Otherwise, the index might be deployed to any IP ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses). Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
    automaticResources GoogleCloudAiplatformV1AutomaticResourcesResponse
    Optional. A description of resources that the DeployedIndex uses, which to a large degree are decided by Vertex AI and which allow only a modest amount of additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide an SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
    createTime string
    Timestamp when the DeployedIndex was created.
    dedicatedResources GoogleCloudAiplatformV1DedicatedResourcesResponse
    Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
    deployedIndexAuthConfig GoogleCloudAiplatformV1DeployedIndexAuthConfigResponse
    Optional. If set, the authentication is enabled for the private endpoint.
    deploymentGroup string
    Optional. The deployment group can be no longer than 64 characters (e.g. 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating deployment_groups with reserved_ip_ranges is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means that if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups (not including 'default').
    displayName string
    The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
    enableAccessLogging boolean
    Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
    index string
    The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
    indexSyncTime string
    The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes are made to the original Index (e.g. when the contents of the Index change), the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal to or before this sync time are contained in this DeployedIndex.
    privateEndpoints GoogleCloudAiplatformV1IndexPrivateEndpointsResponse
    Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
    reservedIpRanges string[]
    Optional. A list of reserved IP ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided IP ranges. Otherwise, the index might be deployed to any IP ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses). Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
    automatic_resources GoogleCloudAiplatformV1AutomaticResourcesResponse
    Optional. A description of resources that the DeployedIndex uses, which to a large degree are decided by Vertex AI and which allow only a modest amount of additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide an SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
    create_time str
    Timestamp when the DeployedIndex was created.
    dedicated_resources GoogleCloudAiplatformV1DedicatedResourcesResponse
    Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
    deployed_index_auth_config GoogleCloudAiplatformV1DeployedIndexAuthConfigResponse
    Optional. If set, the authentication is enabled for the private endpoint.
    deployment_group str
    Optional. The deployment group can be no longer than 64 characters (e.g. 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating deployment_groups with reserved_ip_ranges is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means that if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups (not including 'default').
    display_name str
    The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
    enable_access_logging bool
    Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
    index str
    The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
    index_sync_time str
    The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes are made to the original Index (e.g. when the contents of the Index change), the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal to or before this sync time are contained in this DeployedIndex.
    private_endpoints GoogleCloudAiplatformV1IndexPrivateEndpointsResponse
    Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
    reserved_ip_ranges Sequence[str]
    Optional. A list of reserved IP ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided IP ranges. Otherwise, the index might be deployed to any IP ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses). Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
    automaticResources Property Map
    Optional. A description of resources that the DeployedIndex uses, which to a large degree are decided by Vertex AI and which allow only a modest amount of additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide an SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
    createTime String
    Timestamp when the DeployedIndex was created.
    dedicatedResources Property Map
    Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
    deployedIndexAuthConfig Property Map
    Optional. If set, the authentication is enabled for the private endpoint.
    deploymentGroup String
    Optional. The deployment group can be no longer than 64 characters (e.g. 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating deployment_groups with reserved_ip_ranges is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means that if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups (not including 'default').
    displayName String
    The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
    enableAccessLogging Boolean
    Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
    index String
    The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
    indexSyncTime String
    The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes are made to the original Index (e.g. when the contents of the Index change), the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal to or before this sync time are contained in this DeployedIndex.
    privateEndpoints Property Map
    Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
    reservedIpRanges List<String>
    Optional. A list of reserved IP ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided IP ranges. Otherwise, the index might be deployed to any IP ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses). Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
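
    All of the DeployedIndex fields above are read back from the service. Assuming the IndexEndpoint resource surfaces them through a deployedIndexes output, as this listing suggests, a minimal TypeScript sketch of checking the sync state of each DeployedIndex could look like the following; the resource name and location are placeholders.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical endpoint; displayName and location are placeholders.
    const endpoint = new google_native.aiplatform.v1.IndexEndpoint("my-endpoint", {
        displayName: "my-endpoint",
        location: "us-central1",
    });

    // For each DeployedIndex, export its displayName and indexSyncTime; comparing
    // indexSyncTime with the original Index's update_time indicates whether the
    // deployment is in sync (see the indexSyncTime description above).
    export const deployedIndexSyncState = endpoint.deployedIndexes.apply((indexes) =>
        (indexes ?? []).map((d) => ({ displayName: d.displayName, indexSyncTime: d.indexSyncTime })),
    );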

    GoogleCloudAiplatformV1EncryptionSpec, GoogleCloudAiplatformV1EncryptionSpecArgs

    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kms_key_name str
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
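
    GoogleCloudAiplatformV1EncryptionSpec is the type of the encryptionSpec constructor argument shown earlier. A minimal TypeScript sketch of creating a CMEK-protected IndexEndpoint might look like the following; the project, region, key ring, and key names are placeholders.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical CMEK-protected endpoint; the KMS key must be in the same
    // region as the endpoint, per the kmsKeyName description above.
    const cmekEndpoint = new google_native.aiplatform.v1.IndexEndpoint("cmek-endpoint", {
        displayName: "cmek-endpoint",
        location: "us-central1",   // placeholder region
        project: "my-project",     // placeholder project
        encryptionSpec: {
            kmsKeyName: "projects/my-project/locations/us-central1/keyRings/my-kr/cryptoKeys/my-key",
        },
    });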

    GoogleCloudAiplatformV1EncryptionSpecResponse, GoogleCloudAiplatformV1EncryptionSpecResponseArgs

    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kms_key_name str
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.

    GoogleCloudAiplatformV1IndexPrivateEndpointsResponse, GoogleCloudAiplatformV1IndexPrivateEndpointsResponseArgs

    MatchGrpcAddress string
    The ip address used to send match gRPC requests.
    ServiceAttachment string
    The name of the service attachment resource. Populated if private service connect is enabled.
    MatchGrpcAddress string
    The ip address used to send match gRPC requests.
    ServiceAttachment string
    The name of the service attachment resource. Populated if private service connect is enabled.
    matchGrpcAddress String
    The ip address used to send match gRPC requests.
    serviceAttachment String
    The name of the service attachment resource. Populated if private service connect is enabled.
    matchGrpcAddress string
    The ip address used to send match gRPC requests.
    serviceAttachment string
    The name of the service attachment resource. Populated if private service connect is enabled.
    match_grpc_address str
    The ip address used to send match gRPC requests.
    service_attachment str
    The name of the service attachment resource. Populated if private service connect is enabled.
    matchGrpcAddress String
    The ip address used to send match gRPC requests.
    serviceAttachment String
    The name of the service attachment resource. Populated if private service connect is enabled.
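
    These fields are output-only and are populated once the endpoint has a network configured and an index deployed to it. Assuming such an endpoint, a TypeScript sketch of surfacing the gRPC match address as a stack output could look like this; the project number, network name, and resource names are placeholders.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical VPC-peered endpoint; the project number and network name are placeholders.
    const vpcEndpoint = new google_native.aiplatform.v1.IndexEndpoint("vpc-endpoint", {
        displayName: "vpc-endpoint",
        location: "us-central1",
        network: "projects/1234567890/global/networks/my-vpc",
    });

    // privateEndpoints.matchGrpcAddress of the first DeployedIndex, once one exists.
    export const matchGrpcAddress = vpcEndpoint.deployedIndexes.apply(
        (indexes) => indexes?.[0]?.privateEndpoints?.matchGrpcAddress,
    );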

    GoogleCloudAiplatformV1MachineSpecResponse, GoogleCloudAiplatformV1MachineSpecResponseArgs

    AcceleratorCount int
    The number of accelerators to attach to the machine.
    AcceleratorType string
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    MachineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction. See the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob, or as part of WorkerPoolSpec, this field is required.
    TpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    AcceleratorCount int
    The number of accelerators to attach to the machine.
    AcceleratorType string
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    MachineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction. See the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob, or as part of WorkerPoolSpec, this field is required.
    TpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount Integer
    The number of accelerators to attach to the machine.
    acceleratorType String
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType String
    Immutable. The type of the machine. See the list of machine types supported for prediction. See the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob, or as part of WorkerPoolSpec, this field is required.
    tpuTopology String
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount number
    The number of accelerators to attach to the machine.
    acceleratorType string
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction. See the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob, or as part of WorkerPoolSpec, this field is required.
    tpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    accelerator_count int
    The number of accelerators to attach to the machine.
    accelerator_type str
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machine_type str
    Immutable. The type of the machine. See the list of machine types supported for prediction. See the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob, or as part of WorkerPoolSpec, this field is required.
    tpu_topology str
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount Number
    The number of accelerators to attach to the machine.
    acceleratorType String
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType String
    Immutable. The type of the machine. See the list of machine types supported for prediction. See the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob, or as part of WorkerPoolSpec, this field is required.
    tpuTopology String
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").

    GoogleCloudAiplatformV1PrivateServiceConnectConfig, GoogleCloudAiplatformV1PrivateServiceConnectConfigArgs

    EnablePrivateServiceConnect bool
    If true, expose the IndexEndpoint via private service connect.
    ProjectAllowlist List<string>
    A list of Projects from which the forwarding rule will target the service attachment.
    EnablePrivateServiceConnect bool
    If true, expose the IndexEndpoint via private service connect.
    ProjectAllowlist []string
    A list of Projects from which the forwarding rule will target the service attachment.
    enablePrivateServiceConnect Boolean
    If true, expose the IndexEndpoint via private service connect.
    projectAllowlist List<String>
    A list of Projects from which the forwarding rule will target the service attachment.
    enablePrivateServiceConnect boolean
    If true, expose the IndexEndpoint via private service connect.
    projectAllowlist string[]
    A list of Projects from which the forwarding rule will target the service attachment.
    enable_private_service_connect bool
    If true, expose the IndexEndpoint via private service connect.
    project_allowlist Sequence[str]
    A list of Projects from which the forwarding rule will target the service attachment.
    enablePrivateServiceConnect Boolean
    If true, expose the IndexEndpoint via private service connect.
    projectAllowlist List<String>
    A list of Projects from which the forwarding rule will target the service attachment.
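
    GoogleCloudAiplatformV1PrivateServiceConnectConfig is the type of the privateServiceConnectConfig constructor argument shown earlier. A minimal TypeScript sketch of exposing an IndexEndpoint over Private Service Connect might look like this; the consumer project ID is a placeholder.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical PSC-enabled endpoint; only the allow-listed projects can
    // create forwarding rules targeting the resulting service attachment.
    const pscEndpoint = new google_native.aiplatform.v1.IndexEndpoint("psc-endpoint", {
        displayName: "psc-endpoint",
        location: "us-central1",   // placeholder region
        privateServiceConnectConfig: {
            enablePrivateServiceConnect: true,
            projectAllowlist: ["my-consumer-project"],   // placeholder consumer project
        },
    });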

    GoogleCloudAiplatformV1PrivateServiceConnectConfigResponse, GoogleCloudAiplatformV1PrivateServiceConnectConfigResponseArgs

    EnablePrivateServiceConnect bool
    If true, expose the IndexEndpoint via private service connect.
    ProjectAllowlist List<string>
    A list of Projects from which the forwarding rule will target the service attachment.
    EnablePrivateServiceConnect bool
    If true, expose the IndexEndpoint via private service connect.
    ProjectAllowlist []string
    A list of Projects from which the forwarding rule will target the service attachment.
    enablePrivateServiceConnect Boolean
    If true, expose the IndexEndpoint via private service connect.
    projectAllowlist List<String>
    A list of Projects from which the forwarding rule will target the service attachment.
    enablePrivateServiceConnect boolean
    If true, expose the IndexEndpoint via private service connect.
    projectAllowlist string[]
    A list of Projects from which the forwarding rule will target the service attachment.
    enable_private_service_connect bool
    If true, expose the IndexEndpoint via private service connect.
    project_allowlist Sequence[str]
    A list of Projects from which the forwarding rule will target the service attachment.
    enablePrivateServiceConnect Boolean
    If true, expose the IndexEndpoint via private service connect.
    projectAllowlist List<String>
    A list of Projects from which the forwarding rule will target the service attachment.

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0