Top Related Projects
The official AWS SDK for Java 1.x. The AWS SDK for Java 2.x is available here: https://github.com/aws/aws-sdk-java-v2/
Apache Pulsar - distributed pub-sub messaging system
Mirror of Apache Kafka
Apache Hadoop
Quick Overview
The googleapis/google-cloud-java repository is a collection of Java client libraries for accessing various Google Cloud services, such as Cloud Storage, Datastore, Pub/Sub, and more. These libraries provide a convenient and idiomatic way for Java developers to interact with Google Cloud Platform (GCP) services.
Pros
- Comprehensive Coverage: The repository covers a wide range of Google Cloud services, allowing developers to easily integrate their Java applications with various GCP offerings.
- Idiomatic API: The libraries provide a Java-friendly API that closely matches the underlying GCP services, making it easier for developers to understand and use.
- Active Development: The project is actively maintained by the Google Cloud team, with regular updates and improvements to the libraries.
- Extensive Documentation: The project comes with detailed documentation, including usage examples and best practices, which can help developers get started quickly.
Cons
- Dependency Management: Depending on the number of GCP services used, the project can introduce a significant number of dependencies, which may complicate project setup and management.
- Learning Curve: Developers new to GCP may need to invest time in understanding the various services and how to effectively use the corresponding Java libraries.
- Limited Flexibility: The libraries are tightly coupled with the underlying GCP services, which may limit the ability to customize or extend the functionality beyond what is provided by the libraries.
- Performance Concerns: Depending on the specific use case and the volume of data being processed, the overhead introduced by the libraries may impact the overall performance of the application.
Code Examples
Here are a few code examples demonstrating the usage of the google-cloud-java libraries:
Cloud Storage Example
// Import the necessary classes
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class CloudStorageExample {
    public static void main(String[] args) {
        // Create a Storage client
        Storage storage = StorageOptions.getDefaultInstance().getService();

        // Upload a file to a Google Cloud Storage bucket
        BlobId blobId = BlobId.of("my-bucket", "file.txt");
        Blob blob = storage.create(BlobInfo.newBuilder(blobId).build(), "Hello, Cloud Storage!".getBytes());
        System.out.println("File uploaded: " + blob.getMediaLink());
    }
}
This example demonstrates how to use the google-cloud-storage library to upload a file to a Google Cloud Storage bucket.
Datastore Example
// Import the necessary classes
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Key;
public class DatastoreExample {
    public static void main(String[] args) {
        // Create a Datastore client
        Datastore datastore = DatastoreOptions.getDefaultInstance().getService();

        // Create a new entity in Datastore
        Key taskKey = datastore.newKeyFactory().setKind("Task").newKey("task1");
        Entity task = Entity.newBuilder(taskKey)
            .set("description", "Buy milk")
            .set("done", false)
            .build();
        datastore.put(task);
        System.out.println("Entity created: " + task.getKey().getName());
    }
}
This example demonstrates how to use the google-cloud-datastore library to create a new entity in Google Cloud Datastore.
Pub/Sub Example
// Import the necessary classes
import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;

public class PubSubExample {
    public static void main(String[] args) throws Exception {
        // Create a Pub/Sub publisher for the target topic
        TopicName topicName = TopicName.of("my-project", "my-topic");
        Publisher publisher = Publisher.newBuilder(topicName).build();

        // Publish a message to the topic
        ByteString data = ByteString.copyFromUtf8("Hello, Pub/Sub!");
        PubsubMessage message = PubsubMessage.newBuilder().setData(data).build();
        publisher.publish(message);

        // Release resources held by the publisher
        publisher.shutdown();
    }
}
This example demonstrates how to use the google-cloud-pubsub library to publish a message to a Google Cloud Pub/Sub topic.
Competitor Comparisons
The official AWS SDK for Java 1.x. The AWS SDK for Java 2.x is available here: https://github.com/aws/aws-sdk-java-v2/
Pros of aws-sdk-java
- More comprehensive coverage of AWS services
- Longer history and larger community support
- Better documentation and extensive examples
Cons of aws-sdk-java
- Larger library size, potentially increasing application footprint
- More complex API structure, steeper learning curve
- Slower release cycle for new features compared to google-cloud-java
Code Comparison
aws-sdk-java:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(Regions.US_WEST_2)
.build();
s3Client.putObject(bucketName, key, content);
google-cloud-java:
Storage storage = StorageOptions.getDefaultInstance().getService();
BlobId blobId = BlobId.of(bucketName, objectName);
BlobInfo blobInfo = BlobInfo.newBuilder(blobId).setContentType("text/plain").build();
storage.create(blobInfo, content.getBytes(UTF_8));
Both SDKs provide similar functionality for interacting with their respective cloud services. The aws-sdk-java tends to have more verbose method names and requires explicit client creation, while google-cloud-java often uses a more concise, fluent API style. The google-cloud-java example demonstrates a more straightforward approach to creating and uploading objects, whereas the aws-sdk-java example requires separate steps for client initialization and object creation.
Apache Pulsar - distributed pub-sub messaging system
Pros of Pulsar
- Open-source, community-driven project with Apache governance
- Supports multi-tenancy and geo-replication out of the box
- Offers both streaming and queuing messaging models
Cons of Pulsar
- Steeper learning curve compared to Google Cloud Java client libraries
- Less integrated with Google Cloud ecosystem and services
- May require more setup and configuration for basic use cases
Code Comparison
Pulsar producer example:
PulsarClient client = PulsarClient.builder()
.serviceUrl("pulsar://localhost:6650")
.build();
Producer<byte[]> producer = client.newProducer()
.topic("my-topic")
.create();
producer.send("Hello, Pulsar!".getBytes());
Google Cloud Pub/Sub example:
TopicName topicName = TopicName.of("my-project", "my-topic");
Publisher publisher = Publisher.newBuilder(topicName).build();
ByteString data = ByteString.copyFromUtf8("Hello, Pub/Sub!");
PubsubMessage pubsubMessage = PubsubMessage.newBuilder().setData(data).build();
publisher.publish(pubsubMessage);
Both repositories provide client libraries for messaging systems, but they serve different purposes. Pulsar is a standalone messaging and streaming platform, while google-cloud-java offers client libraries for various Google Cloud services, including Pub/Sub for messaging. The choice between them depends on specific project requirements and infrastructure preferences.
Mirror of Apache Kafka
Pros of Kafka
- Highly scalable and distributed streaming platform
- Supports real-time data processing and event-driven architectures
- Large and active open-source community
Cons of Kafka
- Steeper learning curve and more complex setup
- Requires additional infrastructure management
- May be overkill for simpler messaging needs
Code Comparison
Kafka producer example:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
Producer<String, String> producer = new KafkaProducer<>(props);
Google Cloud Pub/Sub publisher example:
TopicName topicName = TopicName.of("project-id", "topic-id");
Publisher publisher = Publisher.newBuilder(topicName).build();
ByteString data = ByteString.copyFromUtf8("Hello, Pub/Sub!");
PubsubMessage pubsubMessage = PubsubMessage.newBuilder().setData(data).build();
publisher.publish(pubsubMessage);
Both repositories provide Java client libraries for their respective messaging systems. Kafka offers a more comprehensive distributed streaming platform, while Google Cloud Java focuses on integration with various Google Cloud services, including Pub/Sub for messaging. The choice between them depends on specific project requirements, existing infrastructure, and scalability needs.
Apache Hadoop
Pros of Hadoop
- Open-source and community-driven development
- Extensive ecosystem with many related projects (e.g., Hive, HBase, Spark)
- Designed for large-scale distributed processing and storage
Cons of Hadoop
- Steeper learning curve and more complex setup
- Requires more manual configuration and maintenance
- May be overkill for smaller-scale data processing tasks
Code Comparison
Hadoop (Java MapReduce example):
public class WordCount extends Configured implements Tool {
public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
// ... (mapper implementation)
}
// ... (reducer and main method)
}
Google Cloud Java (BigQuery example):
TableResult result = bigquery.query(QueryJobConfiguration.newBuilder(
"SELECT word, COUNT(*) as count FROM my_dataset.my_table GROUP BY word")
.setUseLegacySql(false)
.build());
for (FieldValueList row : result.iterateAll()) {
String word = row.get("word").getStringValue();
long count = row.get("count").getLongValue();
System.out.printf("%s: %d%n", word, count);
}
The Hadoop example shows a typical MapReduce job structure, while the Google Cloud Java example demonstrates a simpler BigQuery operation. Hadoop offers more control over the distributed processing, while Google Cloud Java provides a higher-level abstraction for cloud-based data operations.
README
Google Cloud Java Client Libraries
Java idiomatic client for Google Cloud Platform services.
Supported APIs
Libraries are available on GitHub and Maven Central for developing Java applications that interact with individual Google Cloud services:
If the service is not listed, google-api-java-client interfaces with additional Google Cloud APIs using a legacy REST interface.
When building Java applications, preference should be given to the libraries listed in the table.
Specifying a Project ID
Most google-cloud libraries require a project ID. There are multiple ways to specify this project ID.

- When using google-cloud libraries from within Compute/App Engine, there's no need to specify a project ID. It is automatically inferred from the production environment.
- When using google-cloud elsewhere, you can do one of the following:
  - Supply the project ID when building the service options. For example, to use Datastore from a project with ID "PROJECT_ID", you can write:
    Datastore datastore = DatastoreOptions.newBuilder().setProjectId("PROJECT_ID").build().getService();
  - Set the environment variable GOOGLE_CLOUD_PROJECT to your desired project ID.
  - Set the project ID using the Google Cloud SDK. To use the SDK, download the SDK if you haven't already, and set the project ID from the command line. For example:
    gcloud config set project PROJECT_ID
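As a sketch, the environment-variable route looks like this in a shell (my-project-id is a placeholder for your actual project ID):

```shell
# Make the project ID visible to any google-cloud client started from this shell
export GOOGLE_CLOUD_PROJECT=my-project-id
echo "$GOOGLE_CLOUD_PROJECT"   # prints my-project-id
```

Any Java process launched from this shell inherits the variable, so no code change is needed.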
google-cloud determines the project ID from the following sources in the listed order, stopping once it finds a value:

- The project ID supplied when building the service options
- The project ID specified by the GOOGLE_CLOUD_PROJECT environment variable
- The App Engine / Compute Engine project ID
- The project ID specified in the JSON credentials file pointed to by the GOOGLE_APPLICATION_CREDENTIALS environment variable
- The Google Cloud SDK project ID
In cases where the library expects a project ID explicitly, we provide a helper that returns the inferred project ID:
import com.google.cloud.ServiceOptions;
...
String projectId = ServiceOptions.getDefaultProjectId();
Authentication
google-cloud-java uses google-auth-library-java (https://github.com/googleapis/google-auth-library-java) to authenticate requests. google-auth-library-java supports a wide range of authentication types; see the project's README and javadoc for more details.
Google Cloud Platform environment
When using Google Cloud libraries from a Google Cloud Platform environment such as Compute Engine, Kubernetes Engine, or App Engine, no additional authentication steps are necessary.
For example:
Storage storage = StorageOptions.getDefaultInstance().getService();
or:
CloudTasksClient cloudTasksClient = CloudTasksClient.create();
Other environments
Using a service account (recommended)
- Create a service account in the Google Cloud Console and download its JSON key.
- After downloading that key, you must do one of the following:
  - Define the environment variable GOOGLE_APPLICATION_CREDENTIALS to be the location of the key. For example:
    export GOOGLE_APPLICATION_CREDENTIALS=/path/to/my/key.json
  - Supply the JSON credentials file when building the service options. For example, this Storage object has the necessary permissions to interact with your Google Cloud Storage data:
    Storage storage = StorageOptions.newBuilder()
        .setCredentials(ServiceAccountCredentials.fromStream(new FileInputStream("/path/to/my/key.json")))
        .build()
        .getService();
Local development/testing
If running locally for development/testing, you can use the Google Cloud SDK. Create Application Default Credentials with gcloud auth application-default login, and then google-cloud will automatically detect such credentials.
Existing OAuth2 access token
If you already have an OAuth2 access token, you can use it to authenticate (notice that in this case, the access token will not be automatically refreshed):
Credentials credentials = GoogleCredentials.create(new AccessToken(accessToken, expirationTime));
Storage storage = StorageOptions.newBuilder()
.setCredentials(credentials)
.build()
.getService();
or:
Credentials credentials = GoogleCredentials.create(new AccessToken(accessToken, expirationTime));
CloudTasksSettings cloudTasksSettings = CloudTasksSettings.newBuilder()
.setCredentialsProvider(FixedCredentialsProvider.create(credentials))
.build();
CloudTasksClient cloudTasksClient = CloudTasksClient.create(cloudTasksSettings);
Application Default Credentials
If no credentials are provided, google-cloud will attempt to detect them from the environment using GoogleCredentials.getApplicationDefault(), which will search for Application Default Credentials in the following locations (in order):

- The credentials file pointed to by the GOOGLE_APPLICATION_CREDENTIALS environment variable
- Credentials provided by the Google Cloud SDK's gcloud auth application-default login command
- Google App Engine built-in credentials
- Google Cloud Shell built-in credentials
- Google Compute Engine built-in credentials
Authenticating with an API Key
Authenticating with API Keys is supported by a handful of Google Cloud APIs.
We are actively exploring ways to improve the API Key experience. Currently, to use an API Key with a Java client library, you need to set the header for the relevant service Client manually.
For example, to set the API Key with the Language service:
public LanguageServiceClient createGrpcClientWithApiKey(String apiKey) throws Exception {
    // Manually set the API key via the request header
    Map<String, String> header = new HashMap<>();
    header.put("x-goog-api-key", apiKey);
    FixedHeaderProvider headerProvider = FixedHeaderProvider.create(header);

    // Create the client with a gRPC transport that attaches the header
    TransportChannelProvider transportChannelProvider =
        InstantiatingGrpcChannelProvider.newBuilder().setHeaderProvider(headerProvider).build();
    LanguageServiceSettings settings =
        LanguageServiceSettings.newBuilder().setTransportChannelProvider(transportChannelProvider).build();
    return LanguageServiceClient.create(settings);
}
An example instantiation of the Language client using REST:
public LanguageServiceClient createRestClientWithApiKey(String apiKey) throws Exception {
    // Manually set the API key header
    Map<String, String> header = new HashMap<>();
    header.put("x-goog-api-key", apiKey);
    FixedHeaderProvider headerProvider = FixedHeaderProvider.create(header);

    // Create the client with an HTTP/JSON transport that attaches the header
    TransportChannelProvider transportChannelProvider =
        InstantiatingHttpJsonChannelProvider.newBuilder().setHeaderProvider(headerProvider).build();
    LanguageServiceSettings settings =
        LanguageServiceSettings.newBuilder().setTransportChannelProvider(transportChannelProvider).build();
    return LanguageServiceClient.create(settings);
}
Troubleshooting
To get help, follow the instructions in the Troubleshooting document.
Configuring a Proxy
Google Cloud client libraries use HTTPS and gRPC for the underlying communication with the services. In both protocols, you can configure a proxy using the https.proxyHost and (optional) https.proxyPort system properties.
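As a minimal sketch, these two JVM system properties can be set programmatically before any client is created (the host and port below are placeholders for your proxy):

```java
public class ProxySetup {
    public static void main(String[] args) {
        // JVM-wide HTTPS proxy settings, read by the client libraries' transports.
        // proxy.example.com:3128 is a placeholder for your actual proxy.
        System.setProperty("https.proxyHost", "proxy.example.com");
        System.setProperty("https.proxyPort", "3128");
        System.out.println(System.getProperty("https.proxyHost")
            + ":" + System.getProperty("https.proxyPort")); // prints "proxy.example.com:3128"
    }
}
```

The same properties can instead be passed on the command line with -Dhttps.proxyHost=... -Dhttps.proxyPort=..., which avoids ordering concerns since they are set before any code runs.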
gRPC Custom Proxy Configuration
For a more custom proxy with gRPC, you will need to supply a ProxyDetector to the ManagedChannelBuilder:
import com.google.api.core.ApiFunction;
import com.google.api.gax.rpc.TransportChannelProvider;
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.CloudTasksSettings;
import com.google.cloud.tasks.v2.stub.CloudTasksStubSettings;
import io.grpc.HttpConnectProxiedSocketAddress;
import io.grpc.ManagedChannelBuilder;
import io.grpc.ProxiedSocketAddress;
import io.grpc.ProxyDetector;
import javax.annotation.Nullable;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
public CloudTasksClient getService() throws IOException {
    TransportChannelProvider transportChannelProvider =
        CloudTasksStubSettings.defaultGrpcTransportProviderBuilder()
            .setChannelConfigurator(
                new ApiFunction<ManagedChannelBuilder, ManagedChannelBuilder>() {
                    @Override
                    public ManagedChannelBuilder apply(ManagedChannelBuilder managedChannelBuilder) {
                        return managedChannelBuilder.proxyDetector(
                            new ProxyDetector() {
                                @Nullable
                                @Override
                                public ProxiedSocketAddress proxyFor(SocketAddress socketAddress)
                                    throws IOException {
                                    return HttpConnectProxiedSocketAddress.newBuilder()
                                        .setUsername(PROXY_USERNAME)
                                        .setPassword(PROXY_PASSWORD)
                                        .setProxyAddress(new InetSocketAddress(PROXY_HOST, PROXY_PORT))
                                        .setTargetAddress((InetSocketAddress) socketAddress)
                                        .build();
                                }
                            });
                    }
                })
            .build();
    CloudTasksSettings cloudTasksSettings =
        CloudTasksSettings.newBuilder()
            .setTransportChannelProvider(transportChannelProvider)
            .build();
    return CloudTasksClient.create(cloudTasksSettings);
}
Long Running Operations
Long-running operations (LROs) are often used for API calls that are expected to take a long time to complete (e.g. provisioning a GCE instance or a Dataflow pipeline). The initial API call creates an "operation" on the server and returns an Operation ID to track its progress. LRO RPCs have the suffix Async appended to the call name (e.g. clusterControllerClient.createClusterAsync()).

Our generated clients provide a convenient interface for starting the operation and then waiting for it to complete. This is accomplished by returning an OperationFuture. When calling get() on the OperationFuture, the client library will poll the operation to check its status.

For example, take a sample createCluster Operation in google-cloud-dataproc v4.20.0:
try (ClusterControllerClient clusterControllerClient = ClusterControllerClient.create()) {
    CreateClusterRequest request =
        CreateClusterRequest.newBuilder()
            .setProjectId("{PROJECT_ID}")
            .setRegion("{REGION}")
            .setCluster(Cluster.newBuilder().build())
            .setRequestId("{REQUEST_ID}")
            .setActionOnFailedPrimaryWorkers(FailureAction.forNumber(0))
            .build();
    OperationFuture<Cluster, ClusterOperationMetadata> future =
        clusterControllerClient.createClusterOperationCallable().futureCall(request);
    // Do something.
    Cluster response = future.get();
} catch (CancellationException e) {
    // Exceeded the default RPC timeout without the Operation completing.
    // Library is no longer polling for the Operation status. Consider
    // increasing the timeout.
}
LRO Timeouts
The polling operations have a default timeout that varies from service to service. If the operation exceeds the total timeout, the library will throw a java.util.concurrent.CancellationException with the message "Task was cancelled." A CancellationException does not mean that the backend GCP Operation was cancelled; it is thrown by the client library when the total timeout has elapsed without receiving a successful status from the operation. Our client libraries respect the values configured in the OperationTimedPollAlgorithm for each RPC.

Note: The client library handles the Operation's polling mechanism for you. By default, there is no need to manually poll the status yourself.
Default LRO Values
Each LRO RPC has a set of pre-configured default values. You can find these values in each client's StubSettings class; the default LRO settings are initialized inside the initDefaults() method in the nested Builder class.
For example, in google-cloud-aiplatform v3.24.0, the default OperationTimedPollAlgorithm has these default values:
OperationTimedPollAlgorithm.create(
RetrySettings.newBuilder()
.setInitialRetryDelay(Duration.ofMillis(5000L))
.setRetryDelayMultiplier(1.5)
.setMaxRetryDelay(Duration.ofMillis(45000L))
.setInitialRpcTimeout(Duration.ZERO)
.setRpcTimeoutMultiplier(1.0)
.setMaxRpcTimeout(Duration.ZERO)
.setTotalTimeout(Duration.ofMillis(300000L))
.build())
Both retries and LROs share the same RetrySettings class. The values above correspond to:

- Total Timeout (maximum time allowed for polling): 5 minutes
- Initial Retry Delay (initial delay before the first poll): 5 seconds
- Max Retry Delay (maximum delay between polls): 45 seconds
- Retry Delay Multiplier (multiplier used to increase the poll delay): 1.5

The RPC Timeout values have no use in LROs and can be omitted or set to the default values (Duration.ZERO for the timeouts, 1.0 for the multiplier).
Configuring LRO Timeouts
To configure the LRO values, create an OperationTimedPollAlgorithm object and update the RPC's polling algorithm. For example:
ClusterControllerSettings.Builder settingsBuilder = ClusterControllerSettings.newBuilder();
TimedRetryAlgorithm timedRetryAlgorithm = OperationTimedPollAlgorithm.create(
RetrySettings.newBuilder()
.setInitialRetryDelay(Duration.ofMillis(500L))
.setRetryDelayMultiplier(1.5)
.setMaxRetryDelay(Duration.ofMillis(5000L))
.setInitialRpcTimeout(Duration.ZERO) // ignored
.setRpcTimeoutMultiplier(1.0) // ignored
.setMaxRpcTimeout(Duration.ZERO) // ignored
.setTotalTimeout(Duration.ofHours(24L)) // set polling timeout to 24 hours
.build());
settingsBuilder.createClusterOperationSettings()
.setPollingAlgorithm(timedRetryAlgorithm);
ClusterControllerClient clusterControllerClient = ClusterControllerClient.create(settingsBuilder.build());
Note: The configuration above only modifies the LRO values for the createClusterOperation RPC. The other RPCs in the client will still use their own pre-configured LRO values.
Managing Dependencies
If you are using more than one Google Cloud client library, we recommend you use one of our Bill of Material (BOM) artifacts to help manage dependency versions. For more information, see Using the Cloud Client Libraries.
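As a sketch of the BOM approach in Maven, importing the com.google.cloud:libraries-bom artifact lets you omit versions on the individual client dependencies (the BOM version shown is a placeholder — use the latest release):

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.cloud</groupId>
      <artifactId>libraries-bom</artifactId>
      <version>26.32.0</version> <!-- placeholder; use the latest release -->
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-storage</artifactId>
    <!-- no version needed; managed by the BOM -->
  </dependency>
</dependencies>
```

With this in place, every google-cloud artifact resolves to a mutually compatible version, which avoids diamond-dependency conflicts between clients.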
Java Versions
Java 8 or above is required for using the clients in this repository.
Supported Platforms
Clients in this repository use either HTTP or gRPC for the transport layer. All HTTP-based clients should work in all environments.
For clients that use gRPC, the supported platforms are constrained by the platforms that Forked Tomcat Native supports: the x86_64 architecture only, and the Mac OS X, Windows, and Linux operating systems. Additionally, gRPC constrains the use of platforms with threading restrictions.
Thus, the following are not supported:
- Android
- Consider Firebase, which includes many of these APIs.
- It is possible to use these libraries in many cases, although it is unsupported. You can find examples, such as this one, in this example repository but consider the risks carefully before using these libraries in an application.
- Raspberry Pi (since it runs on the ARM architecture)
- Google App Engine Standard Java 7
The following environments should work (among others):
- standalone Windows on x86_64
- standalone Mac OS X on x86_64
- standalone Linux on x86_64
- Google Compute Engine (GCE)
- Google Container Engine (GKE)
- Google App Engine Standard Java 8 (GAE Std J8)
- Google App Engine Flex (GAE Flex)
- Alpine Linux (Java 11+)
Testing
This library provides tools to help write tests for code that uses google-cloud services.
See TESTING to read more about using our testing helpers.
Versioning
This library follows Semantic Versioning, with some additional qualifications:
- Components marked with @BetaApi or @Experimental are considered to be "0.x" features inside a "1.x" library. This means they can change between minor and patch releases in incompatible ways. These features should not be used by any library "B" that itself has consumers, unless the components of library B that use @BetaApi features are also marked with @BetaApi. Features marked as @BetaApi are on a path to eventually become "1.x" features with the marker removed.

  Special exception for google-cloud-java: google-cloud-java is allowed to depend on @BetaApi features in gax-java without declaring the consuming code @BetaApi, because gax-java and google-cloud-java move in step with each other. For this reason, gax-java should not be used independently of google-cloud-java.

- Components marked with @InternalApi are technically public, but only because of the limitations of Java's access modifiers. For the purposes of semver, they should be considered private.

- Interfaces marked with @InternalExtensionOnly are public, but should only be implemented by internal classes. For the purposes of semver, we reserve the right to add to these interfaces without default implementations (for Java 7).
Please note these clients are currently under active development. Any release versioned 0.x.y is subject to backwards incompatible changes at any time.
Stable
Libraries defined at a Stable quality level are expected to be stable and all updates in the libraries are guaranteed to be backwards-compatible. Any backwards-incompatible changes will lead to the major version increment (1.x.y -> 2.0.0).
Preview
Libraries defined at a Preview quality level are still a work in progress and are more likely to get backwards-incompatible updates. Additionally, it's possible for Preview libraries to be deprecated and deleted before ever being promoted to Stable.
IDE Plugins
If you're using IntelliJ or Eclipse, you can add client libraries to your project using these IDE plugins:
Besides adding client libraries, the plugins provide additional functionality, such as service account key management. Refer to the documentation for each plugin for more details.
These client libraries can be used on the App Engine standard Java 8 runtime and App Engine flexible (including the Compat runtime). Most of the libraries do not work on the App Engine standard Java 7 runtime; however, Datastore, Storage, and BigQuery should work.
Contributing
Contributions to this library are always welcome and highly encouraged.
See google-cloud's CONTRIBUTING documentation and the shared documentation for more information on how to get started.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. See Code of Conduct for more information.
License
Apache 2.0 - See LICENSE for more information.