Kafka – Confluent S3 Connect – connector fails to connect to S3

I’m trying to build a demo application where I read data from a public source into a Kafka topic and write that data to S3, exactly as described in this post – https://www.confluent.fr/blog/apache-kafka-to-amazon-s3-exactly-once.

My S3 connector fails to connect to S3, even though I have valid credentials set both as environment variables and in my ~/.aws/credentials file.

I know the credentials are fine, as I’m able to access the bucket using the AWS CLI.

I tried two ways of configuring the AWS credentials provider in the connector.properties file.

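The relevant part of my connector.properties looked roughly like this (topic, bucket, and region are placeholders; s3.credentials.provider.class is the setting I switched between the two attempts):

```properties
# S3 sink connector properties (sketch; names and values are placeholders)
name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
topics=my-topic
s3.bucket.name=my-demo-bucket
s3.region=us-east-1
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
flush.size=3

# Attempt 1: default chain (env vars, ~/.aws/credentials, instance profile, ...)
s3.credentials.provider.class=com.amazonaws.auth.DefaultAWSCredentialsProviderChain

# Attempt 2: environment variables only
#s3.credentials.provider.class=com.amazonaws.auth.EnvironmentVariableCredentialsProvider
```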
When run with DefaultAWSCredentialsProviderChain, the connector seems to be connecting with an access key and secret key that are not the ones set in my environment variables (or ~/.aws/credentials), and it fails with this error:

org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception

com.amazonaws.services.s3.model.AmazonS3Exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 8FF2B58289B657EA), S3 Extended Request ID:

When run with EnvironmentVariableCredentialsProvider, it’s not able to see the environment variables, and the connector fails with the following error:

"trace": "org.apache.kafka.connect.errors.ConnectException: com.amazonaws.SdkClientException: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and

How do I get the S3 connector to see my environment variables?
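For what it’s worth, the variables are exported in the same shell session that starts the Connect worker. This is a quick sanity check I ran, confirming that a child process of that shell can see them (the key values and the worker start command are placeholders from my local setup):

```shell
# Env vars must be *exported* so the Connect worker JVM
# (a child process of this shell) inherits them.
export AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY        # placeholder value
export AWS_SECRET_ACCESS_KEY=exampleSecretKey  # placeholder value

# A child process should print the key; if this prints nothing,
# the worker won't see it either:
sh -c 'echo "child sees: $AWS_ACCESS_KEY_ID"'

# Then start the worker from this same shell, e.g.:
# ./bin/connect-standalone etc/worker.properties etc/s3-sink.properties
```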
