Java and IAM Roles

By Chris Maki | December 17, 2018

In my first post, IAM Roles in AWS, you created an EC2 instance and directly accessed a restricted S3 bucket. Today, you’ll create a Java application that uses an EC2 role to access the same restricted S3 bucket.

Here’s what you’re going to do:

  1. Create a simple Java application

  2. Create an S3 bucket

  3. Create a customer managed policy

  4. Create an IAM role, using the customer managed policy, to manage access to the S3 bucket

  5. Add a bucket policy to your S3 bucket

  6. Create an EC2 instance

  7. Run your Java app to verify access

That’s a lot of stuff. The details for creating an S3 bucket, the roles, and setting up your EC2 instance are covered in detail here. I’ll include the commands in this post for reference, with links back to the specific sections from before if you want more details.

Let’s get started.

Create a Java app

You can find a copy of the application we are building here. The code you will use is a combination of several other posts, with an emphasis on the Spring Boot Uploading Files getting-started guide.

To create your project skeleton, use the Spring Initializr project available at start.spring.io:

[Screenshot: creating the project with Spring Initializr at start.spring.io]

Keeping with the CLI theme of this post, here’s how to create the same project using the command line and HTTPie (my favorite HTTP CLI tool for Mac):

$ mkdir java-s3
$ cd java-s3
$ http -j https://start.spring.io/starter.zip type==gradle-project \
    packageName==com.ripcitysoftware.aws \
    dependencies==web,devtools -o rcs-s3-project.zip
$ unzip rcs-s3-project.zip
$ rm rcs-s3-project.zip
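
If you don’t have HTTPie installed, roughly the same request can be made with curl (a sketch; the query parameters are the same ones passed to HTTPie above):

$ curl -G https://start.spring.io/starter.zip \
    -d type=gradle-project \
    -d packageName=com.ripcitysoftware.aws \
    -d dependencies=web,devtools \
    -o rcs-s3-project.zip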

Unlike the zip file created by the Spring Initializr web page, the rcs-s3-project.zip file created above does not have a root directory; when you unzip it, all the contents are placed in your current directory.

The zip file created by the web page includes a root directory, rcs-s3-project, as part of the archive (the directory name comes from the Artifact text field).

Now that you have a Spring Boot, Java, and Gradle application, you need to add the AWS SDK. Open build.gradle in your editor/IDE of choice, navigate to the dependencies section, and add the AWS SDK (line 3 below):

// the dependencies section should be around line 27 in the source file.

dependencies {
  implementation('org.springframework.boot:spring-boot-starter-web')
  implementation('com.amazonaws:aws-java-sdk:1.11.336')
  runtimeOnly('org.springframework.boot:spring-boot-devtools')
  testImplementation('org.springframework.boot:spring-boot-starter-test')
}
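
One side note: the aws-java-sdk artifact pulls in the client for every AWS service. Since this application only talks to S3, a lighter alternative (optional; the full SDK works fine for this post) is to depend on just the S3 client:

dependencies {
  implementation('com.amazonaws:aws-java-sdk-s3:1.11.336')
}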

Create Storage Service

The first class to create is S3StorageService. This class will encapsulate all interaction with S3 in your Java application.

package com.ripcitysoftware.aws;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

import java.util.List;
import java.util.stream.Collectors;

public class S3StorageService {

    AmazonS3 s3;

    public void init() {
        s3 = AmazonS3ClientBuilder.standard().build(); // (1)
    }

    public List<String> listObjects() {
        List<S3ObjectSummary> summaries = null;
        String bucketName = "ripcitysoftware";
        try {
            ObjectListing objectListing = s3.listObjects(bucketName);
            summaries = objectListing.getObjectSummaries();
        } catch (Exception e) {
            throw new StorageException("Failed to list objects for bucket", e);
        }
        return summaries.stream().map(S3ObjectSummary::getKey).collect(Collectors.toList());
    }
}
(1) This will cause the AWS SDK to try all of its S3 authentication methods, which is exactly what you want: in your local development environment the SDK picks up your stored credentials, and when running on an EC2 instance it uses the instance profile.
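
For the curious, here’s a rough sketch of what that one-line builder call expands to. Passing the default provider chain explicitly is equivalent, and is a useful starting point if you ever need to swap in a specific credentials provider:

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;

// explicit form of AmazonS3ClientBuilder.standard().build(); the chain
// checks environment variables, JVM system properties, your local
// credentials file, and finally the EC2 instance profile, in that order
s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
        .build();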

Create Web Service endpoint

Next, create a controller for your web service endpoint named FileUploadController.java. This controller will use the S3StorageService class you just created.

package com.ripcitysoftware.aws;

import java.io.IOException;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class FileUploadController {

    private S3StorageService storageService;

    @Autowired
    public FileUploadController(S3StorageService storageService) {
        this.storageService = storageService;
    }

    @GetMapping("/")
    @ResponseBody
    public ResponseEntity<Model> listUploadedFiles(Model model) throws IOException {
        model.addAttribute("files", storageService.listObjects()); // (1)
        return ResponseEntity.ok(model);
    }
}
(1) Invoke the listObjects() method to fetch all objects in the ripcitysoftware bucket.

Update Application Class

To bring everything together, you will need to update the DemoApplication class created for you by Spring Initializr to create the S3StorageService bean:

package com.ripcitysoftware.aws;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Bean
    S3StorageService createStorageService() {
        S3StorageService storageService = new S3StorageService();
        storageService.init();
        return storageService;
    }
}

Here is the StorageException class, which is used in S3StorageService:

package com.ripcitysoftware.aws;

public class StorageException extends RuntimeException {

    public StorageException(String message, Throwable cause) {
        super(message, cause);
    }
}

Create S3 Bucket

Before I get ahead of myself, you’ll need an S3 bucket to test your application. If you followed the last post, you already have one. If not, you’ll need to install and configure the AWS CLI (you can find instructions here to install it and here to configure it). With the AWS CLI installed and ready to go, create the S3 bucket:

S3 bucket names need to be unique, DNS-compliant names. You may need to add a digit or some other character(s) to the end of the bucket name ripcitysoftware, or use a different name; you may have to try a few different names before you find a unique one.
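
A quick, optional way to test a name is head-bucket: a 404 (Not Found) means the name is free, while a 403 (Forbidden) means another account already owns it:

$ aws s3api head-bucket --bucket ripcitysoftware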

# create an S3 bucket
$ aws s3api create-bucket --bucket ripcitysoftware --acl private \
    --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2
{
    "Location": "http://ripcitysoftware.s3.amazonaws.com/"
}
$
# create a local file
$ touch test-file
# copy the empty file to S3
$ aws s3 cp test-file s3://ripcitysoftware
upload: ./test-file to s3://ripcitysoftware/test-file
$

Test Application locally

Now you are ready to run the service in your local dev env:

 $ gradle bootRun

> Task :bootRun

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.1.RELEASE)

2018-12-16 13:26:44.606  INFO 23989 --- [  restartedMain] com.ripcitysoftware.aws.DemoApplication  : Starting DemoApplication on macinfiityi9mbp.lan with PID 23989 (/Users/chrismaki/dev/rcs/blog-posts/s3-java/build/classes/java/main started by chrismaki in /Users/chrismaki/dev/rcs/blog-posts/s3-java)
...
2018-12-16 13:26:46.356  INFO 23989 --- [  restartedMain] com.ripcitysoftware.aws.DemoApplication  : Started DemoApplication in 2.085 seconds (JVM running for 2.537)
<=========----> 75% EXECUTING [20s]
> :bootRun

In another terminal, access your local Spring Boot application:

$ http :8080/
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Sun, 16 Dec 2018 21:29:20 GMT
Transfer-Encoding: chunked

{
    "files": [
        "test-file"
    ]
}

You’ve tested your application and it’s working as expected, so you’re ready to move on to the AWS tasks. At this point you can stop your Java application.

Where you are so far:

  ✓ Create a simple Java application

  ✓ Create an S3 bucket

  • Create a customer managed policy

  • Create an IAM role

  • Add a bucket policy to your S3 bucket

  • Create an EC2 instance

  • Run your Java app to verify access

Create AWS Resources

This section is described in more detail here. So you don’t have to bounce between posts at this point, all the commands you’ll need are below.

S3 and IAM

  • Create an IAM Role

# Create the Policy Document
$ cat << EOF > ec2-policy-document.json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Action": "sts:AssumeRole",
    "Effect": "Allow",
    "Principal": {
      "Service": "ec2.amazonaws.com"
    }
  }]
}
EOF
# Using the above json file, create the role
$ aws iam create-role --role-name rcs-s3-crud-role \
    --assume-role-policy-document file://ec2-policy-document.json
  • Create an IAM Policy

# Create the IAM Policy json file
$ cat << EOF > rcs-crud-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::ripcitysoftware",
        "arn:aws:s3:::ripcitysoftware/*"
      ]
    }
  ]
}
EOF
# Create an IAM Policy using the above file
$ aws iam create-policy --policy-name rcs-crud-policy \
    --policy-document file://rcs-crud-policy.json
  • Attach the policy to the role

# get your account number, you'll need it for the next command too
$ aws sts get-caller-identity --output text --query 'Account'
123456789012
# replace 123456789012 with your account number
$ aws iam attach-role-policy --role-name rcs-s3-crud-role \
    --policy-arn arn:aws:iam::123456789012:policy/rcs-crud-policy
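
To double-check the attachment before moving on, you can list the policies attached to the role (optional); the output should include rcs-crud-policy:

$ aws iam list-attached-role-policies --role-name rcs-s3-crud-role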

Where you are so far:

  ✓ Create a simple Java application

  ✓ Create an S3 bucket

  ✓ Create a customer managed policy

  ✓ Create an IAM role

  • Add a bucket policy to your S3 bucket

  • Create an EC2 instance

  • Run your Java app to verify access

Create the S3 Bucket Policy

The last set of commands creates an S3 bucket policy to restrict which principals can access the bucket’s contents. There are a lot of commands here; for a detailed discussion see here.

  • Get the Role ID.

$ aws iam get-role --role-name rcs-s3-crud-role
{
    "Role": {
        "Description": "Allows EC2 instances to call AWS services on your behalf.",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "sts:AssumeRole",
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "ec2.amazonaws.com"
                    }
                }
            ]
        },
        "MaxSessionDuration": 3600,
        "RoleId": "AROA_YOUR_ROLE_ID",        <= YOUR ROLE ID
        "CreateDate": "2018-12-04T23:07:44Z",
        "RoleName": "rcs-s3-crud-role",
        "Path": "/",
        "Arn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/rcs-s3-crud-role"
    }
}
$
  • Get your AWS user ID.

$ aws iam get-user --user-name <yourUserName>
{
    "User": {
        "UserName": "<yourUserName>",
        "PasswordLastUsed": "2018-12-04T20:32:33Z",
        "CreateDate": "2018-09-23T22:59:47Z",
        "UserId": "AIDA_YOUR_USER_ID",
        "Path": "/",
        "Arn": "arn:aws:iam::123456789012:user/<yourUserName>"
    }
}
  • Create the bucket policy. The Deny statement plus the StringNotLike condition blocks everyone except the three IDs listed: the role (the AROA_YOUR_ROLE_ID:* pattern matches sessions assumed through the role), your IAM user, and the account itself.

$ cat << EOF > rcs-bucket-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::ripcitysoftware",
        "arn:aws:s3:::ripcitysoftware/*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:userId": [
            "AROA_YOUR_ROLE_ID:*",
            "AIDA_YOUR_USER_ID",
            "YOUR_ACCOUNT_ID"
          ]
        }
      }
    }
  ]
}
EOF
  • Attach the policy to the bucket.


$ aws s3api put-bucket-policy --bucket ripcitysoftware --policy file://rcs-bucket-policy.json
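
You can confirm the policy is in place by reading it back (optional):

$ aws s3api get-bucket-policy --bucket ripcitysoftware --query Policy --output text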

Where you are so far:

  ✓ Create a simple Java application

  ✓ Create an S3 bucket

  ✓ Create a customer managed policy

  ✓ Create an IAM role

  ✓ Add a bucket policy to your S3 bucket

  • Create an EC2 instance

  • Run your Java app to verify access

EC2 instance, with no Role

The application running on EC2 needs Java. To make sure Java is installed on the new EC2 instance, use User Data to install it. Create a file named launch_script.txt with the contents below:

$ cat << EOF > launch_script.txt
#!/bin/bash
yum update -y
yum -y install java
EOF

With the launch_script.txt file in hand, create a new EC2 instance with Java installed and ready to go:

$ aws ec2 run-instances --image-id ami-01bbe152bf19d0289 --count 1 \
    --instance-type t2.nano --key-name <YOUR_KEY> \
    --security-groups <YOUR_SECURITY_GROUP> \
    --user-data file://launch_script.txt
{
    "Instances": [
        {
            "Monitoring": {
                "State": "disabled"
            },
    ...
}

Once the instance is up and running:

  1. Create an executable jar.

  2. Copy the executable jar to your EC2 instance.

  3. ssh to the EC2 instance.

  4. Run the Java application on the EC2 instance:

$ gradle build

> Task :test

BUILD SUCCESSFUL in 6s
5 actionable tasks: 3 executed, 2 up-to-date
$
$ scp -i ~/.ssh/YOUR_KEY build/libs/rcs-s3-0.0.1-SNAPSHOT.jar ec2-user@XX.XX.XX.XX:
$
$ ssh -i ~/.ssh/YOUR_KEY ec2-user@XX.XX.XX.XX
$
# on the ec2 instance, run your application
$ java -jar rcs-s3-0.0.1-SNAPSHOT.jar

.   ____          _            __ _ _
/\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/  ___)| |_)| | | | | || (_| |  ) ) ) )
'  |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot ::        (v2.1.1.RELEASE)

2018-12-14 20:46:41.094  INFO 15911 --- [           main] com.ripcitysoftware.aws.DemoApplication  : Starting DemoApplication on ip-xx-xx-xx-xx.us-west-2.compute.internal with PID 15911 (/home/ec2-user/rcs-s3-0.0.1-SNAPSHOT.jar started by ec2-user in /home/ec2-user)
...

Make sure your terminal is visible so you can see the output generated by the application. Since this EC2 instance does not have the rcs-s3-crud-role, the application will fail when you try to list the objects in the bucket or upload a file. You can test this by calling one of the endpoints in the application.

The quickest way to test your application is to call the slash (http://xx.xx.xx.xx:8080/) endpoint (make sure the security group associated with your EC2 instance also allows port 8080 access from your local computer):

# make sure both terminal sessions are visible so you can see the output
$ curl -i http://xx.xx.xx.xx:8080/
HTTP/1.1 500
Content-Type: application/json;charset=UTF-8
Transfer-Encoding: chunked
Date: Fri, 14 Dec 2018 20:51:01 GMT
Connection: close

Next, attach the rcs-s3-crud-profile instance profile to the EC2 instance and try the endpoint again:

$ aws ec2 describe-instances --filters Name=image-id,Values=ami-01bbe152bf19d0289 \
    | grep InstanceId
            "InstanceId": "i-02a2c9860a308c59a",
$ aws ec2 associate-iam-instance-profile --instance-id i-02a2c9860a308c59a \
    --iam-instance-profile Name=rcs-s3-crud-profile
#
# now run the curl command again
$ curl -i http://xx.xx.xx.xx:8080/
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Wed, 12 Dec 2018 19:38:39 GMT
Transfer-Encoding: chunked

{
    "files": [
        "test-file"
    ]
}

Where you are so far:

  ✓ Create a simple Java application

  ✓ Create an S3 bucket

  ✓ Create a customer managed policy

  ✓ Create an IAM role

  ✓ Add a bucket policy to your S3 bucket

  ✓ Create an EC2 instance

  ✓ Run your Java app to verify access

Conclusion

Wow, that was a lot of work. Everything you did here, from the command line, can be done from the AWS Console. In addition, you can do all of this using CloudFormation or another "infrastructure as code" tool. In future posts, we’ll show you how to use CloudFormation instead of the manual command line tools.

As mentioned before, not needing to store credentials locally is a great way to secure your applications and infrastructure. As a developer, understanding how AWS IAM Roles work enables you to create better, more secure applications.

Are you using IAM Roles to protect your resources? If so, I’d love to hear about what you are doing, please leave a comment below.

One last thing - Add/Remove ec2 instance policy

If you want to try adding and removing the role from the instance, you can use the disassociate-iam-instance-profile command (first run describe-iam-instance-profile-associations to get the AssociationId):

$ aws ec2 describe-iam-instance-profile-associations | grep -A2 i-02a2c9860a308c59a
            "InstanceId": "i-02a2c9860a308c59a",
            "State": "associated",
            "AssociationId": "iip-assoc-07aa2b2e3b0d635a2",
...
# use the iip-assoc ID from above to remove the role
$ aws ec2 disassociate-iam-instance-profile --association-id iip-assoc-07aa2b2e3b0d635a2
{
    "IamInstanceProfileAssociation": {
        "InstanceId": "i-02a2c9860a308c59a",
        "State": "disassociating",
...
# confirm the role has been removed
$ aws ec2 describe-iam-instance-profile-associations | grep -A2 i-02a2c9860a308c59a
# test the Java application
$ curl -i http://xx.xx.xx.xx:8080/
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Fri, 14 Dec 2018 22:47:43 GMT
Transfer-Encoding: chunked
# why didn't it fail? see below
$

What just happened? You might expect that removing the role would make the Java application fail; after all, attaching the role earlier took effect immediately. The difference is that the Java application cached the temporary credentials the role provided. If you stop and restart your Java application, you’ll see the endpoint stop working.
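
If you’d rather not restart the application to see the failure, one option is to keep a handle on the credentials provider so you can drop its cache on demand. This is a sketch, not part of the app above; it assumes you wire the client with an explicit InstanceProfileCredentialsProvider, the SDK class that backs instance-profile authentication:

import com.amazonaws.auth.InstanceProfileCredentialsProvider;

// build the client with an explicit provider so you keep a reference to it
InstanceProfileCredentialsProvider provider = InstanceProfileCredentialsProvider.getInstance();
AmazonS3 s3 = AmazonS3ClientBuilder.standard().withCredentials(provider).build();

// later, force the SDK to go back to the instance metadata service for
// fresh credentials instead of serving them from its cache
provider.refresh();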

Updates

  1. 2/15/19 - updated all formatting with move to full hugo site
