
How to enforce multi-party approval for creating Matter-compliant certificate authorities

Post Syndicated from Ram Ramani original https://aws.amazon.com/blogs/security/how-to-enforce-multi-party-approval-for-creating-matter-compliant-certificate-authorities/

Customers who build smart home devices using the Matter protocol from the Connectivity Standards Alliance (CSA) need to create and maintain digital certificates, called device attestation certificates (DACs), to allow their devices to interoperate with devices from other vendors. DACs must be issued by a Matter device attestation certificate authority (CA). The CSA mandates multi-party approval for creating Matter-compliant CAs. You can use AWS Private CA to create your Matter device attestation CA, which will store two important certificates: the product attestation authority (PAA) certificate and the product attestation intermediate (PAI) certificate. The PAA is the root CA that signs the PAI; the PAI is the intermediate CA that issues DACs. In this blog post, we will show how to implement multi-party approval for the creation of these two certificates within AWS Private CA.

The CSA allows the use of delegated service providers (DSP) to provide you with public key infrastructure (PKI) services to create your Matter device attestation CA. You can use AWS Private CA as a DSP to create a Matter device attestation CA to issue DACs.

You should carefully plan to implement and demonstrate compliance with the Matter PKI Certificate Policy (CP) requirements when you issue Matter certificates by using the CA infrastructure services provided by AWS Private CA. The Matter PKI CP is not just a technical standard; it also covers people, processes, and technology. For information about the requirements to comply with the Matter PKI CP and a reference list of acronyms, see the Matter PKI Compliance Guide. In this blog post, we address one of the Matter requirements for technical security controls: implementing multi-party approval for the creation of PAA and PAI certificates.

Note: The solution presented in this post uses AWS Systems Manager Change Manager, a capability of AWS Systems Manager, to demonstrate multi-party approval as required by the Matter CP for the creation of the PAA and PAI. The solution also uses AWS Systems Manager documents (SSM documents), which contain the code to automate the creation of the PAA and PAI certificates.

Implementing multi-party approval: Personas and IAM roles

For the process of achieving the multi-party approval required for Matter compliance, we will use the following human personas:

  • Jane Doe and Paulo Santos as the two approvers responsible for signing off on the creation of PAA and PAI.
  • Shirley Rodriguez as the persona responsible for setting up the prerequisite infrastructure, creating the change template that governs the multi-party approval process, and specifying the human personas who are authorized to approve change requests.
  • Richard Roe as the persona responsible for reviewing and approving change template changes made by Shirley Rodriguez, to verify the separation of duties.

AWS offers support for identity federation to enable federated single sign-on (SSO). This allows users to sign into the AWS Management Console or call AWS API operations by using the credentials of an IAM role. To establish a secure authentication and authorization model, we highly recommend that you map the identities of the human personas to IAM roles.

As a prerequisite, Shirley Rodriguez will create the following AWS Identity and Access Management (IAM) roles that support the multi-party approval operations:

  • TmpltReview-Role — Richard Roe will assume this role to review and approve changes to the change template that is used to run the SSM document to create the Matter CAs.
  • CreatePAA-Role and CreatePAI-Role — Clone the solution GitHub repository and create the roles by using the policies from the repository:
    • CreatePAA-Role — This role is assumed by the AWS Systems Manager service to create the PAA.
    • CreatePAI-Role — This role is assumed by the AWS Systems Manager service to create the PAI.
  • MatterCA-Admin-1 and MatterCA-Admin-2 — Jane Doe will use the MatterCA-Admin-1 role, while Paulo Santos will use the MatterCA-Admin-2 role. These individuals will serve as the two approvers for the multi-party approval process.

Note: It’s important that one person cannot approve an action by themselves. If a person is allowed to assume the MatterCA-Admin-1 role, they must not be allowed to assume the MatterCA-Admin-2 role also. If the same person can assume both roles, then that person can bypass the requirement for two different people to approve.

To create the IAM roles

  1. Create IAM roles MatterCA-Admin-1 and MatterCA-Admin-2, and attach the AWS managed policies that your organization requires for change request approvers.
  2. You should configure the trust relationship to allow Jane Doe to use the MatterCA-Admin-1 role and Paulo Santos to use the MatterCA-Admin-2 role for the multi-party approval process. This restricts Jane Doe and Paulo Santos from assuming each other’s roles. Use the following policy as a guide, and make sure to replace <AccountNumber> and <Role-Name> with your own information, depending on the federated identity model that you have chosen.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::<AccountNumber>:role/<Role-Name>"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
  3. Create the IAM role TmpltReview-Role, and attach the following policies.
    • AmazonSSMReadOnlyAccess
    • Attach the following custom inline policy to enable review and approval of the change template.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "TemplateReviewer",
                "Effect": "Allow",
                "Action": "ssm:UpdateDocumentMetadata",
                "Resource": "*"
            }
        ]
    }
  4. Modify the trust relationship to allow only Richard Roe to use the role, as shown in the following policy. Make sure to replace <AccountNumber> and <Role-Name> with your own information.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::<AccountNumber>:role/<Role-Name>"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
  5. Create the IAM role CreatePAA-Role, which will be used by the AWS Systems Manager change template to run the SSM document to create PAA.
    1. Attach the following inline policy to the CreatePAA-Role.
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "acm-pca:ImportCertificateAuthorityCertificate",
                      "acm-pca:IssueCertificate",
                      "acm-pca:CreateCertificateAuthority",
                      "acm-pca:GetCertificate",
                      "acm-pca:GetCertificateAuthorityCsr",
                      "acm-pca:DescribeCertificateAuthority"
                  ],
                  "Resource": "*"
              }
          ]
      }
    2. Modify the trust relationship for CreatePAA-Role to allow only the AWS Systems Manager service to assume this role, as shown following.
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "Service": "ssm.amazonaws.com"
                  },
                  "Action": "sts:AssumeRole",
                  "Condition": {}
              }
          ]
      }
  6. Create the IAM role CreatePAI-Role, which will be used by the change template to run the SSM document to create the PAI certificate.
    1. Attach the following policy as an inline policy on the CreatePAI-Role.
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "acm-pca:ImportCertificateAuthorityCertificate",
                      "acm-pca:CreateCertificateAuthority",
                      "acm-pca:GetCertificateAuthorityCsr",
                      "acm-pca:DescribeCertificateAuthority"
                  ],
                  "Resource": "*"
              },
              {
                  "Effect": "Allow",
                  "Action": [
                      "acm-pca:GetCertificateAuthorityCertificate",
                      "acm-pca:GetCertificate",
                      "acm-pca:IssueCertificate"
                  ],
                  "Resource": “*”
              }
          ]
      }
  7. Modify the trust relationship for CreatePAI-Role to allow only AWS Systems Manager to assume this role, as shown following.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "ssm.amazonaws.com"
                },
                "Action": "sts:AssumeRole",
                "Condition": {}
            }
        ]
    }
    

Preventive security controls recommended for this solution

We recommend that you apply the following security controls for this solution:

  • Dedicate an AWS account to this solution – It is important that the only users who can perform actions on the PAA and PAI are the users in this account. By deploying these items in a dedicated AWS account, you limit the number of users who might have elevated privileges, but don’t have cause to use those privileges here.
  • SCPs (service control policies) – The IAM policies in this solution do not prevent someone with privileges, such as an administrator, from bypassing your expected controls and approving usage of the CA. SCPs, if they are applied by using AWS Organizations, can restrict the creation of CAs (certificate authorities) exclusively to CreatePAA-Role and CreatePAI-Role.

    The following example SCP will enforce this type of restriction. Make sure to replace <AccountNumber> with your own information. With a strong SCP, even the root user of the account will not be able to perform these operations or change these security controls.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "RestrictCACreation",
                "Effect": "Deny",
                "Action": ["acm-pca:CreateCertificateAuthority"],
                "Resource": "*",
                "Condition": {
                    "StringNotLike": {
                        "aws:PrincipalARN": [
                            "arn:aws:iam::<AccountNumber>:role/CreatePAA-Role",
                            "arn:aws:iam::<AccountNumber>:role/CreatePAI-Role"
                        ]
                    }
                }
            }
        ]
     }

AWS Systems Manager configuration

Shirley Rodriguez will download the sample Systems Manager (SSM) documents CreatePAA.yaml and CreatePAI.yaml from our GitHub repository and perform the steps listed in this section. The content of these YAML files will be used in the next steps to create the SSM documents.

Create the SSM document

The first step is to create the SSM document that automates resource creation of the PAA and PAI in AWS Private CA.
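
To make the automation concrete, the following Python sketch shows the kind of AWS Private CA calls that a PAA-creation runbook performs. It is an illustration, not the actual content of the CreatePAA.yaml runbook from the repository; the subject values, fixed waits, and validity period are assumptions.

    # Illustrative sketch of the AWS Private CA calls that a PAA-creation
    # runbook automates; not the repository's CreatePAA.yaml.
    import time

    import boto3

    pca = boto3.client("acm-pca")

    # 1. Create the root CA (the PAA).
    paa_arn = pca.create_certificate_authority(
        CertificateAuthorityConfiguration={
            "KeyAlgorithm": "EC_prime256v1",
            "SigningAlgorithm": "SHA256WITHECDSA",
            "Subject": {"CommonName": "Example Matter PAA"},  # Assumed subject
        },
        CertificateAuthorityType="ROOT",
    )["CertificateAuthorityArn"]
    time.sleep(15)  # Crude wait for the CA to finish creating

    # 2. Retrieve the CSR and self-sign it with the root CA template.
    csr = pca.get_certificate_authority_csr(CertificateAuthorityArn=paa_arn)["Csr"]
    cert_arn = pca.issue_certificate(
        CertificateAuthorityArn=paa_arn,
        Csr=csr.encode(),
        SigningAlgorithm="SHA256WITHECDSA",
        TemplateArn="arn:aws:acm-pca:::template/RootCACertificate/V1",
        Validity={"Value": 10, "Type": "YEARS"},  # Assumed validity period
    )["CertificateArn"]
    time.sleep(5)  # Crude wait for the certificate to be issued

    # 3. Import the signed certificate to activate the PAA.
    cert = pca.get_certificate(
        CertificateAuthorityArn=paa_arn, CertificateArn=cert_arn
    )["Certificate"]
    pca.import_certificate_authority_certificate(
        CertificateAuthorityArn=paa_arn, Certificate=cert.encode()
    )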

To create the SSM document

  1. Open the Systems Manager console.
  2. In the left navigation pane, under Shared Resources, choose Documents.
  3. Choose the Owned by me tab, choose Create document, and from the dropdown list, choose Automation.
    Figure 1: Create the automation document

  4. Under Create automation, choose the Editor tab, and then choose Edit.
    Figure 2: Automation document editor

  5. Copy the sample automation code from the file CreatePAA.yaml that you downloaded from the GitHub repository, and paste it into the editor.
  6. For Name, enter CreatePAA, and then choose Create automation.
  7. To check that the CreatePAA document was created successfully, choose the Owned by me tab. You should see the Systems Manager document, as shown in Figure 3.
    Figure 3: Successful creation of the CreatePAA document

  8. Repeat the preceding steps for creating the PAI. Make sure that you paste the code from the file CreatePAI.yaml into the editor and enter the name CreatePAI to create the PAI CA.
  9. To check that the CreatePAI document was created successfully, choose the Owned by me tab. You should see the CreatePAI Systems Manager document, as shown in Figure 4.
    Figure 4: Successful creation of the PAA and PAI documents
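
If you prefer to register the documents programmatically rather than clicking through the console, the following boto3 sketch does the equivalent of the preceding steps. The file names match the repository; error handling is omitted for brevity.

    # Register the downloaded runbooks as SSM Automation documents.
    import boto3

    ssm = boto3.client("ssm")

    for name, path in [("CreatePAA", "CreatePAA.yaml"), ("CreatePAI", "CreatePAI.yaml")]:
        with open(path) as f:
            ssm.create_document(
                Content=f.read(),
                Name=name,
                DocumentType="Automation",
                DocumentFormat="YAML",
            )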

You’ve now created the SSM documents that contain the automation code to create the PAA and PAI certificate authorities. The next step is to create Change Manager templates, which will use the SSM documents and apply multi-party approval before the creation of the PAA and PAI.

Create the Change Manager templates

Shirley Rodriguez will next create two change templates that run the SSM documents: one for the PAA and one for the PAI.

To create the change templates

  1. Open the Systems Manager console.
  2. In the left navigation pane, under Change Management, choose Change Manager.
  3. On the Change Manager page, in the top right, choose Create template.
  4. For Name, enter CreatePAATemplate.
  5. In the Builder section, add a description (optional), and for Runbook, search and select CreatePAA. Keep the defaults for the other selections.
    Figure 5: Select the runbook CreatePAA in the change template

  6. Scroll down to the Change request approvals section and choose Add approval level. This is where you configure multi-party approval for the change template.
  7. The two approvers will be configured at separate approval levels, so for Number of approvers required at this level, choose 1 from the dropdown.
  8. Choose Add approver, choose Template specified approvers, and then select the MatterCA-Admin-1 role. Then choose Add another approval level for the second approver.
    Figure 6: Add first level approver for the template

  9. Choose Template specified approvers, and then select the MatterCA-Admin-2 role for multi-party approval. These roles can now approve the change request.
    Figure 7: Add second level approver for the template.

  10. Keep the defaults for the rest of the options, and at the bottom of the screen, choose Save and preview.
  11. On the preview screen, review the configurations, and then on the top right, choose Submit for review. This pushes the template to be reviewed by template reviewer Richard Roe. In the Templates tab, the template status shows as Pending review.
    Figure 8: Template with a status of pending review

  12. Repeat the preceding steps to create the PAI change template. Make sure to name it CreatePAITemplate, and at step 5, for Runbook, select the CreatePAI document.
    Figure 9: Both templates ready for review

You’ve successfully created two change templates, CreatePAATemplate and CreatePAITemplate, that generate a change request that contains an SSM document with automation code for building the PAA and PAI. These change requests are configured with multi-party approval before running the SSM document. However, before you can proceed with running the change template, it must undergo review and approval by the template reviewer Richard Roe.

Review and approve the Change Manager templates

First, you need to make sure that TmpltReview-Role is added as a reviewer and approver of change templates. Shirley Rodriguez will follow the steps in this section to add TmpltReview-Role as the change template reviewer.

To add the change template reviewer

  1. Follow the instructions in the Systems Manager documentation to configure the IAM role TmpltReview-Role to review and approve the change template. Figure 10 shows how this setup looks in the Systems Manager console.
    Figure 10: The template reviewer role in Settings

    Now you have TmpltReview-Role added as a reviewer. Change templates that are created or modified will now need to be reviewed and approved by this role. Richard Roe will use the role TmpltReview-Role for governance of change templates, to make sure changes made by Shirley Rodriguez are in alignment with the organization’s compliance needs for Matter.

  2. Richard Roe will follow the steps in the Systems Manager documentation for reviewing and approving change templates, to approve CreatePAATemplate and CreatePAITemplate. After the change template is approved, its status changes to Approved, and it’s ready to be run.
    Figure 11: Change template approval details

You now have the change templates CreatePAATemplate and CreatePAITemplate in approved status.

Create the PAA and PAI with multi-party approval for Matter compliance

Up to this point, these instructions have described one-time configurations of AWS Systems Manager to set up the IAM roles, SSM documents, and change templates that are required to enforce multi-party approval. Now you are ready to use these change templates to create the PAA and PAI and perform multi-party approval.

Shirley Rodriguez will generate change requests that require approval from Jane Doe and Paulo Santos. This manual approval process will then run the SSM documents to create the PAA and PAI.

Create a change request for the PAA

Perform the following steps to create a change request for the PAA.

To create a change request for the PAA

  1. Open the Systems Manager console.
  2. In the left navigation pane, choose Change Manager, and then choose Create request.
  3. Search for and select CreatePAATemplate, and then choose Next.
  4. For Name, enter CreatePAA_ChangeRequest.
  5. (Optional) For Change request information, provide additional information about the change request.
  6. For Workflow start time, choose Run the operation as soon as possible after approval to run the change immediately after the request is approved.
  7. For Change request approvals, validate that the list of First-level approvals includes the change request approvers MatterCA-Admin-1 and MatterCA-Admin-2, which you configured previously in the section Create the Change Manager templates. Then choose Next.
    Figure 12: Change request approvers

  8. For Automation assume role, select the IAM role CreatePAA-Role for creating the PAA.
    Figure 13: Change request role

  9. For Runbook parameters, enter the PAA certificate details for CommonName, Organization, VendorId, and ValidityInYears, and then choose Next.
  10. Review the change request content and then, at the bottom of the screen, choose Submit for approval. Optionally, you can set up an Amazon SNS topic to notify the approvers.
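
You can also create the same change request programmatically. The following boto3 sketch is illustrative; the runbook parameter names mirror the fields entered above, but you should confirm them (including any automation assume role parameter) against the repository's CreatePAA.yaml.

    # Illustrative sketch: create the PAA change request with boto3.
    import boto3

    ssm = boto3.client("ssm")

    ssm.start_change_request_execution(
        DocumentName="CreatePAATemplate",
        ChangeRequestName="CreatePAA_ChangeRequest",
        Runbooks=[
            {
                "DocumentName": "CreatePAA",
                "Parameters": {  # Assumed parameter names; see CreatePAA.yaml
                    "CommonName": ["Example Matter PAA"],
                    "Organization": ["Example Corp"],
                    "VendorId": ["FFF1"],
                    "ValidityInYears": ["10"],
                },
            }
        ],
    )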

You have successfully created a change request that is currently awaiting approval from Jane Doe and Paulo Santos. Let’s now move on to the approval steps.

Multi-party approval: Approve the change request for the PAA

Each of the approvers should now follow the steps in this section for approval. Jane Doe will use the IAM role MatterCA-Admin-1, while Paulo Santos will need to use the IAM role MatterCA-Admin-2.

To approve the change request for the PAA

  1. Open the Systems Manager console and do the following.
    1. In the navigation pane, choose Change Manager.
    2. Choose the Approvals tab, select the CreatePAA change request, and then choose Approve.
    Figure 14: Change request approval

    After Jane Doe and Paulo Santos each follow these steps to approve the change request, the change request will run and will complete with status “Success,” and the PAA will be created in AWS Private CA.

  2. Check that the status of the change request is Success, as shown in Figure 15.
    Figure 15: The change request ran successfully

Validate that the PAA is created in AWS Private CA

Next, you need to validate that the PAA was created successfully and copy its Amazon Resource Name (ARN) to use for PAI creation.

To validate the creation of the PAA and retrieve its ARN

  1. In the AWS Private CA console, choose the PAA CA that you created in the previous step.
  2. Copy the ARN of the PAA root CA. You will use the PAA CA ARN when you set up the PAI change request.
    Figure 16: ARN of the PAA root CA

After successfully completing these steps, you have created the certificate authority PAA by using AWS Private CA with multi-party approval. You can now proceed to the next section, where we will demonstrate how to use this PAA to issue a CA for PAI.

Create a change request for the PAI

Perform the following steps to create a change request for the PAI.

Note: Creation of a valid PAA is a prerequisite for creating the PAI in the following steps.

To create a change request for the PAI

  1. After the PAA is created successfully, you can complete the creation of the PAI by repeating the same steps that you did in the Create a change request for the PAA section, but with the following changes:
    1. At step 3, make sure that you search for and select CreatePAITemplate.
      Figure 17: CreatePAITemplate template

    2. At step 4, for Name, enter CreatePAI_ChangeRequest.
    3. At step 8, for Automation assume role, select the IAM role CreatePAI-Role.
      Figure 18: Change request IAM role selection

    4. At step 9, for Runbook parameters, enter the PAA CA ARN that you made note of earlier, along with the CommonName, Organization, VendorId, ProductId, and ValidityInYears for the PAI, and then choose Next.

Multi-party approval: Approve the change request for the PAI

Each of the approvers should now follow the steps in this section to approve the change request for the PAI. Jane Doe will need to use the IAM role MatterCA-Admin-1, and Paulo Santos will need to use the IAM role MatterCA-Admin-2.

To approve the change request for the PAI

  1. Open the Systems Manager console and do the following:
    1. In the navigation pane, choose Change Manager.
    2. Choose the Approvals tab, select the CreatePAI change request, and choose Approve.

    After both approvers (Jane Doe and Paulo Santos) approve the change request, the change request will run, and the PAI will be created inside AWS Private CA.

  2. Check that the status of the change request shows Success, as shown in Figure 19.
    Figure 19: The change requests for the PAA and PAI have run successfully

  3. In the AWS Private CA console, verify that the PAA and PAI have been created successfully, as shown in Figure 20.
    Figure 20: PAA and PAI in AWS Private CA

After successfully completing these steps, you have created the certificate authority PAI by using AWS Private CA with multi-party approval. You can now issue DAC certificates by using this PAI. Next, we will show how to validate the logs to confirm multi-party approval.

Demonstrate compliance with multi-party approval requirements of the Matter CP by using the Change Manager timeline

To keep audit records that show that multi-party approval was used to create the PAA and PAI that issue DACs, you can use the Change Manager timeline.

To retrieve the Change Manager timeline report

  1. Open the Systems Manager console.
  2. In the left navigation pane, choose Change Manager.
  3. Choose Requests, and then select either the CreatePAA_ChangeRequest or the CreatePAI_ChangeRequest change request.
  4. Select the Timeline tab. Figure 21 shows an example of a timeline with complete runbook steps for PAA creation. It also shows the two approvers, Jane Doe and Paulo Santos, approving the change request. You can use this information to demonstrate multi-party approval.
    Figure 21: Audit trail for approval and creation of the PAA

    Likewise, Figure 22 shows an example of a timeline with complete runbook steps for creating the PAI by using multi-party approval.

    Figure 22: Audit trail for approval and creation of the PAI

Conclusion

In this post, you learned how to use AWS Private CA to facilitate the creation of Matter CAs in compliance with the Matter PKI CP. By using AWS Systems Manager, you can effectively fulfill the technical security control outlined in the Matter PKI CP for implementing multi-party approval for the creation of PAA and PAI certificates.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Private Certificate Authority re:Post or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Ram Ramani

Ram is a Principal Security Solutions Architect at AWS with deep expertise in data protection and privacy. Ram is currently helping customers accelerate their Matter compliance needs using AWS services.

Pravin Nair

Pravin is a seasoned Senior Security Solution Architect focused on data protection and privacy. Specializing in encryption, infrastructure security, and privacy, he assists customers in developing secure and scalable solutions that align with their business requirements. Pravin’s expertise helps to provide optimal data protection while addressing evolving security challenges.

Lukas Rash

Lukas is a Software Engineer in the AWS Cryptography organization. He is passionate about building robust cloud services to help customers improve the security of their systems. He specializes in building software to help customers implement their public key infrastructures.

Running AWS Lambda functions on AWS Outposts using AWS IoT Greengrass

Post Syndicated from Sheila Busser original https://aws.amazon.com/blogs/compute/running-aws-lambda-functions-on-aws-outposts-using-aws-iot-greengrass/

This blog post is written by Adam Imeson, Sr. Hybrid Edge Specialist Solution Architect.

Today, AWS customers can deploy serverless applications in AWS Regions using a variety of AWS services. Customers can also use AWS Outposts to deploy fully managed AWS infrastructure at virtually any datacenter, colocation space, or on-premises facility.

AWS Outposts extends the cloud by bringing AWS services to customers’ premises to support their hybrid and edge workloads. This post will describe how to deploy Lambda functions on an Outpost using AWS IoT Greengrass.

Consider a customer who has built an application that runs in an AWS Region and depends on AWS Lambda. This customer has a business need to enter a new geographic market, but the nearest AWS Region is not close enough to meet application latency or data residency requirements. AWS Outposts can help this customer extend AWS infrastructure and services to their desired geographic region. This blog post will explain how a customer can move their Lambda-dependent application to an Outpost.

Overview

In this walkthrough you will create a Lambda function that can run on AWS IoT Greengrass and deploy it on an Outpost. This architecture results in an AWS-native Lambda function running on the Outpost.

Architecture overview - Lambda functions on AWS Outposts

Deploying Lambda functions on Outposts rack

Prerequisites: Building a VPC

To get started, build a VPC in the same Region as your Outpost. You can do this with the create VPC option in the AWS console. The workflow allows you to set up a VPC with public and private subnets, an internet gateway, and NAT gateways as necessary. Do not consume all of the available IP space in the VPC with your subnets in this step, because you will still need to create Outposts subnets after this.

Now, build a subnet on your Outpost. You can do this by selecting your Outpost in the Outposts console and choosing Create Subnet in the drop-down Actions menu in the top right.

Confirm subnet details

Choose the VPC you just created and select a CIDR range for your new subnet that doesn’t overlap with the other subnets that are already in the VPC. Once you’ve created the subnet, go into the Route tables section of the VPC console, create a new route table, associate it with your new subnet, and add a 0.0.0.0/0 route pointing at your VPC’s internet gateway. This sets the subnet up as a public subnet, which for the purposes of this post will make it easier to access the instance you are about to build for Greengrass Core. Depending on your requirements, it may make more sense to set up a private subnet on your Outpost instead. You can also add a route pointing at your Outpost’s local gateway here. Although you won’t be using the local gateway during this walkthrough, adding a route to the local gateway makes it possible to trigger your Outpost-hosted Lambda function with on-premises traffic. A scripted version of the route table steps appears after the following screenshots.

Create a new route table

Associate the route table with the new subnet

Add a 0.0.0.0/0 route pointing at your VPC’s internet gateway
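
If you script your infrastructure, the following boto3 sketch performs the same route table steps; the VPC, subnet, and internet gateway IDs are placeholders for the resources you created.

    # Create a route table, associate it with the Outpost subnet, and make
    # the subnet public by routing 0.0.0.0/0 to the internet gateway.
    import boto3

    ec2 = boto3.client("ec2")

    route_table_id = ec2.create_route_table(VpcId="vpc-0123456789abcdef0")[
        "RouteTable"
    ]["RouteTableId"]

    ec2.associate_route_table(
        RouteTableId=route_table_id, SubnetId="subnet-0123456789abcdef0"
    )

    ec2.create_route(
        RouteTableId=route_table_id,
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId="igw-0123456789abcdef0",
    )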

Setup: Launching an instance to run Greengrass Core

Create a new EC2 instance in your Outpost subnet. As long as your Outpost has capacity for your desired instance type, this operation will proceed the same way as any other EC2 instance launch. You can check your Outpost’s capacity in the Outposts console or in Amazon CloudWatch.

I used a c5.large instance running Amazon Linux 2 with 20 GiB of Amazon EBS storage for this walkthrough. You can pick a different instance size or a different operating system in accordance with your application’s needs and the AWS IoT Greengrass documentation. For the purposes of this tutorial, we assign a public IP address to the EC2 instance on creation.

Step 1: Installing the AWS IoT Greengrass Core software

Once your EC2 instance is up and running, you will need to install the AWS IoT Greengrass Core software on the instance. Follow the AWS IoT Greengrass documentation to do this. You will need to do the following:

  1. Ensure that your EC2 instance has appropriate AWS permissions to make AWS API calls. You can do this by attaching an instance profile to the instance, or by providing AWS credentials directly to the instance as environment variables, as in the Greengrass documentation.
  2. Log in to your instance.
  3. Install OpenJDK 11. For Amazon Linux 2, you can use sudo amazon-linux-extras install java-openjdk11 to do this.
  4. Create the default system user and group that runs components on the device, with
    sudo useradd --system --create-home ggc_user
    sudo groupadd --system ggc_group
  5. Edit the /etc/sudoers file with sudo visudo such that the entry for the root user looks like root ALL=(ALL:ALL) ALL
  6. Enable cgroups and enable and mount the memory and devices cgroups. In Amazon Linux 2, you can do this with the grubby utility as follows:
    sudo grubby --args="cgroup_enable=memory cgroup_memory=1 systemd.unified_cgroup_hierarchy=0" --update-kernel /boot/vmlinuz-$(uname -r)
  7. Type sudo reboot to reboot your instance with the cgroup boot parameters enabled.
  8. Log back in to your instance once it has rebooted.
  9. Use this command to download the AWS IoT Greengrass Core software to the instance:
    curl -s https://d2s8p88vqu9w66.cloudfront.net/releases/greengrass-nucleus-latest.zip > greengrass-nucleus-latest.zip
  10. Unzip the AWS IoT Greengrass Core software:
    unzip greengrass-nucleus-latest.zip -d GreengrassInstaller && rm greengrass-nucleus-latest.zip
  11. Run the following command to launch the installer. Replace each argument with appropriate values for your particular deployment, particularly the aws-region and thing-name arguments.
    sudo -E java -Droot="/greengrass/v2" -Dlog.store=FILE \
    -jar ./GreengrassInstaller/lib/Greengrass.jar \
    --aws-region region \
    --thing-name MyGreengrassCore \
    --thing-group-name MyGreengrassCoreGroup \
    --thing-policy-name GreengrassV2IoTThingPolicy \
    --tes-role-name GreengrassV2TokenExchangeRole \
    --tes-role-alias-name GreengrassCoreTokenExchangeRoleAlias \
    --component-default-user ggc_user:ggc_group \
    --provision true \
    --setup-system-service true \
    --deploy-dev-tools true
  12. You have now installed the AWS IoT Greengrass Core software on your EC2 instance. If you type sudo systemctl status greengrass.service, you should see output indicating that the service is loaded and active (running).

Step 2: Building and deploying a Lambda function

Now build a Lambda function and deploy it to the new Greengrass Core instance. You can find example local Lambda functions in the aws-greengrass-lambda-functions GitHub repository. This example will use the Hello World Python 3 function from that repo.
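
A Hello World function of this kind can be as simple as the following Python sketch. This is an illustration rather than the exact code from the repository; a function that publishes MQTT messages from the core device would also need the AWS IoT Greengrass SDK, as noted in the deployment-package step below.

    # Minimal Lambda handler for illustration: logs a greeting every time
    # the function is invoked on the Greengrass core device.
    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    def lambda_handler(event, context):
        logger.info("Hello from the Greengrass core device!")
        return {"status": "success"}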

  1. Create the Lambda function. Go to the Lambda console, choose Create function, and select the Python 3.8 runtime.

  2. Choose Create function at the bottom of the page. Once your new function has been created, copy the code from the Hello World Python 3 example into your function.

  3. Choose Deploy to deploy your new function’s code.
  4. In the top right, choose Actions and select Publish new version. For this particular function, you would need to create a deployment package with the AWS IoT Greengrass SDK for the function to work on the device. I’ve omitted this step for brevity as it is not a main focus of this post. Please reference the Lambda documentation on deployment packages and the Python-specific deployment package docs if you want to pursue this option.

  5. Go to the AWS IoT Greengrass console and choose Components in the left-side pop-in menu.
  6. On the Components page, choose Create component, and then Import Lambda function. If you prefer to do this programmatically, see the relevant AWS IoT Greengrass documentation or AWS CloudFormation documentation.
  7. Choose your new Lambda function from the drop-down.

Create component

  8. Scroll to the bottom and choose Create component.
  9. Go to the Core devices menu in the left-side nav bar and select your Greengrass Core device. This is the Greengrass Core EC2 instance you set up earlier. Make a note of the core device’s name.

  10. Use the left-side nav bar to go to the Deployments menu. Choose Create to create a new deployment, which will place your Lambda function on your Outpost-hosted core device.
  11. Give the deployment a name and select Core device, providing the name of your core device. Choose Next.

  12. Select your Lambda function and choose Next.

  13. Choose Next again on both the Configure components and Configure advanced settings pages. On the last page, choose Deploy.

You should see a green message at the top of the screen indicating that your configuration is now being deployed.

Clean up

  1. Delete the Lambda function you created.
  2. Terminate the Greengrass Core EC2 instance.
  3. Delete the VPC.

Conclusion

Many customers use AWS Outposts to expand applications into new geographies. Some customers want to run Lambda-based applications on Outposts. This blog post shows how to use AWS IoT Greengrass to build Lambda functions which run locally on Outposts.

To learn more about Outposts, please contact your AWS representative and visit the Outposts homepage and documentation.

Implement OAuth 2.0 device grant flow by using Amazon Cognito and AWS Lambda

Post Syndicated from Jeff Lombardo original https://aws.amazon.com/blogs/security/implement-oauth-2-0-device-grant-flow-by-using-amazon-cognito-and-aws-lambda/

In this blog post, you’ll learn how to implement the OAuth 2.0 device authorization grant flow for Amazon Cognito by using AWS Lambda and Amazon DynamoDB.

When you implement the OAuth 2.0 authorization framework (RFC 6749) for internet-connected devices with limited input capabilities or that lack a user-friendly browser—such as wearables, smart assistants, video-streaming devices, smart-home automation, and health or medical devices—you should consider using the OAuth 2.0 device authorization grant (RFC 8628). This authorization flow makes it possible for the device user to review the authorization request on a secondary device, such as a smartphone, that has more advanced input and browser capabilities. By using this flow, you can work around the limits of the authorization code grant flow with Proof Key for Code Exchange (PKCE), as defined in the OpenID Connect Core specification. This will help you to avoid scenarios such as:

  • Forcing end users to define a dedicated application password or use an on-screen keyboard with a remote control
  • Degrading the security posture of the end users by exposing their credentials to the client application or external observers

One common example of this type of scenario is a TV HDMI streaming device where, to be able to consume videos, the user must slowly select each letter of their user name and password with the remote control, which exposes these values to other people in the room during the operation.

Solution overview

The OAuth 2.0 device authorization grant (RFC 8628) is an IETF standard that enables Internet of Things (IoT) devices to initiate a unique transaction that authenticated end users can securely confirm through their native browsers. After the user authorizes the transaction, the solution will issue a delegated OAuth 2.0 access token that represents the end user to the requesting device through a back-channel call, as shown in Figure 1.
 

Figure 1: The device grant flow implemented in this solution

The workflow is as follows:

  1. An unauthenticated user requests service from the device.
  2. The device requests a pair of random codes (one for the device and one for the user) by authenticating with the client ID and client secret.
  3. The Lambda function creates an authorization request that stores the device code, user code, scope, and requestor’s client ID.
  4. The device provides the user code to the user.
  5. The user enters their user code on an authenticated web page to authorize the client application.
  6. The user is redirected to the Amazon Cognito user pool /authorize endpoint to request an authorization code.
  7. The user is returned to the Lambda function /callback endpoint with an authorization code.
  8. The Lambda function stores the authorization code in the authorization request.
  9. The device uses the device code to regularly check the status of the authorization request. After the authorization request is approved, the device uses the device code to retrieve a set of JSON web tokens from the Lambda function.
  10. In this case, the Lambda function impersonates the device to the Amazon Cognito user pool /token endpoint by using the authorization code that is stored in the authorization request, and returns the JSON web tokens to the device.

To achieve this flow, this blog post provides a solution that is composed of:

  • An AWS Lambda function with three additional endpoints:
    • The /token endpoint, which will handle client application requests such as generation of codes, the authorization request status check, and retrieval of the JSON web tokens.
    • The /device endpoint, which will handle user requests such as delivering the UI for approval or denial of the authorization request, or retrieving an authorization code.
    • The /callback endpoint, which will handle the reception of the authorization code associated with the user who is approving or denying the authorization request.
  • An Amazon Cognito user pool with a test user and an app client that represents the simulated device.
  • Finally, an Amazon DynamoDB table to store the state of all the processed authorization requests.

Implement the solution

The implementation of this solution requires three steps:

  1. Define the public fully qualified domain name (FQDN) for the Application Load Balancer public endpoint and associate an X.509 certificate to the FQDN
  2. Deploy the provided AWS CloudFormation template
  3. Configure the DNS to point to the Application Load Balancer public endpoint for the public FQDN

Step 1: Choose a DNS name and create an SSL certificate

Your Lambda function endpoints must be publicly resolvable when they are exposed by the Application Load Balancer through an HTTPS/443 listener.

To configure the Application Load Balancer component

  1. Choose an FQDN in a DNS zone that you own.
  2. Associate an X.509 certificate and private key to the FQDN by requesting a public certificate in AWS Certificate Manager (ACM) or by importing a certificate that you own into ACM.
  3. After you have the certificate in ACM, navigate to the Certificates page in the ACM console.
  4. Choose the right arrow (►) icon next to your certificate to show the certificate details.
     
    Figure 2: Locating the certificate in ACM

  5. Copy the Amazon Resource Name (ARN) of the certificate and save it in a text file.
     
    Figure 3: Locating the certificate ARN in ACM

Step 2: Deploy the solution by using a CloudFormation template

To configure this solution, you’ll need to deploy the solution CloudFormation template.

Before you deploy the CloudFormation template, you can view it in its GitHub repository.

To deploy the CloudFormation template

  1. Choose the following Launch Stack button to launch a CloudFormation stack in your account.
    Select the Launch Stack button to launch the template

    Note: The stack will launch in the N. Virginia (us-east-1) Region. To deploy this solution into other AWS Regions, download the solution’s CloudFormation template, modify it, and deploy it to the selected Region.

  2. During the stack configuration, provide the following information:
    • A name for the stack.
    • The ARN of the certificate that you created or imported in AWS Certificate Manager.
    • A valid email address that you own. The initial password for the Amazon Cognito test user will be sent to this address.
    • The FQDN that you chose earlier, and that is associated to the certificate that you created or imported in AWS Certificate Manager.
    Figure 4: Configure the CloudFormation stack

  3. After the stack is configured, choose Next, and then choose Next again. On the Review page, select the check box that authorizes CloudFormation to create AWS Identity and Access Management (IAM) resources for the stack.
     
    Figure 5: Authorize CloudFormation to create IAM resources

  4. Choose Create stack to deploy the stack. The deployment will take several minutes. When the status says CREATE_COMPLETE, the deployment is complete.

Step 3: Finalize the configuration

After the stack is set up, you must finalize the configuration by creating a DNS CNAME entry in the DNS zone you own that points to the Application Load Balancer DNS name.

To create the DNS CNAME entry

  1. In the CloudFormation console, on the Stacks page, locate your stack and choose it.
     
    Figure 6: Locating the stack in CloudFormation

  2. Choose the Outputs tab.
  3. Copy the value for the key ALBCNAMEForDNSConfiguration.
     
    Figure 7: The ALB CNAME output in CloudFormation

  4. Configure a CNAME DNS entry into your DNS hosted zone based on this value. For more information on how to create a CNAME entry to the Application Load Balancer in a DNS zone, see Creating records by using the Amazon Route 53 console.
  5. Note the other values in the Output tab, which you will use in the next section of this post.

    • DeviceCognitoClientClientID – The app client ID, to be used by the simulated device to interact with the authorization server
    • DeviceCognitoClientClientSecret – The app client secret, to be used by the simulated device to interact with the authorization server
    • TestEndPointForDevice – The HTTPS endpoint that the simulated device will use to make its requests
    • TestEndPointForUser – The HTTPS endpoint that the user will use to make their requests
    • UserPassword – The password for the Amazon Cognito test user
    • UserUserName – The user name for the Amazon Cognito test user

Evaluate the solution

Now that you’ve deployed and configured the solution, you can initiate the OAuth 2.0 device code grant flow.

Until you implement your own device logic, you can perform all of the device calls by using the curl library, a Postman client, or any HTTP request library or SDK that is available in the client application coding language.

All of the following device HTTPS requests are made with the assumption that the device is a private OAuth 2.0 client. Therefore, an HTTP Authorization Basic header will be present and formed with a base64-encoded Client ID:Client Secret value.

You can retrieve the URI of the endpoints, the client ID, and the client secret from the CloudFormation Output table for the deployed stack, as described in the previous section.

Initialize the flow from the client application

The solution in this blog post lets you decide how the user will ask the device to start the authorization request and how the user will be presented with the user code and URI in order to verify the request. However, you can emulate the device behavior by generating the following HTTPS POST request to the Application Load Balancer–protected Lambda function /token endpoint with the appropriate HTTP Authorization header. The Authorization header is composed of:

  • The prefix Basic, describing the type of Authorization header
  • A space character as separator
  • The base64 encoding of the concatenation of:
    • The client ID
    • The colon character as a separator
    • The client secret
     POST /token?client_id=AIDACKCEVSQ6C2EXAMPLE HTTP/1.1
     User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
     Host: <FQDN of the ALB protected Lambda function>
     Accept: */*
     Accept-Encoding: gzip, deflate
     Connection: Keep-Alive
     Authorization: Basic QUlEQUNLQ0VWUwJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY VORy9iUHhSZmlDWUVYQU1QTEVLRVkg
    

The following JSON message will be returned to the client application.

HTTP/1.1 200 OK
Server: awselb/2.0
Date: Tue, 06 Apr 2021 19:57:31 GMT
Content-Type: application/json
Content-Length: 33
Connection: keep-alive
cache-control: no-store
{
    "device_code": "APKAEIBAERJR2EXAMPLE",
    "user_code": "ANPAJ2UCCR6DPCEXAMPLE",
    "verification_uri": "https://<FQDN of the ALB protected Lambda function>/device",
    "verification_uri_complete":"https://<FQDN of the ALB protected Lambda function>/device?code=ANPAJ2UCCR6DPCEXAMPLE&authorize=true",
    "interval": <Echo of POLLING_INTERVAL environment variable>,
    "expires_in": <Echo of CODE_EXPIRATION environment variable>
}
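
If you'd rather script the device calls than type them by hand, the following Python sketch makes the same initial request by using the requests library. It is an illustration, not the solution's official client; the placeholders come from the CloudFormation stack outputs described earlier.

    # Emulate the device's initial /token call. requests builds the
    # base64-encoded Basic Authorization header from the auth tuple.
    import requests

    DEVICE_ENDPOINT = "https://<FQDN of the ALB protected Lambda function>/token"
    CLIENT_ID = "<DeviceCognitoClientClientID>"
    CLIENT_SECRET = "<DeviceCognitoClientClientSecret>"

    response = requests.post(
        DEVICE_ENDPOINT,
        params={"client_id": CLIENT_ID},
        auth=(CLIENT_ID, CLIENT_SECRET),
    )
    response.raise_for_status()
    codes = response.json()
    print("User code to display:", codes["user_code"])
    print("Verification URI:", codes["verification_uri"])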

Check the status of the authorization request from the client application

You can emulate the process where the client app regularly checks for the authorization request status by using the following HTTPS POST request to the Application Load Balancer–protected Lambda function /token endpoint. The request should have the same HTTP Authorization header that was defined in the previous section.

POST /token?client_id=AIDACKCEVSQ6C2EXAMPLE&device_code=APKAEIBAERJR2EXAMPLE&grant_type=urn:ietf:params:oauth:grant-type:device_code HTTP/1.1
 User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
 Host: <FQDN of the ALB protected Lambda function>
 Accept: */*
 Accept-Encoding: gzip, deflate
 Connection: Keep-Alive
 Authorization: Basic QUlEQUNLQ0VWUwJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY VORy9iUHhSZmlDWUVYQU1QTEVLRVkg

Until the authorization request is approved, the client application will receive an error message that includes the reason for the error: authorization_pending if the request is not yet authorized, slow_down if the polling is too frequent, or expired if the maximum lifetime of the code has been reached. The following example shows the authorization_pending error message.

HTTP/1.1 400 Bad Request
Server: awselb/2.0
Date: Tue, 06 Apr 2021 20:57:31 GMT
Content-Type: application/json
Content-Length: 33
Connection: keep-alive
cache-control: no-store
{
"error":"authorization_pending"
}
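
Continuing the previous Python sketch, a device-side polling loop can honor the server's interval value and back off on slow_down errors, as RFC 8628 recommends. Again, this is illustrative rather than the solution's official client code.

    # Poll the /token endpoint until the user approves or the code expires.
    import time

    interval = codes.get("interval", 5)
    token_set = None
    while token_set is None:
        time.sleep(interval)
        poll = requests.post(
            DEVICE_ENDPOINT,
            params={
                "client_id": CLIENT_ID,
                "device_code": codes["device_code"],
                "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            },
            auth=(CLIENT_ID, CLIENT_SECRET),
        )
        body = poll.json()
        if poll.status_code == 200:
            token_set = body  # The tokens were issued.
        elif body.get("error") == "authorization_pending":
            continue  # Not approved yet; keep waiting.
        elif body.get("error") == "slow_down":
            interval += 5  # Polling too frequently; back off.
        else:
            raise RuntimeError("Device flow failed: " + body.get("error", "unknown"))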

Approve the authorization request with the user code

Next, you can approve the authorization request with the user code. To act as the user, you need to open a browser and navigate to the verification_uri that was provided by the client application.

If you don’t have a session with the Amazon Cognito user pool, you will be required to sign in.

Note: Remember that the initial password was sent to the email address you provided when you deployed the CloudFormation stack.

If you used the initial password, you’ll be asked to change it. Make sure to respect the password policy when you set a new password. After you’re authenticated, you’ll be presented with an authorization page, as shown in Figure 8.
 

Figure 8: The user UI for approving or denying the authorization request

Fill in the user code that was provided by the client application, as in the previous step, and then choose Authorize.

When the operation is successful, you’ll see a message similar to the one in Figure 9.
 

Figure 9: The “Success” message when the authorization request has been approved

Finalize the flow from the client app

After the request has been approved, you can emulate the final client app check for the authorization request status by using the following HTTPS POST request to the Application Load Balancer–protected Lambda function /token endpoint. The request should have the same HTTP Authorization header that was defined in the previous section.

POST /token?client_id=AIDACKCEVSQ6C2EXAMPLE&device_code=APKAEIBAERJR2EXAMPLE&grant_type=urn:ietf:params:oauth:grant-type:device_code HTTP/1.1
 User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
 Host: <FQDN of the ALB protected Lambda function>
 Accept: */*
 Accept-Encoding: gzip, deflate
 Connection: Keep-Alive
 Authorization: Basic QUlEQUNLQ0VWUwJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY VORy9iUHhSZmlDWUVYQU1QTEVLRVkg

The JSON web token set will then be returned to the client application, as follows.

HTTP/1.1 200 OK
Server: awselb/2.0
Date: Tue, 06 Apr 2021 21:41:50 GMT
Content-Type: application/json
Content-Length: 3501
Connection: keep-alive
cache-control: no-store
{
"access_token":"eyJrEXAMPLEHEADER2In0.eyJznvbEXAMPLEKEY6IjIcyJ9.eYEs-zaPdEXAMPLESIGCPltw",
"refresh_token":"eyJjdEXAMPLEHEADERifQ. AdBTvHIAPKAEIBAERJR2EXAMPLELq -co.pjEXAMPLESIGpw",
"expires_in":3600
}

The client application can now consume resources on behalf of the user, thanks to the access token, and can refresh the access token autonomously, thanks to the refresh token.

Going further with this solution

This project is delivered with a default configuration that can be extended to support additional security capabilities or adapted to your end users’ context.

Extending security capabilities

Through this solution, you can:

  • Use an AWS KMS key to:
    • Encrypt the data in the database
    • Protect the configuration in the AWS Lambda function
  • Use AWS Secrets Manager to:
    • Securely store sensitive information, such as the Cognito application client’s credentials
    • Enforce rotation of the Cognito application client’s credentials
  • Implement additional AWS Lambda code to enforce data integrity on changes
  • Activate AWS WAF web ACLs to protect your endpoints against attacks

Customizing the end-user experience

The following list describes the variables you can work with.

  • CODE_EXPIRATION – Lifetime, in seconds, of the codes generated. Default: 1800.
  • DEVICE_CODE_FORMAT – Format for the device code, expressed as a string where # represents numbers, a represents lowercase letters, A represents uppercase letters, and ! represents special characters. Default: #aA.
  • DEVICE_CODE_LENGTH – Length of the device code. Default: 64.
  • POLLING_INTERVAL – Minimum time, in seconds, between two polling events from the client application. Default: 5.
  • USER_CODE_FORMAT – Format for the user code, expressed as a string where # represents numbers, a represents lowercase letters, b represents lowercase letters that aren’t vowels, A represents uppercase letters, B represents uppercase letters that aren’t vowels, and ! represents special characters. Default: #B.
  • USER_CODE_LENGTH – Length of the user code. Default: 8.
  • RESULT_TOKEN_SET – What should be returned in the token set to the client application, expressed as a string that includes only ID, ACCESS, and REFRESH values separated with a + symbol. Default: ACCESS+REFRESH.

To change the values of the Lambda function variables

  1. In the Lambda console, navigate to the Functions page.
  2. Select the DeviceGrant-token function.
     
    Figure 10: AWS Lambda console—Function selection

  3. Choose the Configuration tab.
     
    Figure 11: AWS Lambda function—Configuration tab

  4. Select the Environment variables tab, and then choose Edit to change the values for the variables.
     
    Figure 12: AWS Lambda Function—Environment variables tab

  5. Generate new codes as the device and see how the experience changes based on how you’ve set the environment variables.
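
If you manage these settings with scripts instead of the console, the following boto3 sketch updates one variable without dropping the others; the function name DeviceGrant-token matches the console steps above, and the variable change is only an example.

    # update_function_configuration replaces the whole Variables map, so
    # merge your change into the existing values first.
    import boto3

    lambda_client = boto3.client("lambda")

    config = lambda_client.get_function_configuration(FunctionName="DeviceGrant-token")
    variables = config["Environment"]["Variables"]
    variables["USER_CODE_LENGTH"] = "10"  # Example change

    lambda_client.update_function_configuration(
        FunctionName="DeviceGrant-token",
        Environment={"Variables": variables},
    )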

Conclusion

Although your business and security requirements can be more complex than the example shown in this post, this blog post will give you a good way to bootstrap your own implementation of the Device Grant Flow (RFC 8628) by using Amazon Cognito, AWS Lambda, and Amazon DynamoDB.

Your end users can now benefit from the same level of security and the same experience as they have when they enroll their identity in their mobile applications, including the following features:

  • Credentials will be provided through a full-featured application on the user’s mobile device or their computer
  • Credentials will be checked against the source of authority only
  • The authentication experience will match the typical authentication process chosen by the end user
  • Upon consent by the end user, IoT devices will be provided with end-user delegated dynamic credentials that are bound to the exact scope of tasks for that device

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or reach out through the post’s GitHub repository.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jeff Lombardo

Jeff is a solutions architect expert in IAM, Application Security, and Data Protection. Through 16 years as a security consultant for enterprises of all sizes and business verticals, he delivered innovative solutions with respect to standards and governance frameworks. Today at AWS, he helps organizations enforce best practices and defense in depth for secure cloud adoption.

Building a Controlled Environment Agriculture Platform

Post Syndicated from Ashu Joshi original https://aws.amazon.com/blogs/architecture/building-a-controlled-environment-agriculture-platform/

This post was co-written by Michael Wirig, Software Engineering Manager at Grōv Technologies.

A substantial percentage of the world’s habitable land is used for livestock farming for dairy and meat production. The dairy industry has leveraged technology to gain insights that have led to drastic improvements and are continuing to accelerate. A gallon of milk in 2017 involved 30% less water, 21% less land, a 19% smaller carbon footprint, and 20% less manure than it did in 2007 (US Dairy, 2019). By focusing on smarter water usage and sustainable land usage, livestock farming can grow to provide sustainable and nutrient-dense food for consumers and livestock alike.

Grōv Technologies (Grōv) has pioneered the Olympus Tower Farm, a fully automated Controlled Environment Agriculture (CEA) system. Unique among vertical farming startups, Grōv is growing cattle feed to improve the sustainable use of land for livestock farming while increasing the economic margins for dairy and beef producers.

The challenges of CEA

The set of growing conditions for a CEA is called a “recipe,” which is a combination of ingredients like temperature, humidity, light, carbon dioxide levels, and water. The optimal recipe is dynamic and is sensitive to its ingredients. Crops must be monitored in near-real time, and CEAs should be able to self-correct in order to maintain the recipe (a minimal sketch of how such a recipe might be modeled appears after the lists below). Building a system with these capabilities requires answers to the following questions:

  • Which parameters need to be measured for indoor cattle feed production?
  • Which sensors offer the right accuracy and price trade-offs at scale?
  • Where do you place the sensors to ensure a consistent crop?
  • How do you correlate the data from sensors to the nutrient value?

To progress from a passively monitored system to a self-correcting, autonomous one, the CEA platform also needs to address:

  • How to maintain optimum crop conditions
  • How the system can learn and adapt to new seed varieties
  • How to communicate key business drivers such as yield and dry matter percentage
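
To make the “recipe” idea concrete, here is a minimal, hypothetical sketch of how such a target specification and a self-correction check might be modeled. The parameter names, targets, and tolerances are illustrative assumptions, not Grōv's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Ingredient:
    """Target value and tolerance for one recipe ingredient (illustrative)."""
    target: float
    tolerance: float

    def correction(self, reading: float) -> float:
        """Return the adjustment needed to bring a reading back on target."""
        if abs(reading - self.target) <= self.tolerance:
            return 0.0
        return self.target - reading

# A hypothetical recipe: temperature (°C), humidity (%), and CO2 (ppm).
recipe = {
    "temperature_c": Ingredient(target=21.0, tolerance=0.5),
    "humidity_pct": Ingredient(target=65.0, tolerance=3.0),
    "co2_ppm": Ingredient(target=900.0, tolerance=50.0),
}

# Simulated near-real-time sensor readings.
readings = {"temperature_c": 22.1, "humidity_pct": 66.0, "co2_ppm": 820.0}

for name, reading in readings.items():
    adjustment = recipe[name].correction(reading)
    if adjustment:
        print(f"{name}: adjust by {adjustment:+.1f}")  # self-correcting action
```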

Grōv partnered with AWS Professional Services (AWS ProServe) to build a digital CEA platform addressing the challenges posed above.

Figure: Olympus Tower (Grōv Technologies)

Tower automation and edge platform

The Olympus Tower is instrumented for measuring recipe ingredients by combining the mechanical, electrical, and domain expertise of the Grōv team with the IoT edge and sensor expertise of the AWS ProServe team. The teams identified a primary set of features such as height, weight, and evenness of the growth to be measured at multiple stages within the Tower. Sensors were also added to measure secondary features such as water level, water pH, temperature, humidity, and carbon dioxide.

The teams designed and developed a purpose-built modular and industrial sensor station. Each sensor station has sensors for direct measurement of the features identified. The sensor stations are extended to support indirect measurement of features using a combination of Computer Vision and Machine Learning (CV/ML).

The trays with the growing cattle feed circulate through the Olympus Tower. A growth cycle starts on a tray with seeding, circulates through the tower over the cycle, and returns to the starting position to be harvested. The sensor station at the seeding location on the Olympus Tower tags each new growth cycle in a tray with a unique “Grow ID.” As trays pass by, each sensor station in the Tower collects the feature data. The firmware, jointly developed for the sensor station, uses the AWS IoT SDK to stream the sensor data along with the Grow ID and metadata that’s specific to the sensor station. This information is sent every five minutes to an on-site edge gateway powered by AWS IoT Greengrass. Dedicated AWS Lambda functions manage the lifecycle of the Grow IDs and the sensor data processing on the edge.
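
As a rough illustration of what a sensor station's publishing step might look like, here is a sketch using the AWS IoT Device SDK for Python. The endpoint, topic name, credentials, and payload fields are assumptions for illustration, not Grōv's firmware.

```python
import json
import time
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

# Hypothetical identifiers; replace with your station's values.
client = AWSIoTMQTTClient("sensor-station-07")
client.configureEndpoint("greengrass-core.local", 8883)  # on-site Greengrass core (assumed)
client.configureCredentials("rootCA.pem", "station.key", "station.cert")
client.connect()

while True:
    payload = {
        "grow_id": "GROW-2021-000123",      # tags the current growth cycle
        "station_id": "sensor-station-07",  # station-specific metadata
        "timestamp": int(time.time()),
        "height_mm": 42.5,                  # example direct measurements
        "weight_g": 310.2,
        "water_ph": 6.1,
    }
    # Publish with QoS 1; the topic structure is an assumption.
    client.publish("grov/tower/sensor-station-07/data", json.dumps(payload), 1)
    time.sleep(300)  # the post notes data is sent every five minutes
```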

The Grōv team developed AWS IoT Greengrass Lambda functions running at the edge to ingest critical metrics from the operational automation software running the Olympus Towers. This makes it possible not just to monitor operational efficiency, but also to provide the hooks that control the feedback loop.

These two sources of data were augmented with a third: sensor stations installed at the building or site level capture environmental data such as weather and the energy consumption of the Towers.

All three sources of data are streamed to AWS IoT Greengrass and are processed by AWS Lambda functions. The edge software also fuses and correlates all categories of data. This gives the Grōv team two major capabilities: real-time operational control at the edge, and enriched data streamed into the cloud.

Figure: Grōv Technologies architecture

Cloud pipeline/platform: analytics and visualization

Data is streamed to AWS IoT Core via AWS IoT Greengrass, where AWS IoT rules route the ingested data to Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB. The data pipeline also includes Amazon Kinesis Data Streams for batching and additional processing of the incoming data.
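
As a hedged sketch, an AWS IoT rule that routes this telemetry into DynamoDB and S3 could be created with boto3 roughly as follows. The rule name, topic filter, table, bucket, and role ARN are all assumptions.

```python
import boto3

iot = boto3.client("iot")

iot.create_topic_rule(
    ruleName="GrovSensorIngest",
    topicRulePayload={
        "sql": "SELECT * FROM 'grov/tower/+/data'",  # hypothetical topic filter
        "actions": [
            {
                # Write each message as an item in a DynamoDB table.
                "dynamoDBv2": {
                    "roleArn": "arn:aws:iam::123456789012:role/GrovIoTRuleRole",
                    "putItem": {"tableName": "GrovSensorData"},
                }
            },
            {
                # Persist the raw message to S3 for the data lake.
                "s3": {
                    "roleArn": "arn:aws:iam::123456789012:role/GrovIoTRuleRole",
                    "bucketName": "grov-sensor-data",
                    "key": "${topic()}/${timestamp()}.json",  # substitution templates
                }
            },
        ],
        "ruleDisabled": False,
    },
)
```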

A ReactJS-based dashboard application, powered by Amazon API Gateway and AWS Lambda functions, reports relevant metrics such as daily yield and machine uptime.

A data pipeline is deployed to analyze data using Amazon QuickSight. AWS Glue creates a dataset from the data stored in Amazon S3, and Amazon Athena queries the dataset to make it available to Amazon QuickSight. This gives the extended Grōv team of research scientists the ability to perform what-if analyses on the data coming in from the Tower systems, beyond what is available in the ReactJS-based dashboard.
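
Such what-if analyses could be backed by ad hoc Athena queries over the Glue-cataloged S3 data; here is a minimal sketch, in which the database, table, column names, and output bucket are assumptions.

```python
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString=(
        "SELECT grow_id, AVG(height_mm) AS avg_height "
        "FROM grov_lake.sensor_readings "        # hypothetical Glue database.table
        "WHERE water_ph BETWEEN 5.8 AND 6.4 "
        "GROUP BY grow_id"
    ),
    QueryExecutionContext={"Database": "grov_lake"},
    ResultConfiguration={"OutputLocation": "s3://grov-athena-results/"},
)
print(response["QueryExecutionId"])  # poll this ID for results
```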

Figure: Grōv Technologies data pipeline

Completing the data-driven loop

With the data collected from all sources and stored in a data lake architecture, the Grōv CEA platform has established a strong foundation for harnessing insights and delivering customer outcomes using machine learning.

The integrated and fused data from the edge (sourced from the Olympus Tower instrumentation, the Olympus automation software, and the site-level sensors) is correlated with lab analysis performed by the Grōv Research Center (GRC). Harvest samples are routinely collected and sent to the lab, which performs wet chemistry and microbiological analysis. The lab results for each sampled tray are associated with the corresponding sensor data by Grow ID. This serves as a mechanism for labeling the recipe data with the parameters that matter to dairy and beef producers: dry matter percentage, micro- and macronutrients, and the presence of mycotoxins.
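
Conceptually, that labeling step is a join on Grow ID. A hedged pandas sketch follows; the column names and values are illustrative assumptions.

```python
import pandas as pd

# Hypothetical extracts: sensor features per growth cycle, lab results per sampled tray.
sensor_features = pd.DataFrame({
    "grow_id": ["GROW-1", "GROW-2"],
    "avg_height_mm": [41.8, 39.2],
    "avg_water_ph": [6.1, 6.3],
})
lab_results = pd.DataFrame({
    "grow_id": ["GROW-1", "GROW-2"],
    "dry_matter_pct": [17.5, 16.2],
    "mycotoxins_ppb": [0.0, 1.2],
})

# Correlate recipe/sensor data with lab-measured nutrition by Grow ID.
labeled = sensor_features.merge(lab_results, on="grow_id", how="inner")
print(labeled)
```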

Grōv has chosen Amazon SageMaker to build a machine learning pipeline on its comprehensive dataset, which will enable fine-tuning of the growing protocols in near real time. The historical data collection also unlocks future machine learning use cases, such as detecting anomalous sensor readings and monitoring sensor health.
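
For the anomalous-sensor-reading use case, one plausible starting point is SageMaker's built-in Random Cut Forest algorithm. The sketch below is an assumption about approach, not Grōv's actual pipeline; the role ARN, instance type, and training data are placeholders.

```python
import numpy as np
from sagemaker import RandomCutForest

# Hypothetical execution role and training data (one row per sensor reading window).
role = "arn:aws:iam::123456789012:role/GrovSageMakerRole"
readings = np.random.rand(1000, 5).astype("float32")  # placeholder sensor features

rcf = RandomCutForest(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    num_samples_per_tree=256,
    num_trees=50,
)

# Train on historical readings; the fitted model scores new readings for anomalies.
rcf.fit(rcf.record_set(readings))
```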

Because the solution is flexible, the Grōv team plans to integrate data from animal studies on their health and feed efficiency into the CEA platform. Machine learning on the data from animal studies will enhance the tuning of recipe ingredients that impact the animals’ health. This will give the farmer an unprecedented view of the impact of feed nutrition on the end product and consumer.

Conclusion

Grōv Technologies and AWS ProServe have built a strong foundation: an extensible and scalable architecture for a CEA platform that will nourish animals for better health and yield, produce healthier foods, and enable continued research into dairy production, rumination, and animal health to empower sustainable farming practices.