Tag Archives: AWS

Take Command Summit: Take Breaches from Inevitable to Preventable on May 21

Post Syndicated from Rapid7 original https://blog.rapid7.com/2024/04/22/take-command-summit-take-breaches-from-inevitable-to-preventable-on-may-21/


Registration is now open for Take Command, a day-long virtual summit in partnership with AWS. You do not want to miss it. You’ll get new attack intelligence, insight into AI disruption, transparent MDR partnerships, and more.

In 2024, adversaries are using AI and new techniques, working in gangs with nation-state budgets. But it’s “inevitable” they’ll succeed? Really?

Before any talk of surrender, please join us at Take Command. We’ve packed the day with information and insights you can take back to your team and use immediately.

You’ll hear from Chief Scientist Raj Samani, our own CISO Jaya Baloo, global security leaders, hands-on practitioners, and Rapid7 Labs leaders like Christiaan Beek and Caitlin Condon. You’ll get a first look at new, emergent research, trends, and intelligence from the curators of Metasploit and our renowned open source communities.

You’ll leave with actionable strategies to safeguard against the newest ransomware, state-sponsored TTPs, and marquee vulnerabilities.

Can’t make the entire day? Check out the agenda, see what fits

The summit kicks off with back-to-back keynotes. First, “Know Your Adversary: Breaking Down the 2024 Attack Intelligence Report” and “The State of Security 2024.”

You’ll get an insider view of Rapid7’s MDR SOC. Sessions range from “Building Defenses Through AI” to “Unlocking Success: Strategies for Measuring Team Performance” to a big favorite “Before, During, & After Ransomware Attacks.” Though no one really talks about it, there’s a lengthy “before” period, and new, good things you can do to frustrate the bad guys.

Take Command will offer strategies on building cybersecurity culture (yes, it’s difficult with humans). And, of course, preparing for the Securities & Exchange Commission’s Cybersecurity Disclosure Rules. You’ll hear from Sabeen Malik, VP, Global Government Affairs and Public Policy; Kyra Ayo Caros, Director, Corporate Securities & Compliance; and Harley L. Geiger of Venable LLP.

Now, turning the tables on attackers is possible

Adversaries are inflicting $10 trillion in damage to the global economy every year, and the goal posts keep moving. As risks from cloud, IoT, AI, and quantum computing proliferate and attacks get more frequent, SecOps teams have never been more stressed, or more in need of sophisticated guidance.

Mark your calendar for May 21. Get details here. You’ll be saving a lot more than the date.

Serverless IoT email capture, attachment processing, and distribution

Post Syndicated from Stacy Conant original https://aws.amazon.com/blogs/messaging-and-targeting/serverless-iot-email-capture-attachment-processing-and-distribution/

Many customers need to automate email notifications to a broad and diverse set of email recipients, sometimes from a sensor network with a variety of monitoring capabilities. Many sensor monitoring software products include an SMTP client to achieve this goal. However, managing email server infrastructure requires specialty expertise and operating an email server comes with additional cost and inherent risk of breach, spam, and storage management. Organizations also need to manage distribution of attachments, which could be large and potentially contain exploits or viruses. For IoT use cases, diagnostic data relevance quickly expires, necessitating retention policies to regularly delete content.

Solution Overview

This solution uses the Amazon Simple Email Service (SES) SMTP interface to receive SMTP client messages, and processes each message to replace its attachments with pre-signed URLs in the resulting email to its intended recipients. Attachments are stored separately in an Amazon Simple Storage Service (S3) bucket with a lifecycle policy implemented. This reduces the storage requirements of the recipient email servers receiving notification emails. Additionally, this solution leverages built-in anti-spam and security scanning capabilities to deal with spam and potentially malicious attachments, while also providing a mechanism by which pre-signed attachment links can be revoked should the emails be distributed to unintended recipients.

The solution uses:

  • Amazon SES SMTP interface to receive incoming emails.
  • Amazon SES receipt rule on a (sub)domain controlled by administrators, to store raw incoming emails in an Amazon S3 bucket.
  • AWS Lambda function, triggered on S3 ObjectCreated event, to process raw emails, extract attachments, replace each with pre-signed URL with configurable expiry, and send the processed emails to intended recipients.

Solution Flow Details:

  1. SMTP client transmits email content to an email address in a (sub)domain with an MX record set to the Amazon SES service’s regional endpoint.
  2. Amazon SES SMTP interface receives an email and forwards it to SES Receipt Rule(s) for processing.
  3. A matching Amazon SES Receipt Rule saves incoming email into an Amazon S3 Bucket.
  4. Amazon S3 Bucket emits an S3 ObjectCreated event and places it onto an Amazon Simple Queue Service (SQS) queue.
  5. The AWS Lambda service polls the inbound messages’ SQS queue and feeds events to the Lambda function.
  6. The Lambda function retrieves email files from the S3 bucket, parses the email sender/subject/body, saves attachments to a separate attachment S3 bucket (7), and replaces attachments with pre-signed URLs in the email body. The Lambda function then extracts intended recipient addresses from the email body. If the body contains a properly formatted recipient list, the email is sent using the SES API (9); otherwise a notice is posted to a fallback Amazon Simple Notification Service (SNS) Topic (8). A sketch of the core attachment-processing logic follows this list.
  7. The Lambda function saves extracted attachments, if any, into an attachments bucket.
  8. Malformed email notifications are posted to a fallback Amazon SNS Topic.
  9. The Lambda function invokes Amazon SES API to send the processed email to all intended recipient addresses.
  10. If the Lambda function is unable to process the email successfully, the inbound message is placed onto the SQS dead-letter queue (DLQ) for later intervention by the operator.
  11. SES delivers the email to each recipient’s mail server.
  12. Intended recipients download emails from their corporate mail servers and retrieve attachments from the S3 pre-signed URL(s) embedded in the email body.
  13. An alarm is triggered and a notification is published to Amazon SNS Alarms Topic whenever:
    • More than 50 failed messages are in the DLQ.
    • The oldest message on the incoming SQS queue is older than 3 minutes, meaning the solution is unable to keep up with inbound messages (flooding).
    • The incoming SQS queue contains over 180 messages (configurable) over 5 minutes old.
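The full implementation lives in the solution repository; purely as an illustration of step 6, the sketch below parses a raw email, moves attachments to a separate bucket, and replaces them with pre-signed URLs. The bucket names, URL expiry, and recipient-extraction rule are assumptions for this sketch, not the repository’s actual code.

# Minimal sketch of the attachment-processing step (6). Bucket names, URL expiry,
# and the recipient-extraction rule are illustrative assumptions only.
import email
import os
import re

import boto3

s3 = boto3.client('s3')
ses = boto3.client('ses')

RAW_BUCKET = os.environ.get('RAW_BUCKET', 'incoming-email-bucket')            # assumed name
ATTACHMENT_BUCKET = os.environ.get('ATTACHMENT_BUCKET', 'email-attachments')  # assumed name
URL_EXPIRY_SECONDS = int(os.environ.get('URL_EXPIRY_SECONDS', '86400'))       # 24-hour links

def process_raw_email(object_key, sender_email):
    raw = s3.get_object(Bucket=RAW_BUCKET, Key=object_key)['Body'].read()
    msg = email.message_from_bytes(raw)

    # Move each attachment to the attachments bucket and build pre-signed URLs.
    links = []
    body_text = ''
    for part in msg.walk():
        if part.get_content_disposition() == 'attachment':
            key = object_key + '/' + (part.get_filename() or 'attachment.bin')
            s3.put_object(Bucket=ATTACHMENT_BUCKET, Key=key, Body=part.get_payload(decode=True))
            links.append(s3.generate_presigned_url(
                'get_object',
                Params={'Bucket': ATTACHMENT_BUCKET, 'Key': key},
                ExpiresIn=URL_EXPIRY_SECONDS))
        elif part.get_content_type() == 'text/plain' and not body_text:
            body_text = part.get_payload(decode=True).decode('utf-8', errors='replace')

    # The test message later in this post encodes recipients between
    # 'Email_Rxers_Code:[' and ']'; this sketch treats that as the recipient list.
    match = re.search(r'Email_Rxers_Code:\[(.*?)\]', body_text)
    if not match:
        return None  # the real solution posts a notice to the fallback SNS topic instead
    recipients = [addr.strip() for addr in match.group(1).split(',')]

    new_body = body_text + '\n\nAttachments:\n' + '\n'.join(links)
    return ses.send_email(
        Source=sender_email,  # a verified SES identity
        Destination={'ToAddresses': recipients},
        Message={'Subject': {'Data': msg.get('Subject', '(no subject)')},
                 'Body': {'Text': {'Data': new_body}}})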

Setting up Amazon SES

For this solution you will need an email account where you can receive emails. You’ll also need a (sub)domain for which you control the mail exchanger (MX) record. You can obtain your (sub)domain either from Amazon Route53 or another domain hosting provider.

Verify the sender email address

You’ll need to follow the instructions to Verify an email address for all identities that you use as “From”, “Source”, “Sender”, or “Return-Path” addresses. You’ll also need to follow these instructions for any identities you wish to send emails to during initial testing while your SES account is in the “Sandbox” (see the next section, “Moving out of the SES Sandbox”).
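If you prefer to script this step, verification can also be initiated with the SES v2 API, as in this minimal sketch (the address is a placeholder; SES emails it a verification link):

# Sketch: start verification of a sender email address via the SES v2 API.
# The address below is a placeholder.
import boto3

sesv2 = boto3.client('sesv2')
sesv2.create_email_identity(EmailIdentity='sender@example.com')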

Moving out of the SES Sandbox

Amazon SES accounts are “in the Sandbox” by default, limiting email sending only to verified identities. AWS does this to prevent fraud and abuse as well as protecting your reputation as an email sender. When your account leaves the Sandbox, SES can send email to any recipient, regardless of whether the recipient’s address or domain is verified by SES. However, you still have to verify all identities that you use as “From”, “Source”, “Sender”, or “Return-Path” addresses.
Follow the Moving out of the SES Sandbox instructions in the SES Developer Guide. Approval is usually within 24 hours.

Set up the SES SMTP interface

Follow the workshop lab instructions to set up email sending from your SMTP client using the SES SMTP interface. Once you’ve completed this step, your SMTP client can open authenticated sessions with the SES SMTP interface and send emails. The workshop will guide you through the following steps:

  1. Create SMTP credentials for your SES account.
    • IMPORTANT: Never share SMTP credentials with unauthorized individuals. Anyone with these credentials can send as many SMTP requests as they like, in whatever format/content they choose. This may result in end users receiving emails with malicious content, administrative/operations overload, and unbounded AWS charges.
  2. Test your connection to ensure you can send emails.
  3. Authenticate using the SMTP credentials generated in step 1 and then send a test email from an SMTP client.

Verify your email domain and bounce notifications with Amazon SES

In order to replace email attachments with a pre-signed URL and other application logic, you’ll need to set up SES to receive emails on a domain or subdomain you control.

  1. Verify the domain that you want to use for receiving emails.
  2. Publish a mail exchanger record (MX record) that includes the Amazon SES inbound receiving endpoint for your AWS Region (e.g., inbound-smtp.us-east-1.amazonaws.com for US East (N. Virginia)) in the domain’s DNS configuration. A Route 53 sketch follows this list.
  3. Amazon SES automatically manages bounce notifications whenever a recipient email is not deliverable. Follow the Set up notifications for bounces and complaints guide to set up bounce notifications.
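If your (sub)domain is hosted in Route 53, the MX record from step 2 could be published with a call like the following sketch (the hosted zone ID, domain name, and Region endpoint are placeholders for your own values):

# Sketch: publish the MX record for SES inbound email in Route 53.
# HOSTED_ZONE_ID and the domain name are placeholders for your own values.
import boto3

route53 = boto3.client('route53')
route53.change_resource_record_sets(
    HostedZoneId='Z0000000EXAMPLE',
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'mail.example.com',
                'Type': 'MX',
                'TTL': 300,
                'ResourceRecords': [
                    {'Value': '10 inbound-smtp.us-east-1.amazonaws.com'}
                ],
            },
        }]
    },
)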

Deploying the solution

The solution is implemented using AWS CDK with Python. First clone the solution repository to your local machine or Cloud9 development environment. Then deploy the solution by entering the following commands into your terminal:

python -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt

cdk deploy \
--context SenderEmail=<verified sender email> \
 --context RecipientEmail=<recipient email address> \
 --context ConfigurationSetName=<configuration set name>

Note:

The RecipientEmail CDK context parameter in the cdk deploy command above can be any email address in the domain you verified as part of the Verify the domain step. In other words, if the verified domain is acme-corp.com, then the emails can be any addresses in that domain (for example, user1@acme-corp.com, user2@acme-corp.com, etc.).

The ConfigurationSetName CDK context can be obtained by navigating to Identities in the Amazon SES console, selecting the verified domain (same as above), switching to the “Configuration set” tab, and noting the name of the “Default configuration set”.
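Alternatively, the configuration set names available in your account can be listed with the SES v2 API, as in this minimal sketch:

# Sketch: list SES configuration set names to find the value for the
# ConfigurationSetName CDK context parameter.
import boto3

sesv2 = boto3.client('sesv2')
for name in sesv2.list_configuration_sets().get('ConfigurationSets', []):
    print(name)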

After deploying the solution, navigate to Amazon SES Email receiving in the AWS console, edit the rule set, and set it to Active.

Testing the solution end-to-end

Create a small file and generate a base64 encoding so that you can attach it to an SMTP message:

echo content >> demo.txt
cat demo.txt | base64 > demo64.txt
cat demo64.txt

Install openssl (which includes an SMTP client capability) using the following command:

sudo yum install openssl

Now run the SMTP client (openssl is used for the proof of concept, be sure to complete the steps in the workshop lab instructions first):

openssl s_client -crlf -quiet -starttls smtp -connect email-smtp.<aws-region>.amazonaws.com:587

and feed in the commands (replacing the brackets [] and everything between them) to send the SMTP message with the attachment you created.

EHLO amazonses.com
AUTH LOGIN
[base64 encoded SMTP user name]
[base64 encoded SMTP password]
MAIL FROM:[VERIFIED EMAIL IN SES]
RCPT TO:[VERIFIED EMAIL WITH SES RECEIPT RULE]
DATA
Subject: Demo from openssl
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="XXXXboundary text"

This is a multipart message in MIME format.

--XXXXboundary text
Content-Type: text/plain

Line1:This is a Test email sent to coded list of email addresses using the Amazon SES SMTP interface from openssl SMTP client.
Line2:Email_Rxers_Code:[ANYUSER1@DOMAIN_A,ANYUSER2@DOMAIN_B,ANYUSERX@DOMAIN_Y]:Email_Rxers_Code:
Line3:Last line.

--XXXXboundary text
Content-Type: text/plain;
Content-Transfer-Encoding: Base64
Content-Disposition: attachment; filename="demo64.txt"
Y29udGVudAo=
--XXXXboundary text
.
QUIT

Note: For base64 SMTP username and password above, use values obtained in Set up the SES SMTP interface, step 1. So for example, if the username is AKZB3LJAF5TQQRRPQZO1, then you can obtain base64 encoded value using following command:

echo -n AKZB3LJAF5TQQRRPQZO1 |base64
QUtaQjNMSkFGNVRRUVJSUFFaTzE=

This makes the base64 encoded value QUtaQjNMSkFGNVRRUVJSUFFaTzE=. Repeat the same process for the SMTP username and password values in the example above.

The openssl command should result in successful SMTP authentication and send. You should then receive the processed email, with the attachment replaced by a pre-signed URL.

Optimizing Security of the Solution

  1. Do not share DNS credentials. Unauthorized access can lead to domain control, potential denial of service, and AWS charges. Restrict access to authorized personnel only.
  2. Do not set the SENDER_EMAIL environment variable to the email address associated with the receipt rule. This address is a closely guarded secret, known only to administrators, and should be changed frequently.
  3. Review access to your code repository regularly to ensure there are no unauthorized changes to your code base.
  4. Utilize Permissions Boundaries to restrict the actions permitted by an IAM user or role.

Cleanup

To cleanup, start by navigating to Amazon SES Email receiving in AWS console, and setting the rule set to Inactive.

Once completed, delete the stack:

cdk destroy

Cleanup AWS SES Access Credentials

In Amazon SES Console, select Manage existing SMTP credentials, select the username for which credentials were created in Set up the SES SMTP interface above, navigate to the Security credentials tab and in the Access keys section, select Action -> Delete to delete AWS SES access credentials.

Troubleshooting

If you are not receiving the email, or the email is not being sent correctly, there are a number of common causes:

  • Error 554, Message rejected: Email address is not verified. The following identities failed the check in region:
    • This means that you have attempted to send an email from an address that has not been verified.
    • Please ensure that the “MAIL FROM:[VERIFIED EMAIL IN SES]” email address sent via openssl matches the SenderEmail=<verified sender email> email address used in cdk deploy.
    • Also make sure this email address was used in the Verify the sender email address step.
  • Email is not being delivered/forwarded
    • The incoming S3 bucket, under the incoming prefix, contains a file called AMAZON_SES_SETUP_NOTIFICATION. This means that the MX record of the domain setup is missing. Please validate that the MX record (step 2 of the Verify your email domain and bounce notifications with Amazon SES section) is fully configured.
    • Ensure that, after deploying the solution, the created rule set was made active by navigating to Amazon SES Email receiving in the AWS console and setting it to Active.
    • The destination email address may have bounced. Navigate to the Amazon SES Suppression list in the AWS console and ensure that the recipient’s email is not on the suppression list. If it is listed, you can see the reason in the “Suppression reason” column. You may either manually remove it from the suppression list or, if the recipient email is not valid, consider using a different recipient email address.
AWS Legal Disclaimer: Sample code, software libraries, command line tools, proofs of concept, templates, or other related technology are provided as AWS Content or Third-Party Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content or Third-Party Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content or Third-Party Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content or Third-Party Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.

About the Authors


Tarek Soliman

Tarek is a Senior Solutions Architect at AWS. His background is in Software Engineering with a focus on distributed systems. He is passionate about diving into customer problems and solving them. He also enjoys building things using software, woodworking, and hobby electronics.


Dave Spencer

Dave is a Senior Solutions Architect at AWS. His background is in cloud solutions architecture, Infrastructure as Code (IaC), systems engineering, and embedded systems programming. Dave’s passion is developing partnerships with Department of Defense customers to maximize technology investments and realize their strategic vision.


Ayman Ishimwe

Ayman is a Solutions Architect at AWS based in Seattle, Washington. He holds a Master’s degree in Software Engineering and IT from Oakland University. With prior experience in software development, specifically in building microservices for distributed web applications, he is passionate about helping customers build robust and scalable solutions on AWS cloud services following best practices.


Dmytro Protsiv

Dmytro is a Cloud Applications Architect with Amazon Web Services. He is passionate about helping customers solve their business challenges around application modernization.


Stacy Conant

Stacy is a Solutions Architect working with DoD and US Navy customers. She enjoys helping customers understand how to harness big data and working on data analytics solutions. On the weekends, you can find Stacy crocheting, reading Harry Potter (again), playing with her dogs and cooking with her husband.

How to Implement Self-Managed Opt-Outs for SMS with Amazon Pinpoint

Post Syndicated from Tyler Holmes original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-implement-self-managed-opt-outs-for-sms-with-amazon-pinpoint/

Amazon Pinpoint offers marketers and developers the ability to send SMS to over 240 countries and/or regions around the world; giving users the global reach, scalability, cost effective pricing, and high deliverability that is required to build a successful SMS program. SMS is a flexible communication channel that facilitates different business requirements, including One-Time Password (OTP), reminders, and bulk marketing to name a few. Regardless of the content that you are sending via SMS there is a requirement to manage your recipients’ opt-in/out status. Read on to learn your two options for managing opt-outs and how you can configure them. Amazon Pinpoint offers a fully managed opt-out capability and the ability to self-manage the process with your own tools.

NOTE: If you are sending to US numbers with a toll-free number (TFN), the carriers automatically manage opt-outs for those numbers, and they are not eligible for either of these processes.

Managed Opt-Out Process
If you prefer to have Pinpoint manage your opt-out processes you can refer to our blog “How to Manage SMS Opt-Outs with Amazon Pinpoint” to learn how to configure keywords and opt-out lists.

Self-Managed Opt-Out Process
Many customers use Pinpoint’s Managed Opt-Out process to deliver their communications, but some scenarios require the ability to self-manage this process. Self-managing the opt-out process provides more granular control over customer communication preferences and allows customers to centralize those preferences within their own applications.
Common reasons for organizations to implement self-managed opt-out include but aren’t limited to:

  1. Have an existing self-managed opt-out capability with a standing toolset that is already integrated with other aspects of their communication stack.
  2. Need multiple options for their customers to manage the communication preferences such as a web portal, call center, and mobile application to name a few.
  3. Require full control in order to implement custom logic that caters to their business needs.
  4. Want to change their SMS provider to Pinpoint while not changing what they have already built within an existing application.

How to implement Self-Managed Opt-Outs with Pinpoint
Choosing to self-manage your opt-outs requires some configuration within Pinpoint and the use of other AWS services. The solution outlined in this blog will use Amazon Pinpoint in addition to the following services:

  1. AWS Lambda
  2. Amazon DynamoDB
  3. Amazon SNS

NOTE: If you have existing services/applications that allow you to implement similar functionality as explained in this blog, you don’t have to use these services listed above.

What’s in scope?

This blog covers the following scenario:

  1. SMS is being sent with Amazon Pinpoint SMS and Voice V2 API – SendTextMessage
  2. You specify the Origination Identity (OID) to be used (short code, long code, 10DLC, etc) as the parameter to send the SMS to the destination phone number.

While the following scenarios can be self-managed this blog does not cover the following cases:

  1. You use a phone-pool to send SMS.
  2. You do not specify an OID in your SendTextMessage call and let Amazon Pinpoint figure out and use the appropriate OID.

Keywords in scope

  1. Opt-out keywords – All Opt-out keywords mentioned in this document are included in the code.
  2. Opt-in keywords – This blog considers ‘JOIN’ as a valid keyword that SMS recipients can respond with to opt back in for the SMS communication.

NOTE: The code examples in this blog can be modified to add any additional custom keywords for your use case.

Assumptions/Prerequisites

  1. You have the necessary permissions to configure the following services in the same AWS account and region where Amazon Pinpoint is implemented.
    a. AWS Lambda
    b. Amazon DynamoDB
    c. Amazon SNS
  2. Your instance of Amazon Pinpoint has at least one OID approved and provisioned to send SMS.
    a. If you need help to determine what OID fits your use case(s) use this guide
    b. NOTE: Sender IDs do not have the ability to receive 2-way communication. If you are using Sender IDs, you still must manage opt-outs, but you must do so by offering alternative ways of opting out, such as a web portal, call center, and/or mobile application.

Solution Overview

The solution proposed in this blog is a fully serverless architecture and uses AWS managed services to eliminate the need for you to maintain and manage any of the infrastructure components.

  1. Your application invokes the AWS Lambda function ‘InvokeSendTextMessage’, which calls the Amazon Pinpoint SMS and Voice V2 API – SendTextMessage. (A sketch of this invocation follows the flow steps below.)
  2. The AWS Lambda function called ‘InvokeSendTextMessage’ performs the following tasks:
    2a. Fetches the latest item (by descending timestamp) for the destination phone number and OID from the Amazon DynamoDB table ‘SMSOptOut‘.
    2b. If an item is found and the customer response is a valid opt-out keyword (refer to the Keywords in scope section), the process stops and the InvokeSendTextMessage function does not call the Amazon Pinpoint API, because the customer has opted out.
    2c. If no item is found, or the item’s customer response is a valid opt-in keyword (refer to the Keywords in scope section), the function calls the Amazon Pinpoint SMS and Voice V2 API – SendTextMessage to deliver the message to the customer/destination phone number.

NOTE: The Amazon DynamoDB table can also be configured to receive opt-out or opt-in information through various other channels (app, website, customer care, etc.) if you have multiple interfaces for customers, but that is not in scope for this blog.

Refer to the section ‘InvokeSendTextMessage function code’ to understand the sample AWS Lambda function code. The code uses Python 3.12.

  3. The message is successfully delivered to the destination phone number.
  4. If the customer responds to the same OID (because you have enabled the 2-way SMS feature) with a keyword that is a valid value from the keywords in scope (refer to the Keywords in scope section), the Amazon SNS topic configured with Amazon Pinpoint captures the customer response.
    Note: Other keywords are not in scope for this blog, but you can add any other keywords that customers might respond with. Accidental responses from a customer are ignored by the AWS Lambda code.
  5. The AWS Lambda function ‘AddOptOutInDynamoDB’ is a subscriber to the Amazon SNS topic and processes customer responses.
  6. The AWS Lambda function ‘AddOptOutInDynamoDB’ performs the following tasks:
    6a. If the customer response is a keyword that is a valid value from the keywords in scope (refer to the Keywords in scope section), the function extracts the OID and customer phone number from the response and adds an entry to the Amazon DynamoDB table ‘SMSOptOut’. This way the DynamoDB table always holds the latest customer opt-in/opt-out status.
    6b. Once the item is successfully put in the DynamoDB table ‘SMSOptOut‘, if the customer response was an opt-out keyword (refer to the Keywords in scope section), the function sends an SMS to the customer who has just opted out to confirm the status: “YOU HAVE BEEN UNSUBSCRIBED. IF THIS WAS A MISTAKE PLEASE TEXT “JOIN” TO THIS NUMBER TO BE RESUBSCRIBED”.
    If the customer response was an opt-in keyword (refer to the Keywords in scope section), the function sends an SMS to the customer who has just opted back in to confirm the status: “YOU HAVE BEEN SUBSCRIBED. IF THIS WAS A MISTAKE PLEASE TEXT “STOP” OR “UNSUBSCRIBE” TO THIS NUMBER TO BE UNSUBSCRIBED”. (SMS recipients can still respond with any valid keyword in the opt-out keyword list.)

NOTE: Refer to the section ‘AddOptOutInDynamoDB function code’ to understand the sample code. The code uses Python 3.12.

  7. The opt-in/opt-out status confirmation SMS is successfully delivered to the customer/destination phone number.
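As a rough illustration of step 1, an application could invoke the ‘InvokeSendTextMessage’ function as sketched below. The event keys match the handler shown later in the ‘InvokeSendTextMessage function code’ section; the OID and phone number values are placeholders.

# Sketch: invoke the InvokeSendTextMessage Lambda function from an application.
# The OID and phone number values below are placeholders.
import json
import boto3

lambda_client = boto3.client('lambda')
response = lambda_client.invoke(
    FunctionName='InvokeSendTextMessage',
    InvocationType='RequestResponse',
    Payload=json.dumps({
        'OID': '+18445550100',               # origination identity (placeholder)
        'SourcePhoneNumber': '+12245550199'  # customer/destination phone number (placeholder)
    }),
)
print(response['StatusCode'])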

Amazon Pinpoint setup

  1. Enable 2-way SMS messaging for the OID that you procured. Refer to the screenshot below for your reference.

2-way SMS setting:

  2. Enable the self-managed opt-out feature for the OID. Once enabled, Amazon Pinpoint no longer responds to opt-out keywords on your behalf; instead, your recipients’ responses are published to the Amazon SNS topic so you can collect and process them per your business needs. (A programmatic sketch follows below.)

Self Managed Opt-Out feature setting:
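Both settings can also be applied programmatically for phone-number-based OIDs. The following is a hedged sketch using the SMS and Voice V2 API; the phone number ID and SNS topic ARN are placeholders, and you should confirm the parameters against the current API reference.

# Sketch: enable 2-way SMS and self-managed opt-outs on a phone-number-based OID.
# The phone number ID and SNS topic ARN are placeholders for your own values.
import boto3

sms_voice = boto3.client('pinpoint-sms-voice-v2')
sms_voice.update_phone_number(
    PhoneNumberId='phone-1234567890abcdef0',                                  # placeholder
    TwoWayEnabled=True,
    TwoWayChannelArn='arn:aws:sns:us-east-1:123456789012:SMSOptOutTopic',     # placeholder
    SelfManagedOptOutsEnabled=True
)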

Amazon SNS setup

  1. On the Amazon SNS console, click on ‘Topics’ and then ‘Create topic’ as shown below.

  2. Click ‘Create subscription’ to add the Lambda function ‘AddOptOutInDynamoDB‘ as a subscriber by using its Amazon Resource Name (ARN).

Amazon DynamoDB Setup

Table Name: ‘SMSOptOut’

The customer phone number is used as the primary key (PK). The sort key (SK) contains multiple values, including the OID, timestamp, and customer response, separated by ‘#’. By using generic attribute names for the PK and SK, you can expand the usage of this table to accommodate custom business needs. For example, a customer can use any individual phone number type, such as a short code, long code, or 10DLC, to send the SMS, and any of these values can be accommodated as part of the sort key (SK). The sort key can be used for granular retrieval to see the latest customer status (for example: ‘STOP‘). The item can additionally have attributes like OID, Timestamp, Response, and others as per your requirements. The table uses on-demand read/write capacity mode. Refer to this document to understand on-demand capacity mode in detail.
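As an illustration of this table design (the solution does not require you to create the table this way), here is a minimal sketch of creating ‘SMSOptOut’ with generic PK/SK keys and on-demand capacity:

# Sketch: create the SMSOptOut table with generic PK/SK keys and
# on-demand (PAY_PER_REQUEST) capacity mode.
import boto3

dynamodb = boto3.client('dynamodb')
dynamodb.create_table(
    TableName='SMSOptOut',
    AttributeDefinitions=[
        {'AttributeName': 'PK', 'AttributeType': 'S'},  # customer phone number
        {'AttributeName': 'SK', 'AttributeType': 'S'},  # OID#timestamp#response
    ],
    KeySchema=[
        {'AttributeName': 'PK', 'KeyType': 'HASH'},
        {'AttributeName': 'SK', 'KeyType': 'RANGE'},
    ],
    BillingMode='PAY_PER_REQUEST',
)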

Sample item in DynamoDB table is below


InvokeSendTextMessage function code

This AWS Lambda function calls the Amazon Pinpoint SMS and Voice V2 API – SendTextMessage. It uses the DynamoDB Query API to retrieve items for SourcePhoneNumber (customer phone number) and OID (part of the SK) in descending order of the timestamp. If an item exists and the customer response value is a valid opt-out keyword (refer to the Keywords in scope section), the customer has opted out and the SMS can’t be sent. If no item is found, or the customer response value is a valid opt-in keyword (refer to the Keywords in scope section), the customer can be contacted and the function calls the SendTextMessage API with the same OID and customer phone number.

import boto3
import os
from boto3.dynamodb.conditions import Key, Attr
import json

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('SMSOptOut')
pinpoint = boto3.client('pinpoint-sms-voice-v2')
OptOutKeyword = ['ARRET','CANCEL','END','OPT-OUT','OPTOUT','QUIT','REMOVE','STOP','TD','UNSUBSCRIBE']
OptInKeyword = ['JOIN']

#queries the DynamoDB table for the latest customer response for this phone number and OID
def query_table(OID, SourcePhoneNumber):
    try:
        response = table.query(
            KeyConditionExpression=Key('PK').eq(SourcePhoneNumber) & Key('SK').begins_with(OID),
            ScanIndexForward=False,
            Limit=1
        )
    except Exception as e: 
        print("Error when writing an item in DynamoDB table: ",e) 
        raise e
    return response

#sends a text message using the Pinpoint SMS and Voice V2 send_text_message API
def send_confirmation_text(SourcePhoneNumber,OID,messageBody,messageType): 
    
    try: 
        response = pinpoint.send_text_message(DestinationPhoneNumber=SourcePhoneNumber, 
        OriginationIdentity=OID,
        MessageBody=messageBody, 
        MessageType=messageType 
        ) 
    except Exception as e: 
        print("Error in sending text message using Pinpoint send_text_message api:", e) 
        raise e

#entry point: receives the OID and customer phone number (SourcePhoneNumber) from the invoking application
def lambda_handler(event, context):
    OID = event['OID']
    SourcePhoneNumber = event['SourcePhoneNumber']

    response=query_table(OID, SourcePhoneNumber)
    items = response['Items']
    
    #Count number of items. The value will be either 1 or 0.
    count = len(items)

    # If the latest customer response is any OptOutKeyword
    if count == 1 and items[0]['Response'] in OptOutKeyword:
        print("Exit : Customer has opted out, do not send SMS")
    # If no item exists or the latest customer response is any OptInKeyword
    elif count == 0 or (count == 1 and items[0]['Response'] in OptInKeyword):
        send_confirmation_text(SourcePhoneNumber,OID,'This is a test message from Amazon Pinpoint','TRANSACTIONAL')
    # Only allowed values for customer response are valid OptOutKeyword or OptInKeyword 
    else: 
        print("The customer response is not one of the allowed keyword")

AddOptOutInDynamoDB Lambda function code

For example, when a customer responds with ‘STOP’, the response is captured in the SNS topic that you configured with Amazon Pinpoint. The response JSON looks like the following:

{
"originationNumber": "+1224xxxxxxx",
"destinationNumber": "+1844xxxxxxx",
"messageKeyword": "KEYWORD_xxxxxxxxxxxx",
"messageBody": "STOP",
"previousPublishedMessageId": "xxxxxxxxxxxxxx",
"inboundMessageId": "xxxxxxxxxxxxxx"
}

This Lambda function extracts the OID (destinationNumber), the customer phone number (originationNumber), and the customer response (messageBody) from the JSON payload above and adds an entry to the DynamoDB table (SMSOptOut). Once the item is put successfully in the DynamoDB table, the function also sends a confirmation SMS (either opt-in or opt-out) to the customer phone number using SendTextMessage.
For example:

  • If the customer response value is a valid keyword for Opt-out (Refer section Keywords in scope), the confirmation SMS is ‘YOU HAVE BEEN UNSUBSCRIBED. IF THIS WAS A MISTAKE PLEASE TEXT “JOIN” TO THIS NUMBER TO BE RESUBSCRIBED’.
  • If the customer response value is a valid keyword for Opt-in (Refer section Keywords in scope), the confirmation SMS is ‘YOU HAVE BEEN SUBSCRIBED. IF THIS WAS A MISTAKE PLEASE TEXT “STOP” OR “UNSUBSCRIBE” TO THIS NUMBER TO BE UNSUBSCRIBED’.
import json
import boto3
import datetime

dynamodb = boto3.resource('dynamodb')
dynamodb_table = dynamodb.Table('SMSOptOut')
pinpoint = boto3.client('pinpoint-sms-voice-v2')
OptOutKeyword = ['ARRET','CANCEL','END','OPT-OUT','OPTOUT','QUIT','REMOVE','STOP','TD','UNSUBSCRIBE']
OptInKeyword = ['JOIN']

#adds item in the DynamoDB table
def put_item(data,current_timestamp):
    print(dynamodb_table)
    try:
        response = dynamodb_table.put_item(
            Item = {
                'PK': data['originationNumber'],
                'SK': data['destinationNumber']+'#'+current_timestamp+'#'+data['messageBody'], 
                'OID': data['destinationNumber'],
                'Timestamp': current_timestamp,
                'Response': data['messageBody']
            }
        )
    except Exception as e:
        print("Error when writing an item in DynamoDB table: ",e)
        raise e

#send opt-in/opt-out confirmation text using send_text_message.
def send_confirmation_text(data,messageBody,messageType):
    try:
        response = pinpoint.send_text_message(
            DestinationPhoneNumber=data['originationNumber'],
            OriginationIdentity=data['destinationNumber'],
            MessageBody=messageBody,
            MessageType=messageType
            )
    except Exception as e:
        print("Error in sending text message using Pinpoint send_text_message api:", e)
        raise e
        
#gets the message from SNS topic
def lambda_handler(event, context):
    message = event['Records'][0]['Sns']['Message']
    data = json.loads(message)
    current_timestamp = datetime.datetime.now().isoformat()

    if data['messageBody'] in OptOutKeyword:
        put_item(data,current_timestamp)
        send_confirmation_text(data, 'YOU HAVE BEEN UNSUBSCRIBED. IF THIS WAS A MISTAKE PLEASE TEXT "JOIN" TO THIS NUMBER TO BE RESUBSCRIBED', 'TRANSACTIONAL')
    elif data['messageBody'] in OptInKeyword:
        put_item(data,current_timestamp)
        send_confirmation_text(data, 'YOU HAVE BEEN SUBSCRIBED. IF THIS WAS A MISTAKE PLEASE TEXT "STOP" OR "UNSUBSCRIBE" TO THIS NUMBER TO BE UNSUBSCRIBED', 'TRANSACTIONAL')
    else:
        print("The customer response is not one of the allowed keyword")

Clean Up

DynamoDB storage and Lambda invocations incur cost, so it is important to delete these resources if you do not plan on using them, as shown below. A scripted cleanup sketch follows the console steps.

DynamoDB:

  1. On the DynamoDB console, select the table ‘SMSOptOut’ and click Delete. Confirm the action.

Lambda:

  1. On the Lambda console, find the two functions you created and click Actions → Delete. Confirm the actions.

Amazon Pinpoint

  1. After deleting the DynamoDB table and Lambda functions, your self-managed opt-out flow will not work, so you will need to disable self-managed opt-outs in Pinpoint for the respective OID as shown below.
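If you prefer to script the cleanup instead of using the console, here is a minimal sketch; the table and function names match those used in this blog, so confirm them in your account before deleting.

# Sketch: scripted cleanup of the resources created in this blog.
# Confirm the resource names in your account before running this.
import boto3

boto3.client('dynamodb').delete_table(TableName='SMSOptOut')
lambda_client = boto3.client('lambda')
for function_name in ('InvokeSendTextMessage', 'AddOptOutInDynamoDB'):
    lambda_client.delete_function(FunctionName=function_name)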

Conclusion
In this post, you learned how to implement a self-managed opt-out workflow when using Pinpoint SMS. Keep in mind that when you implement the self-managed Opt-out flow, Pinpoint will not track or maintain any opt-out status for the OID that it was enabled for.

Take the time to plan out your approach, follow the steps outlined in this blog, and take advantage of any resources available to you within your support tier.

Decide what origination IDs you will need here
Review the documentation for the V2 SMS and Voice API here
Check out the support tiers comparison here

Resources:
https://docs.aws.amazon.com/sms-voice/latest/userguide/phone-numbers-sms-by-country.html
https://aws.amazon.com/blogs/messaging-and-targeting/how-to-utilise-amazon-pinpoint-to-retry-unsuccessful-sms-delivery/
https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-limitations-opt-out.html
https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-simulator.html
https://docs.aws.amazon.com/dynamodb/
https://docs.aws.amazon.com/sns/
https://docs.aws.amazon.com/lambda/

An introduction to Amazon WorkMail Audit Logging

Post Syndicated from Zip Zieper original https://aws.amazon.com/blogs/messaging-and-targeting/an-introduction-to-amazon-workmail-audit-logging/

Amazon WorkMail’s new audit logging capability equips email system administrators with powerful visibility into mailbox activities and system events across their organization. As announced in our recent “What’s New” post, this feature enables the comprehensive capture and delivery of critical email data, empowering administrators to monitor, analyze, and maintain compliance.

With audit logging, WorkMail records a wide range of events, including metadata about messages sent and received, failed login attempts, and configuration changes. Administrators have the option to deliver these audit logs to their preferred AWS services, such as Amazon Simple Storage Service (S3) for long-term storage, Amazon Kinesis Data Firehose for real-time data streaming, or Amazon CloudWatch Logs for centralized log management. Additionally, standard CloudWatch metrics on audit logs provide deep insights into the usage and health of WorkMail mailboxes within the organization.

By leveraging Amazon WorkMail’s audit logging capabilities, enterprises have the ability to strengthen their security posture, fulfill regulatory requirements, and gain critical visibility into the email activities that underpin their daily operations. This post will explore the technical details and practical use cases of this powerful new feature.

In this blog, you will learn how to configure your WorkMail organization to send email audit logs to Amazon CloudWatch Logs, Amazon S3, and Amazon Data Firehose. We’ll also provide examples that show how to monitor access to your Amazon WorkMail organization’s mailboxes by querying the logs via CloudWatch Log Insights.

Email security

Imagine you are the email administrator for a biotech company, and you’ve received a report about spam complaints coming from your company’s email system. When you investigate, you learn these complaints point to unauthorized emails originating from several of your company’s mailboxes. One or more of your company’s email accounts may have been compromised by a hacker. You’ll need to determine the specific mailboxes involved, understand who has access to those mailboxes, and how the mailboxes have been accessed. This will be useful in identifying mailboxes with multiple failed logins or unfamiliar IP access, which can indicate unauthorized attempts or hacking. To identify the cause of the security breach, you require access to detailed audit logs and familiar tools to analyze extensive log data and locate the root of your issues.

Amazon WorkMail Audit Logging

Amazon WorkMail is a secure, managed business email service that hosts millions of mailboxes globally. WorkMail features robust audit logging capabilities, equipping IT administrators and security experts with in-depth analysis of mailbox usage patterns. Audit logging provides detailed insights into user activities within WorkMail. Organizations can detect potential security vulnerabilities by utilizing audit logs. These logs document user logins, access permissions, and other critical activities. WorkMail audit logging facilitates compliance with various regulatory requirements, providing a clear audit trail of data privacy and security. WorkMail’s audit logs are crucial for maintaining the integrity, confidentiality, and reliability of your organization’s email system.

Understanding WorkMail Audit Logging

Amazon WorkMail’s audit logging feature provides you with the data you need to have a thorough understanding of your email mailbox activities. By sending detailed logs to Amazon CloudWatch Logs, Amazon S3, and Amazon Data Firehose, administrators can identify mailbox access issues, track access by IP addresses, and review mailbox data movements or deletions using familiar tools. It is also possible to configure multiple destinations for each log to meet the needs of a variety of use cases, including compliance archiving.

WorkMail offers four audit logs:

  • ACCESS CONTROL LOGS – These logs record evaluations of access control rules, noting whether access to the endpoint was granted or denied in accordance with the configured rules;
  • AUTHENTICATION LOGS – These logs capture details of login activities, chronicling both successful and failed authentication attempts;
  • AVAILABILITY PROVIDER LOGS – These logs document the use of the Availability Providers feature, tracking its operational status and interactions;
  • MAILBOX ACCESS LOGS – Logs in this category record each attempt to access mailboxes within the WorkMail Organization, providing a detailed account of credential and protocol access patterns.

Once audit logging is enabled, alerts can be configured to warn of authentication or access anomalies that surpass predetermined thresholds. JSON formatting allows for advanced processing and analysis of audit logs by third party tools. Audit logging stores all interactions with the exception of web mail client authentication metrics.

WorkMail audit logging in action

Below are two examples that show how WorkMail’s audit logging can be used to investigate unauthorized login attempts, and diagnose a misconfigured email client. In both examples, we’ll use WorkMail’s Mailbox Access Control Logs and query the mailbox access control logs in CloudWatch Log Insights.

In our first example, we’re looking for unsuccessful login attempts in a target timeframe. In CloudWatch Log Insights we run this query:

fields user, source_ip, protocol, auth_successful, auth_failed_reason | filter auth_successful = 0

CloudWatch Log Insights returns all records in the timeframe where auth_successful = 0 (false) and auth_failed_reason = Invalid username or password. We also see the source_ip, which we may decide to block in a WorkMail access control rule, or in any other network security system.


Mailbox Access Control Log – an unsuccessful login attempt
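The same query can also be run programmatically with the CloudWatch Logs StartQuery API. The sketch below assumes a hypothetical log group name for the WorkMail mailbox access logs; substitute the log group you configured when enabling audit logging.

# Sketch: run the failed-authentication query against the WorkMail audit log
# group in CloudWatch Logs. The log group name is a placeholder.
import time
import boto3

logs = boto3.client('logs')
query_id = logs.start_query(
    logGroupName='/aws/workmail/audit-logs/mailbox-access',  # placeholder name
    startTime=int(time.time()) - 3600,   # last hour
    endTime=int(time.time()),
    queryString=(
        'fields user, source_ip, protocol, auth_successful, auth_failed_reason '
        '| filter auth_successful = 0'
    ),
)['queryId']

results = logs.get_query_results(queryId=query_id)
while results['status'] in ('Scheduled', 'Running'):
    time.sleep(1)
    results = logs.get_query_results(queryId=query_id)
print(results['results'])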

In this next example, consider a WorkMail organization that has elected to block the IMAP protocol using a WorkMail access control rule (below):


WorkMail Access Control Rule – block IMAP protocol

Because some email clients use IMAP by default, occasionally new users in this example organization are denied access to email due to an incorrectly configured email client. Using WorkMail’s mailbox access control logs in CloudWatch Log Insights we run this query:

fields user_id, source_ip, protocol, rule_id, access_granted | filter access_granted = 0

And we see the user’s attempt to access their email inbox via IMAP has been denied by the access control rule_id (below):


WorkMail Access Control logs – IMAP blocked by access rule

Conclusion

Amazon WorkMail’s audit logging feature offers a comprehensive view of your organization’s email activities. Four different logs provide visibility into access controls, authentication attempts, interactions with external systems, and mailbox activities. It provides flexible log delivery through native integration with AWS services and tools. Enabling WorkMail’s audit logging capabilities helps administrators meet compliance requirements and enhances the overall security and reliability of their email system.

To learn more about audit logging on Amazon WorkMail, you may comment on this post (below), view the WorkMail documentation, or reach out to your AWS account team.

To learn more about Amazon WorkMail, or to create a no-cost 30-day test organization, see Amazon WorkMail.

About the Authors


Luis Miguel Flores dos Santos

Miguel is a Solutions Architect at AWS, boasting over a decade of expertise in solution architecture, encompassing both on-premises and cloud solutions. His focus lies on resilience, performance, and automation. Currently, he is delving into serverless computing. In his leisure time, he enjoys reading, riding motorcycles, and spending quality time with family and friends.


Andy Wong

Andy Wong is a Sr. Product Manager with the Amazon WorkMail team. He has 10 years of diverse experience in supporting enterprise customers and scaling start-up companies across different industries. Andy’s favorite activities outside of technology are soccer, tennis and free-diving.


Zip

Zip is a Sr. Specialist Solutions Architect at AWS, working with Amazon Pinpoint and Simple Email Service and WorkMail. Outside of work he enjoys time with his family, cooking, mountain biking, boating, learning and beach plogging.

How to Send SMS Using a Sender ID with Amazon Pinpoint

Post Syndicated from Tyler Holmes original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-send-sms-using-a-sender-id-with-amazon-pinpoint/

Amazon Pinpoint enables you to send text messages (SMS) to recipients in over 240 regions and countries around the world. Pinpoint supports all types of origination identities including Long Codes, 10DLC (US only), Toll-Free (US only), Short Codes, and Sender IDs.
NOTE: Certain subtypes of Origination Identities (OIDs), such as Free to End User (FTEU) Short Codes, might not be supported

Unlike other origination identities, a Sender ID can include letters, enabling your recipients to receive SMS from “EXAMPLECO” rather than a random string of numbers or phone number. A Sender ID can help to create trust and brand awareness with your recipients which can increase your deliverability and conversion rates by improving customer interaction with your messages. In this blog we will discuss countries that allow the use of Sender IDs, the types of Sender IDs that Pinpoint supports, how to configure Pinpoint to use a Sender ID, and best practices for sending SMS with a Sender ID. Refer to this blog post for guidance planning a multi-country rollout of SMS.

What is a Sender ID?

A sender ID is an alphanumeric name that identifies the sender of an SMS message. When you send an SMS message using a sender ID, and the recipient is in an area where sender ID is supported, your sender ID appears on the recipient’s device instead of a phone number. For example, they will see AMAZON instead of a phone number such as “1-206-555-1234”. Sender IDs support a default throughput of 10 messages per second (MPS), which can be raised in certain situations. This MPS is calculated at the country level; for example, a customer can send 10 MPS with “SENDERX” to Australia (AU) and 10 MPS with “SENDERX” to Germany (DE).

The first step in deciding whether to use a Sender ID is finding out whether Sender IDs are supported in the country(ies) you want to send to. This table lists all of the destinations that Pinpoint can send SMS to and the origination identities they support. Many countries support multiple origination identities, so it’s important to know the differences between them. The two main considerations when deciding which originator to use are throughput and whether it can support a two-way use case.

Amazon Pinpoint supports three types of Sender IDs, detailed below. Your selection depends on the destination country for your messages; consult this table, which lists all of the destinations that Pinpoint can send SMS to.

Dynamic Sender ID – A dynamic sender ID allows you to select a 3-11 character alphanumeric string to use as your originator when sending SMS. We suggest using something short that outlines your brand and use case, like “[Company]OTP”. Dynamic sender IDs vary slightly by country, and we recommend that senders review the specific requirements for the countries they plan to send to. Pay special attention to any notes in the registration section. If the country(ies) you want to send to require registration, read on to the next type of Sender ID.

Registered Sender ID – A registered Sender ID generally follows the same formatting requirements as a dynamic Sender ID, allowing you to select a 3-11 character alphanumeric string, but has the added step of completing a registration specific to the country you want to use a Sender ID for. Each country requires slightly different information to register, may require specific forms, and may charge a registration fee. Use this list of supported countries to see which countries support Sender IDs as well as which ones require registration.

Generic, or “Shared” Sender ID – In countries where it is supported, when you do not specify a dynamic Sender ID, or you have not set a Default Sender ID in Pinpoint, the service may allow traffic over a Generic or Shared SenderID like NOTICE. Depending on the country, traffic could also be delivered using a service or carrier specific long or short code. When using the shared route your messages will be delivered alongside others also sending in this manner.

As mentioned, Sender IDs support 10 MPS, so if you do not need higher throughput, they may be a good option. However, one of the key differences of using a Sender ID to send SMS is that it does not support two-way use cases, meaning you cannot receive SMS back from your recipients.

IMPORTANT: If you use a sender ID you must provide your recipients with alternative ways to opt-out of your communications as they cannot text back any of the standard opt-out keywords or any custom opt-in keywords you may have configured. Common ways to offer your recipients an alternative way of opting out or changing their communication preferences include web forms or an app preference center.

How to Configure a Sender ID

The country(ies) you plan on sending to using a Sender ID will determine the configuration you will need to complete to be able to use them. We will walk through the configuration of each of the three types of Sender IDs below.

Step 1 – Request a Sender ID (Dependent on Country, Consult this List)

Some countries require a registration process. Each country’s process can be unique, so you must open a support case to complete it. The countries requiring Sender ID registration are noted in the following list.
When you request a Sender ID, we provide you with an estimate of how long the request will take to complete. This estimate is based on the completion times that we’ve seen from other customers.

NOTE: This time is not an SLA. It is recommended that you check the case regularly and make sure that nothing else is required to complete your registration. Each time your registration requires edits, the process is extended. If your registration passes the estimated time, it is recommended that you reply to the case.

Because each country has its own process, completion times for registration vary by destination country. For example, Sender ID registration in India can be completed in one week or less, whereas it can take six weeks or more in Vietnam. These requests can’t be expedited, because they involve the carriers themselves making changes to the way their networks are configured and certifying the use case on their networks. We suggest that you start your registration process early so that you can start sending messages as soon as you launch your product or service.

IMPORTANT: Make sure that you check on your case often, as support may need more details to complete your registration, and any delay extends the expected timeline for procuring your Sender ID.

Generic Sender ID – In countries that support a generic or shared Sender ID like NOTICE, there is no requirement to register or configure anything prior to sending. We will review how to send with this type of Sender ID in Step 2.

Dynamic Sender ID – A Dynamic Sender ID can be requested via the API or in the console. A hedged API sketch follows the note below; to use the console instead, complete the steps listed after it.
NOTE: If you are using the API to send, it is not required that you request a Sender ID for every country you intend to send to. However, it is recommended, because the request process will alert you to any Sender IDs that require registration, so you do not attempt to send to countries that you cannot deliver to successfully. All countries requiring registration for Sender IDs can be found here.
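As a rough sketch of the API route (the Sender ID value, country code, and message type are placeholders; confirm this operation and its parameters against the current SMS and Voice V2 API reference for your SDK version):

# Sketch: request a dynamic Sender ID via the Pinpoint SMS and Voice V2 API.
# The Sender ID value and country code below are placeholders.
import boto3

sms_voice = boto3.client('pinpoint-sms-voice-v2')
response = sms_voice.request_sender_id(
    SenderId='EXAMPLECO',           # 3-11 character alphanumeric string (placeholder)
    IsoCountryCode='AU',            # destination country (placeholder)
    MessageTypes=['TRANSACTIONAL'],
)
print(response)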

  1. Navigate to the SMS Console
    1. Make sure you are in the region you plan to send SMS from, as each region must be configured independently, and any registrations must also be made in the account and region from which you will be sending
  2. Select “Sender IDs” from the left rail
    1. Click on “Request Originator”
    2. Choose a country from the drop down that supports Sender ID
    3. Choose “SMS”
      1. Leave “Voice” unchecked if it is an option.
        NOTE: If you choose Voice then you will not be able to select a Sender ID in the next step
      2. Select your estimated SMS volume
      3. Choose whether your company is local or international relative to the country you are configuring. Some countries, like India, require proof of residency to access local pricing, so select accordingly.
      4. Select “No” for two-way messaging, or you will not be able to select a Sender ID in the next step
    4. Click next and choose “Sender ID” and provide your preferred Sender ID.
      NOTE: Refer to the following criteria when selecting your Sender ID for configuration (some countries may override these)

      1. Numeric-only Sender IDs are not supported
      2. No special characters except for dashes ( – )
      3. No spaces
      4. Valid characters: a-z, A-Z, 0-9
      5. Minimum of 3 characters
      6. Maximum of 11 characters. NOTE: India requires exactly 6 characters
      7. Must match your company branding and SMS service or use case.
        1. For example, you wouldn’t be able to choose “OTP” or “2FA” even though you might be using SMS for that type of use case. You could, however, use “ANYCO-OTP” if your company name was “Any Co.”, since it complies with all of the criteria above.


NOTE: If the console instructs you to open a case as seen below, then your Sender ID likely requires some form of registration. Read on to configure a Registered Sender ID.

Registered Sender ID – A registered Sender ID follows the same criteria above for a Dynamic Sender ID, although some countries may have minor criteria changes or formatting restrictions. Follow the directions here to complete this process; AWS Support will provide the correct forms needed for the country that you are registering. Each registered Sender ID needs a separate case per country. Follow the link to the “AWS Support Center” and follow these instructions when creating your case.

Step 2 – How to Send SMS with a Sender ID

Sender IDs can be used via three different mechanisms

Option 1 – Using the V2 SMS and Voice API and “SendTextMessage”
This is the preferred method of sending, and this set of APIs is where all new functionality will be built. A hedged code sketch follows the list below.

  1. SendTextMessage has many options for configurability but the only required parameters are the following:
    1. DestinationPhoneNumber
    2. MessageBody
  2. “OriginationIdentity” is optional, but it’s important to know what the outcome is depending on how you use this parameter:
    1. Explicitly stating your SenderId
      1. Use this option if you want to ONLY send with a Sender ID. Setting this has the effect of only sending to recipients in countries that accept Sender IDs and rejecting any recipients whose country does not support Sender IDs. The US, for example, cannot be sent to with a Sender ID
    2. Explicitly stating your SenderIdArn
      1. Same effect as “SenderID” above
    3. Leaving OriginationIdentity Blank
      1. If left blank, Pinpoint will select the right originator based on what you have available in your account, in order of decreasing throughput, from a Short Code, 10DLC (US only), Long Code, Sender ID, or Toll-Free (US only), depending on what you have available.
        1. Keep in mind that sending this way opens you up to sending to countries you may not have originators for. If you would like to make sure that you only send to countries that you have originators for, then you need to use Pools.
    4. Explicitly stating a PoolId
      1. A pool is a collection of Origination Identities that can include both phone numbers and Sender IDs. Use this option if you are sending to multiple country codes and want to make sure that you send to them with the originator that their respective country supports.
        1. NOTE: There are various configurations that can be set on a pool. Refer to the documentation here
          1. Make sure to pay particular attention to “Shared Routes” because in some countries, Amazon Pinpoint SMS maintains a pool of shared origination identities. When you activate shared routes, Amazon Pinpoint SMS makes an effort to deliver your message using one of the shared identities. The origination identity could be a sender ID, long code, or short code and could vary within each country. Turn this feature off if you ONLY want to send to countries for which you have an originator.
          2. Make sure to read this blog post on Pools and Opt-Outs here
    5. Explicitly stating a PoolArn
      1. Same effect as “PoolId” above
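
For reference, here is a minimal AWS CLI sketch of SendTextMessage using a Sender ID. The phone number, message body, and Sender ID below are placeholder values, and your account must already have that Sender ID available for the destination country:

aws pinpoint-sms-voice-v2 send-text-message \
    --destination-phone-number "+447700900123" \
    --origination-identity "MYBRAND" \
    --message-body "Your example order has shipped." \
    --message-type TRANSACTIONAL

A successful call returns a MessageId that you can use for tracking.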

Option 2 – Using a journey or a Campaign

  1. If you do not select an “Origination Phone Number” or a Sender ID, Pinpoint will select the correct originator based on the destination country code and the originators available in your account.
    1. Pinpoint will attempt to send, in order of decreasing throughput, from a Short Code, 10DLC (US only), Long Code, Sender ID, or Toll-Free (US only), depending on what you have available. For example, if you want to send to Germany (DE) from a Sender ID, but you also have a Short Code configured for Germany (DE), the default behavior is for Pinpoint to send from that Short Code. If you want to override this behavior you must specify a Sender ID to send from.
      1. NOTE: If you are sending to India on local routes you must provide the Entity ID and Template ID that you received when you registered your template with the Telecom Regulatory Authority of India (TRAI).
    2. You can set a default Sender ID for your Project in the SMS settings as seen below.
      NOTE: Anything you configure at the Campaign or Journey level overrides this project level setting

Option 3 – Using Messages in the Pinpoint API

  1. Using “Messages” is the second option for sending via the API. This action allows for multi-channel (SMS, email, push, etc.) bulk sending, but it is not recommended as the API to standardize on for SMS sending. A hedged CLI sketch appears after this list.
    1. NOTE: Using the V2 SMS and Voice API and “SendTextMessage” detailed in Option 1 above is the preferred method of sending SMS via the API and is where new features and functionality will be released. It is recommended that you migrate SMS sending to this set of APIs.
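
For illustration, here is a hedged AWS CLI sketch of the Messages action sending a single SMS with a Sender ID; the application ID, phone number, and Sender ID are placeholder values:

aws pinpoint send-messages \
    --application-id "yourPinpointProjectId" \
    --message-request '{
        "Addresses": {"+447700900123": {"ChannelType": "SMS"}},
        "MessageConfiguration": {
            "SMSMessage": {
                "Body": "Your example order has shipped.",
                "MessageType": "TRANSACTIONAL",
                "SenderId": "MYBRAND"
            }
        }
    }'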

Conclusion:
In this post you learned about Sender IDs and how they can be used in your SMS program. A Sender ID can be a great option for getting your SMS program up and running quickly since they can be free, many countries do not require registration, and you can use the same Sender ID for lots of different countries, which can improve your branding and engagement. Keep in mind that one of the big differences in using a Sender ID vs. a short code or long code is that they don’t support 2-way communication. Common ways to offer your recipients an alternative way of opting out or changing their communication preferences include web forms or an app preference center.

A few resources to help you plan for your SMS program:
Use this spreadsheet to plan for the countries you need to send to Global SMS Planning Sheet
The V2 API for SMS and Voice has many more useful actions not possible with the V1 API so we encourage you to explore how it can further help you simplify and automate your applications.
If you need to use pools to access the “Shared Routes” setting, read this blog to review how to configure them
Confirm the origination IDs you will need here
Check out the support tiers comparison here

How large senders can move from sandbox to production using Amazon SES

Post Syndicated from Medha Karri original https://aws.amazon.com/blogs/messaging-and-targeting/how-large-senders-can-move-from-sandbox-to-production-using-amazon-ses/

Amazon SES: Email marketing has a potential ROI of $42 for every dollar spent (source link), making it a great tool for businesses, whether for marketing campaigns, transactional notifications, or other communications. Amazon Simple Email Service (Amazon SES) is a cloud email service provider that can integrate into any application for bulk email sending and supports a variety of use cases, such as transactional emails, system alerts, marketing/promotional/bulk emails, streamlined internal communications, and emails triggered by CRM systems.

Your journey with AWS began with creating an AWS account, and your journey with Amazon SES likely began in the sandbox environment. To help prevent fraud and abuse, and to help protect your reputation as a sender, Amazon SES places all new accounts in the Amazon SES sandbox. The sandbox helps protect accounts from unauthorized use, accidental sends, and unexpected charges, and is a safe space for testing with limited sending capabilities – up to 200 emails per day and a rate of 1 email per second.

Transitioning from Sandbox to Production: When you are ready to scale up to production, the process involves a few steps:

    1. Verify your email or domain: Prior to requesting production access, you have to verify an email address or sending domain. You can do that by navigating to Configuration > Verified identities and clicking the Create identity button.
    2. Access the set up page: On the Account dashboard page click on Get started (image 2.1) or go to Get set up page on the navigation frame on the left.
    3. Before requesting production access, it is important to test throttling, bounce handling, and unsubscribe handling.
    4. Click on Request production access
    5. Production access form: This brings you to the page where you furnish details to get production access (a hedged CLI alternative is sketched after this list)
        1. Indicate whether your mail type is marketing or transactional. Choose the option that best represents the types of messages you plan on sending. A marketing email promotes your products and services, while a transactional email is an immediate, trigger-based communication.
        2. Provide the URL for your website to help us better understand the kind of content you plan on sending.
        3. Use case description: Here is where you mention the following:
          1. Description: What does your company do and what do you plan on communicating with your users/subscribers through email?
          2. Use cases: Describe at least one or two of your use cases here and be specific about how you plan to use SES as a sender. You can also paste what a sample email for this use case looks like (please remove sensitive information).
          3. Mailing list: Describe how you plan to build or acquire your mailing list.
          4. Bounces & complaints: Describe how you handle bounces & complaints.
            1. Amazon SES provides you with resources to manage this. This is a guide on how you can set up notifications for bounces and complaints. After you are notified, how do you plan on handling the bounces and complaints?
          5. Unsubscribe: Describe how your email recipients can opt out of receiving email from you. Amazon SES provides subscription management and you can read more about it here. Additionally, you can read more about the latest email sender requirements here.
        4. Best practices:
          1. Success of your email program depends on various metrics such as bounces, complaints and message quality as listed here. Test your setup and your bounce/complaint processing before requesting production access.
          2. Mention if your account was denied earlier and the reasons for denial (any additional information you can provide will help speed up the process).
          3. Provide your daily and weekly email volumes.
          4. Provide your peak volume throughput or TPS (transactions/emails per second).
          5. We consider each request carefully. Therefore, it is important to provide specifics and not vague messages like “Please remove from sandbox and move to production” or “Please increase sending limit to 40 emails/sec”.
          6. More best practices here.
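
If you prefer to submit these details programmatically, the SESv2 PutAccountDetails action accepts the same information as the console form. The sketch below uses example values only; substitute your own mail type, website URL, and use case description:

aws sesv2 put-account-details \
    --production-access-enabled \
    --mail-type TRANSACTIONAL \
    --website-url "https://www.example.com" \
    --contact-language EN \
    --use-case-description "Order confirmations and shipping notifications for example.com customers; bounces and complaints are handled via SNS notifications." \
    --additional-contact-email-addresses "[email protected]"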

Conclusion: Successfully moving from the sandbox to production in Amazon SES marks a significant step in leveraging email communication for your business. It’s not just about scaling your email capabilities; it’s about enhancing your engagement with customers and prospects through reliable, efficient email delivery. Continuously monitor your email performance, stay updated with Amazon SES features, and adapt your strategy to ensure your email campaigns remain effective and compliant. With these steps and insights, you’re well-equipped to make the most out of Amazon SES, turning it into a vital component of your digital communication strategy. Once your request has been approved, you’ll receive a confirmation from Amazon SES, and you’ll be ready to start sending emails to real recipients.

About the authors:

Medha Karri

Medha Karri is a Senior Product Manager at Amazon Simple Email Service at AWS. He is a technology enthusiast with varied experience in product management and software development. He is passionate about simplifying complex technical solutions for customers and enjoys playing Xbox in his free time.

Vinay Ujjini

Vinay Ujjini is an Amazon Pinpoint and Amazon Simple Email Service Worldwide Principal Specialist Solutions Architect at AWS. He has been solving customer’s omni-channel challenges for over 15 years. He is an avid sports enthusiast and in his spare time, enjoys playing tennis & cricket.

Upgrade Your Email Tech Stack with Amazon SESv2 API

Post Syndicated from Zip Zieper original https://aws.amazon.com/blogs/messaging-and-targeting/upgrade-your-email-tech-stack-with-amazon-sesv2-api/

Amazon Simple Email Service (SES) is a cloud-based email sending service that helps businesses and developers send marketing and transactional emails. We introduced the SESv1 API in 2011 to provide developers with basic email sending capabilities through Amazon SES using HTTPS. In 2020, we introduced the redesigned Amazon SESv2 API, with new and updated features that make it easier and more efficient for developers to send email at scale.

This post will compare Amazon SESv1 API and Amazon SESv2 API and explain the advantages of transitioning your application code to the SESv2 API. We’ll also provide examples using the AWS Command-Line Interface (AWS CLI) that show the benefits of transitioning to the SESv2 API.

Amazon SESv1 API

The SESv1 API is a relatively simple API that provides basic functionality for sending and receiving emails. For over a decade, thousands of SES customers have used the SESv1 API to send billions of emails. Our customers’ developers routinely use the SESv1 APIs to verify email addresses, create rules, send emails, and customize bounce and complaint notifications. Our customers’ needs have become more advanced as the global email ecosystem has developed and matured. Unsurprisingly, we’ve received customer feedback requesting enhancements and new functionality within SES. To better support an expanding array of use cases and stay at the forefront of innovation, we developed the SESv2 APIs.

While the SESv1 API will continue to be supported, AWS is focused on advancing functionality through the SESv2 API. As new email sending capabilities are introduced, they will only be available through SESv2 API. Migrating to the SESv2 API provides customers with access to these, and future, optimizations and enhancements. Therefore, we encourage SES customers to consider the information in this blog, review their existing codebase, and migrate to SESv2 API in a timely manner.

Amazon SESv2 API

Released in 2020, the SESv2 API and SDK enable customers to build highly scalable and customized email applications with an expanded set of lightweight and easy to use API actions. Leveraging insights from current SES customers, the SESv2 API includes several new actions related to list and subscription management, the creation and management of dedicated IP pools, and updates to unsubscribe that address recent industry requirements.

One example of new functionality in the SESv2 API is programmatic support for the SES Virtual Deliverability Manager. Previously only addressable via the AWS console, VDM helps customers improve sending reputation and deliverability. The SESv2 API includes vdmAttributes such as VdmEnabled and DashboardAttributes, as well as vdmOptions such as DashboardOptions and GuardianOptions.

To improve developer efficiency and make the SESv2 API easier to use, we merged several SESv1 APIs into single commands. For example, in the SESv1 API you must make separate calls for createConfigurationSet, setReputationMetrics, setSendingEnabled, setTrackingOptions, and setDeliveryOption. In the SESv2 API, however, developers make a single call to createConfigurationSet that can include trackingOptions, reputationOptions, sendingOptions, and deliveryOptions. This can result in more concise code (see below).

SESv1-vs-SESv2
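
To illustrate this consolidation, the hedged CLI sketch below creates and configures a configuration set; the set name and option values are examples only:

# SESv1: create the configuration set, then configure it with separate calls
aws ses create-configuration-set --configuration-set Name=my-config-set
aws ses update-configuration-set-reputation-metrics-enabled --configuration-set-name my-config-set --enabled
aws ses update-configuration-set-sending-enabled --configuration-set-name my-config-set --enabled
aws ses create-configuration-set-tracking-options --configuration-set-name my-config-set --tracking-options CustomRedirectDomain=click.example.com

# SESv2: a single call with the options inline
aws sesv2 create-configuration-set \
    --configuration-set-name my-config-set \
    --tracking-options CustomRedirectDomain=click.example.com \
    --reputation-options ReputationMetricsEnabled=true \
    --sending-options SendingEnabled=true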

Another example of SESv2 API command consolidation is the GetEmailIdentity action, which is a composite of SESv1 API’s GetIdentityVerificationAttributes, GetIdentityNotificationAttributes, GetCustomMailFromAttributes, GetDKIMAttributes, and GetIdentityPolicies. See SESv2 documentation for more details.
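
As a quick illustration, a single SESv2 call returns the verification, DKIM, and policy details for an identity in one response (example.com is a placeholder):

aws sesv2 get-email-identity --email-identity example.com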

Why migrate to Amazon SESv2 API?

The SESv2 API offers an enhanced experience compared to the original SESv1 API, with a more modern interface and flexible options that make building scalable, high-volume email applications easier and more efficient. SESv2 enables rich email capabilities like template management, list subscription handling, and deliverability reporting. It provides developers with a more powerful and customizable set of tools with improved security measures to build and optimize inbox placement and reputation management. Taken as a whole, the SESv2 APIs provide an even stronger foundation for sending critical communications and campaign email messages effectively at scale.

Migrating your applications to SESv2 API will benefit your email marketing and communication capabilities with:

  1. New and Enhanced Features: Amazon SESv2 API includes new actions as well as enhancements that provide better functionality and improved email management. By moving to the latest version, you’ll be able to optimize your email sending process. A few examples include:
    • Increased maximum message size (including attachments) from 10 MB (SESv1) to 40 MB (SESv2) for both sending and receiving.
    • Access key actions for the SES Virtual Deliverability Manager (VDM), which provides insights into your sending and delivery data. VDM provides near real-time advice on how to fix the issues that are negatively affecting your delivery success rate and reputation.
    • Meet Google and Yahoo’s June 2024 unsubscribe requirements with the SESv2 SendEmail action. For more information, see the “What’s New” blog.
  2. Future-proof Your Application: Avoid potential compatibility issues and disruptions by keeping your application up-to-date with the latest version of the Amazon SESv2 API via the AWS SDK.
  3. Improve Usability and Developer Experience: Amazon SESv2 API is designed to be more user-friendly and consistent with other AWS services. It is a more intuitive API with better error handling, making it easier to develop, maintain, and troubleshoot your email sending applications.

Migrating to the latest SESv2 API and SDK positions customers for success in creating reliable and scalable email services for their businesses.

What does migration to the SESv2 API entail?

While the SESv2 API builds on the v1 API, the v2 API actions don’t universally map exactly to the v1 API actions. Current SES customers that intend to migrate to the SESv2 API will need to identify the SESv1 API actions in their code and plan to refactor for v2. When planning the migration, there are several important considerations:

  1. Customers with applications that receive email using SESv1 API’s CreateReceiptFilter, CreateReceiptRule or CreateReceiptRuleSet actions must continue using the SESv1 API client for these actions. SESv1 and SESv2 can be used in the same application, where needed.
  2. We recommend all customers follow the security best practice of “least privilege” with their IAM policies. As such, customers may need to review and update their policies to include the new and modified API actions introduced in SESv2 before migrating. Taking the time to properly configure permissions ensures a seamless transition while maintaining a securely optimized level of access. See documentation.

Below is an example of an IAM policy with a user with limited allow privileges related to several SESv1 Identity actions only:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ses:VerifyEmailIdentity",
                "ses:Deleteldentity",
                "ses:VerifyDomainDkim",
                "ses:ListIdentities",
                "ses:VerifyDomainIdentity"
            ],
            "Resource": "*"
        }
    ]
}

When updating to SESv2, you need to update this user’s permissions with the SESv2 actions shown below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ses:CreateEmailIdentity",
                "ses:DeleteEmailIdentity",
                "ses:GetEmailIdentity",
                "ses:ListEmailIdentities"
            ],
            "Resource": "*"
        }
    ]
}

Examples of SESv1 vs. SESv2 APIs

Let’s look at three examples that compare the SESv1 API with the SESv2 API.

LIST APIs

When listing identities with the SESv1 API, you need to specify the identity type, which requires multiple API calls to list all resources:

aws ses list-identities --identity-type Domain
{
    "Identities": [
        "example.com"
    ]
}
aws ses list-identities --identity-type EmailAddress
{
    "Identities": [
        "[email protected]",
        "[email protected]",
        "[email protected]"
    ]
}

With SESv2, you can simply call a single API. Additionally, SESv2 provides extended feedback:

aws sesv2 list-email-identities
{
    "EmailIdentities": [
        {
            "IdentityType": "DOMAIN",
            "IdentityName": "example.com",
            "SendingEnabled": false,
            "VerificationStatus": "FAILED"
        },
        {
            "IdentityType": "EMAIL_ADDRESS",
            "IdentityName": "[email protected]",
            "SendingEnabled": true,
            "VerificationStatus": "SUCCESS"
        },
        {
            "IdentityType": "EMAIL_ADDRESS",
            "IdentityName": "[email protected]",
            "SendingEnabled": false,
            "VerificationStatus": "FAILED"
        },
        {
            "IdentityType": "EMAIL_ADDRESS",
            "IdentityName": "[email protected]",
            "SendingEnabled": true,
            "VerificationStatus": "SUCCESS"
        }
    ]
}

CREATE APIs

With SESv1, creating email addresses or domains requires calling two different APIs:

aws ses verify-email-identity --email-address [email protected]
aws ses verify-domain-dkim --domain example.com
{
    "DkimTokens": [
        "mwmzhwhcebfh5kvwv7zahdatahimucqi",
        "dmlozjwrdbrjfwothoh26x6izvyts7qx",
        "le5fy6pintdkbxg6gdoetgbrdvyp664v"
    ]
}

With SESv2, we build an abstraction so you can call a single API. Additionally, SESv2 provides more detailed responses and feedback:

aws sesv2 create-email-identity --email-identity [email protected]
{
    "IdentityType": "EMAIL_ADDRESS",
    "VerifiedForSendingStatus": false
}
aws sesv2 create-email-identity --email-identity example.com
{
    "IdentityType": "DOMAIN",
    "VerifiedForSendingStatus": false,
    "DkimAttributes": {
        "SigningEnabled": true,
        "Status": "NOT_STARTED",
        "Tokens": [
            "mwmzhwhcebfh5kvwv7zahdatahimucqi",
            "dmlozjwrdbrjfwothoh26x6izvyts7qx",
            "le5fy6pintdkbxg6gdoetgbrdvyp664v"
        ],
        "SigningAttributesOrigin": "AWS_SES",
        "NextSigningKeyLength": "RSA_2048_BIT",
        "CurrentSigningKeyLength": "RSA_2048_BIT",
        "LastKeyGenerationTimestamp": "2024-02-23T15:01:53.849000+00:00"
    }
}

DELETE APIs

When calling delete-identity with SESv1, SES returns 200 (or no response), even if the identity was previously deleted or doesn’t exist:

 aws ses delete-identity --identity example.com

SESv2 provides better error handling and responses when calling the delete API:

aws sesv2 delete-email-identity --email-identity example.com

An error occurred (NotFoundException) when calling the DeleteEmailIdentity operation: Email identity example.com does not exist.

Hands-on with SESv1 API vs. SESv2 API

Below are a few examples you can use to explore the differences between SESv1 API and the SESv2 API. To complete these exercises, you’ll need:

  1. AWS Account (setup) with enough permission to interact with the SES service via the CLI
  2. Upgrade to the latest version of the AWS CLI (aws-cli/2.15.27 or greater)
  3. SES enabled, configured and properly sending emails
  4. A recipient email address with which you can check inbound messages (if you’re in the SES Sandbox, this email must be a verified email identity). In the following examples, replace [email protected] with the verified email identity.
  5. Your preferred IDE with AWS credentials and necessary permissions (you can also use AWS CloudShell)

Open the AWS CLI (or AWS CloudShell) and:

  1. Create a test directory called v1-v2-test.
  2. Create the following (8) files in the v1-v2-test directory:

destination.json (replace [email protected] with the verified email identity):

{ 
    "ToAddresses": ["[email protected]"] 
}

ses-v1-message.json

{
   "Subject": {
       "Data": "SESv1 API email sent using the AWS CLI",
       "Charset": "UTF-8"
   },
   "Body": {
       "Text": {
           "Data": "This is the message body from SESv1 API in text format.",
           "Charset": "UTF-8"
       },
       "Html": {
           "Data": "This message body from SESv1 API, it contains HTML formatting. For example - you can include links: <a class=\"ulink\" href=\"http://docs.aws.amazon.com/ses/latest/DeveloperGuide\" target=\"_blank\">Amazon SES Developer Guide</a>.",
           "Charset": "UTF-8"
       }
   }
}

ses-v1-raw-message.json (replace [email protected] with the verified email identity):

{
     "Data": "From: [email protected]\nTo: [email protected]\nSubject: Test email sent using the SESv1 API and the AWS CLI \nMIME-Version: 1.0\nContent-Type: text/plain\n\nThis is the message body from the SESv1 API SendRawEmail.\n\n"
}

ses-v1-template.json (replace [email protected] with the verified email identity):

{
  "Source":"SES Developer<[email protected]>",
  "Template": "my-template",
  "Destination": {
    "ToAddresses": [ "[email protected]"
    ]
  },
  "TemplateData": "{ \"name\":\"SESv1 Developer\", \"favoriteanimal\": \"alligator\" }"
}

my-template.json (replace [email protected] with the verified email identity):

{
  "Template": {
    "TemplateName": "my-template",
    "SubjectPart": "Greetings SES Developer, {{name}}!",
    "HtmlPart": "<h1>Hello {{name}},</h1><p>Your favorite animal is {{favoriteanimal}}.</p>",
    "TextPart": "Dear {{name}},\r\nYour favorite animal is {{favoriteanimal}}."
  }
}

ses-v2-simple.json (replace [email protected] with the verified email identity):

{
    "FromEmailAddress": "[email protected]",
    "Destination": {
        "ToAddresses": [
            "[email protected]"
        ]
    },
    "Content": {
        "Simple": {
            "Subject": {
                "Data": "SESv2 API email sent using the AWS CLI",
                "Charset": "utf-8"
            },
            "Body": {
                "Text": {
                    "Data": "SESv2 API email sent using the AWS CLI",
                    "Charset": "utf-8"
                }
            },
            "Headers": [
                {
                    "Name": "List-Unsubscribe",
                    "Value": "insert-list-unsubscribe-here"
                },
				{
                    "Name": "List-Unsubscribe-Post",
                    "Value": "List-Unsubscribe=One-Click"
                }
            ]
        }
    }
}

ses-v2-raw.json (replace [email protected] with the verified email identity):

{
     "FromEmailAddress": "[email protected]",
     "Destination": {
            "ToAddresses": [
                       "[email protected]"
              ]
       },
      "Content": {
             "Raw": {
                     "Data": "Subject: Test email sent using SESv2 API via the AWS CLI \nMIME-Version: 1.0\nContent-Type: text/plain\n\nThis is the message body from SendEmail Raw Content SESv2.\n\n"
              }
      }
}

ses-v2-template.json (replace [email protected] with the verified email identity):

{
     "FromEmailAddress": "[email protected]",
     "Destination": {
       "ToAddresses": [
         "[email protected]"
       ]
     },
     "Content": {
        "Template": {
          "TemplateName": "my-template",
          "TemplateData": "{ \"name\":\"SESv2 Developer\",\"favoriteanimal\":\"Dog\" }",
          "Headers": [
                {
                   "Name": "List-Unsubscribe",
                   "Value": "insert-list-unsubscribe-here"
                },
                {
                   "Name": "List-Unsubscribe-Post",
                   "Value": "List-Unsubscribe=One-Click"
                }
             ]
         }
     }
}

Perform the following commands using the SESv1 API:

send-email (simple):

  • In the CLI, run:
aws ses send-email --from [email protected] --destination file://destination.json --message file://ses-v1-message.json
  • The response will return a valid MessageID (signaling the action was successful). An email will be received by the verified email identity.
{
    "MessageId": "0100018dc7649400-Xx1x0000x-bcec-483a-b97c-123a4567890d-xxxxx"
}

send-raw-email:

  • In the CLI, run:
aws ses send-raw-email  --cli-binary-format raw-in-base64-out --raw-message file://ses-v1-raw-message.json 
  • The response will return a valid MessageID (signaling the action was successful). An email will be received by the verified email identity.
{
   "MessageId": "0200018dc7649400-Xx1x1234x-bcec-483a-b97c-123a4567890d-
}

send templated mail:

  • In the CLI, run the following to create the template:
aws ses create-template  --cli-input-json file://my-template.json
  • In the CLI, run:

aws ses send-templated-email --cli-input-json file://ses-v1-template.json

  • The response will return a valid MessageID (signaling the action was successful). An email will be received by the verified email identity.
 {
    "MessageId": "0000018dc7649400-Xx1x1234x-bcec-483a-b97c-123a4567890d-xxxxx"
 }

Perform similar commands using the SESv2 API:

As mentioned above, customers who are using least privilege permissions with SESv1 API must first update their IAM policies before running the SESv2 API examples below. See documentation for more info.

As you can see from the .json files we created for SES v2 API (above), you can modify or remove sections from the .json files, based on the type of email content (simple, raw or templated) you want to send.

Please ensure you are using the latest version of the AWS CLI (aws-cli/2.15.27 or greater).

Send simple email

  • In the CLI, run:
aws sesv2 send-email --cli-input-json file://ses-v2-simple.json
  • The response will return a valid MessageID (signaling the action was successful). An email will be received by the verified email identity
{
    "MessageId": "0100018dc83ba7e0-7b3149d7-3616-49c2-92b6-00e7d574f567-000000"
}

Send raw email (note – if the only reason is to set custom headers, you don’t need to send raw email)

  • In the CLI, run:
aws sesv2 send-email --cli-binary-format raw-in-base64-out --cli-input-json file://ses-v2-raw.json
  • The response will return a valid MessageID (signaling the action was successful). An email will be received by the verified email identity.
{
    "MessageId": "0100018dc877bde5-fdff0df3-838e-4f51-8582-a05237daecc7-000000"
}

Send templated email

  • In the CLI, run:
aws sesv2 send-email --cli-input-json file://ses-v2-template.json
  • The response will return a valid MessageID (signaling the action was successful). An email will be received by the verified email identity.
{
    "MessageId": "0100018dc87fe72c-f2c547a1-2325-4be4-bf78-b91d6648cd12-000000"
}

Migrating your application code to SESv2 API

As you can see from the examples above, SESv2 API shares much of its syntax and actions with the SESv1 API. As a result, most customers have found they can readily evaluate, identify and migrate their application code base in a relatively short period of time. However, it’s important to note that while the process is generally straightforward, there may be some nuances and differences to consider depending on your specific use case and programming language.

Regardless of the language, you’ll need anywhere from a few hours to a few weeks to:

  • Update your code to use SESv2 Client and change API signature and request parameters
  • Update permissions / policies to reflect SESv2 API requirements
  • Test your migrated code to ensure that it functions correctly with the SESv2 API
  • Stage and test
  • Deploy

Summary

As we’ve described in this post, Amazon SES customers that migrate to the SESv2 API will benefit from updated capabilities, a more user-friendly and intuitive API, better error handling and improved deliverability controls. The SESv2 API also provides for compliance with the industry’s upcoming unsubscribe header requirements, more flexible subscription-list management, and support for larger attachments. Taken collectively, these improvements make it even easier for customers to develop, maintain, and troubleshoot their email sending applications with Amazon Simple Email Service. For these, and future reasons, we recommend SES customers migrate their existing applications to the SESv2 API immediately.

For more information regarding the SESv2 APIs, comment on this post, reach out to your AWS account team, or consult the AWS SESv2 API documentation.

About the Authors

Zip

Zip is an Amazon Pinpoint and Amazon Simple Email Service Sr. Specialist Solutions Architect at AWS. Outside of work he enjoys time with his family, cooking, mountain biking and plogging.

Vinay Ujjini

Vinay is an Amazon Pinpoint and Amazon Simple Email Service Worldwide Principal Specialist Solutions Architect at AWS. He has been solving customer’s omni-channel challenges for over 15 years. He is an avid sports enthusiast and in his spare time, enjoys playing tennis and cricket.

Dmitrijs Lobanovskis

Dmitrijs is a Software Engineer for Amazon Simple Email service. When not working, he enjoys traveling, hiking and going to the gym.

Message delivery status tracking with Amazon Pinpoint

Post Syndicated from Brijesh Pati original https://aws.amazon.com/blogs/messaging-and-targeting/message-delivery-status-tracking-with-amazon-pinpoint/

In the vast landscape of digital communication, reaching your audience effectively is key to building successful customer relationships. Amazon Pinpoint, Amazon Web Services’ (AWS) flexible, user-focused messaging and targeting solution, goes beyond mere messaging; it allows businesses to engage customers through email, SMS, push notifications, and more.

What sets Amazon Pinpoint apart is its scalability and deliverability. Amazon Pinpoint supports a multitude of business use cases, from promotional campaigns and transactional messages to customer engagement journeys. It provides insights and analytics that help tailor and measure the effectiveness of communication strategies.

For businesses, the power of this platform extends into areas such as marketing automation, customer retention campaigns, and transactional messaging for updates like order confirmations and shipping alerts. The versatility of Amazon Pinpoint can be a significant asset in crafting personalized user experiences at scale.

Use Case & Solution overview – Tracking SMS & Email Delivery Status

In a business setting, understanding whether a time-sensitive email or SMS was received can greatly impact customer experience as well as operational efficiency. For instance, consider an e-commerce platform sending out shipping notifications. By quickly verifying that the message was delivered, businesses can preemptively address any potential issues, ensuring customer satisfaction.

Amazon Pinpoint tracks email and SMS delivery and engagement events, which can be streamed using Amazon Kinesis Data Firehose for storage or further processing. However, third-party applications don’t have a direct API to query and obtain the latest status of a message.

To address the above challenge, this blog presents a solution that leverages AWS services for data streaming, storage, and retrieval of Amazon Pinpoint events using a simple API call. At the core of the solution is Amazon Pinpoint event stream capability, which utilizes Amazon Kinesis services for data streaming.

The architecture for message delivery status tracking with Amazon Pinpoint is comprised of several AWS services that work in concert. To streamline the deployment of these components, they have been encapsulated into an AWS CloudFormation template. This template allows for automated provisioning and configuration of the necessary AWS resources, ensuring a repeatable and error-free deployment.

The key components of the solution are as follows:

  1. Event Generation: An event is generated within Amazon Pinpoint when a user interacts with an application, or when a message is sent from a campaign, journey, or as a transactional communication. The event name and metadata depend on the channel (SMS or email).
  2. Amazon Pinpoint Event Data Streaming: The generated event data is streamed to Amazon Kinesis Data Firehose. Kinesis Data Firehose is configured to collect the event information in near real-time, enabling the subsequent processing and analysis of the data.
  3. Pinpoint Event Data Processing: Amazon Kinesis Data Firehose is configured to invoke a specified AWS Lambda function to transform the incoming source data. This transformation step is set up during the creation of the Kinesis Data Firehose delivery stream, ensuring that the data is in the correct format before it is stored, enhancing its utility for immediate and downstream analysis. The Lambda function decodes the base64-encoded event data, deserializes the JSON payload, and processes the data depending on the event type (email or SMS): it parses the raw data, extracting relevant attributes before ingesting it into Amazon DynamoDB. The function handles email and SMS events separately, discerning their unique attributes and ensuring they are formatted correctly for DynamoDB’s schema.
  4. Data Ingestion into DynamoDB: Once processed, the data is stored in Amazon DynamoDB. DynamoDB provides a fast and flexible NoSQL database service, which facilitates the efficient storage and retrieval of event data for analysis.
  5. Data Storage: Amazon DynamoDB stores the event data after it’s been processed by AWS Lambda. Amazon DynamoDB is a highly scalable NoSQL database that enables fast queries, which is essential for retrieving the status of messages quickly and efficiently, thereby facilitating timely decision-making based on customer interactions.
  6. Customer application/interface: Users or integrated systems engage with the messaging status through either a frontend customer application or directly via an API. This interface or API acts as the conduit through which message delivery statuses are queried, monitored, and managed, providing a versatile gateway for both user interaction and programmatic access.
  7. API Management: The customer application communicates with the backend systems through Amazon API Gateway. This service acts as a fully managed gateway, handling all the API calls, data transformation, and transfer between the frontend application and backend services.
  8. Event Status Retrieval API: When the API Gateway receives a delivery status request, it invokes another AWS Lambda function that is responsible for querying the DynamoDB table. It retrieves the latest status of the message delivery, which is then presented to the user via the API.

DynamoDB Table Design for Message Tracking:

The tables below outline the DynamoDB schema designed for the efficient storage and retrieval of message statuses, detailing distinct event statuses and attributes for each message type such as email and SMS:

Attributes for Email Events:

Attribute Data type Description
message_id String The unique message ID generated by Amazon Pinpoint.
event_type String The value would be ’email’.
aws_account_id String The AWS account ID used to send the email.
from_address String The sending identity used to send the email.
destination String The recipient’s email address.
client String The client ID if applicable
campaign_id String The campaign ID if part of a campaign
journey_id String The journey ID if part of a journey
send Timestamp The timestamp when Amazon Pinpoint accepted the message and attempted to deliver it to the recipient
delivered Timestamp The timestamp when the email was delivered, or ‘NA’ if not delivered.
rejected Timestamp The timestamp when the email was rejected (Amazon Pinpoint determined that the message contained malware and didn’t attempt to send it.)
hardbounce Timestamp The timestamp when a hard bounce occurred (A permanent issue prevented Amazon Pinpoint from delivering the message. Amazon Pinpoint won’t attempt to deliver the message again)
softbounce Timestamp The timestamp when a soft bounce occurred (A temporary issue prevented Amazon Pinpoint from delivering the message. Amazon Pinpoint will attempt to deliver the message again for a certain amount of time. If the message still can’t be delivered, no more retries will be attempted. The final state of the email will then be SOFTBOUNCE.)
complaint Timestamp The timestamp when a complaint was received (The recipient received the message, and then reported the message to their email provider as spam (for example, by using the “Report Spam” feature of their email client).
open Timestamp The timestamp when the email was opened (The recipient received the message and opened it.)
click Timestamp The timestamp when a link in the email was clicked. (The recipient received the message and clicked a link in it)
unsubscribe Timestamp The timestamp when a link in the email was unsubscribed (The recipient received the message and clicked an unsubscribe link in it.)
rendering_failure Timestamp The timestamp when a link in the email was clicked (The email was not sent due to a rendering failure. This can occur when template data is missing or when there is a mismatch between template parameters and data.)

Attributes for SMS Events:

Attribute Data type Description
message_id String The unique message ID generated by Amazon Pinpoint.
event_type String The value would be ‘sms’.
aws_account_id String The AWS account ID used to send the email.
origination_phone_number String The phone number from which the SMS was sent.
destination_phone_number String The phone number to which the SMS was sent.
record_status String Additional information about the status of the message. Possible values include:
– SUCCESSFUL/DELIVERED – Successfully delivered.
– PENDING – Not yet delivered.
– INVALID – Invalid destination phone number.
– UNREACHABLE – Recipient’s device unreachable.
– UNKNOWN – Error preventing delivery.
– BLOCKED – Device blocking SMS.
– CARRIER_UNREACHABLE – Carrier issue preventing delivery.
– SPAM – Message identified as spam.
– INVALID_MESSAGE – Invalid SMS message body.
– CARRIER_BLOCKED – Carrier blocked message.
– TTL_EXPIRED – Message not delivered in time.
– MAX_PRICE_EXCEEDED – Exceeded SMS spending quota.
– OPTED_OUT – Recipient opted out.
– NO_QUOTA_LEFT_ON_ACCOUNT – Insufficient spending quota.
– NO_ORIGINATION_IDENTITY_AVAILABLE_TO_SEND – No suitable origination identity.
– DESTINATION_COUNTRY_NOT_SUPPORTED – Destination country blocked.
– ACCOUNT_IN_SANDBOX – Account in sandbox mode.
– RATE_EXCEEDED – Message sending rate exceeded.
– INVALID_ORIGINATION_IDENTITY – Invalid origination identity.
– ORIGINATION_IDENTITY_DOES_NOT_EXIST – Non-existent origination identity.
– INVALID_DLT_PARAMETERS – Invalid DLT parameters.
– INVALID_PARAMETERS – Invalid parameters.
– ACCESS_DENIED – Account blocked from sending messages.
– INVALID_KEYWORD – Invalid keyword.
– INVALID_SENDER_ID – Invalid Sender ID.
– INVALID_POOL_ID – Invalid Pool ID.
– SENDER_ID_NOT_SUPPORTED_FOR_DESTINATION – Sender ID not supported.
– INVALID_PHONE_NUMBER – Invalid origination phone number.
iso_country_code String The ISO country code associated with the destination phone number.
message_type String The type of SMS message sent.
campaign_id String The campaign ID if part of a campaign, otherwise N/A.
journey_id String The journey ID if part of a journey, otherwise N/A.
success Timestamp The timestamp when the SMS was successfully accepted by the carrier/delivered to the recipient, or ‘NA’ if not applicable.
buffered Timestamp The timestamp when the SMS is still in the process of being delivered to the recipient, or ‘NA’ if not applicable.
failure Timestamp The timestamp when the SMS delivery failed, or ‘NA’ if not applicable.
complaint Timestamp The timestamp when a complaint was received (The recipient received the message, and then reported the message to their email provider as spam (for example, by using the “Report Spam” feature of their email client).
optout Timestamp The timestamp when the customer received the message and replied by sending the opt-out keyword (usually “STOP”), or ‘NA’ if not applicable.
price_in_millicents_usd Number The amount that was charged to send the message.

Prerequisites

  • AWS Account Access (setup) with admin-level permission.
  • AWS CLI version 2 with named profile setup. If a locally configured IDE is not convenient, you can use the AWS CLI from the AWS CloudShell in your browser.
  • A Pinpoint project that has never been configured with an event stream (PinpointEventStream).
  • The Pinpoint project ID from the project you want to monitor. This ID can be found in the AWS Pinpoint console on the project’s main page (it will look something like “79788ecad55555513b71752a4e3ea1111”), or via the CLI as sketched after this list. Copy this ID to a text file, as you will need it shortly.
    • Note, you must use the ID from a Pinpoint project that has never been configured with the PinpointEventStream option.
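
As an alternative to the console, a hedged CLI sketch that lists your Pinpoint project names and IDs:

aws pinpoint get-apps --query 'ApplicationsResponse.Item[].{Name:Name,Id:Id}' --output table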

Solution Deployment & Testing

Deploying this solution is a straightforward process, thanks to the AWS CloudFormation template we’ve created. This template automates the creation and configuration of the necessary AWS resources into an AWS stack. The CloudFormation template ensures that the components such as Kinesis Data Firehose, AWS Lambda, Amazon DynamoDB, and Amazon API Gateway are set up consistently and correctly.

Deployment Steps:

  • Download the CloudFormation Template from this GitHub sample repository. The CloudFormation template is named PinpointAPIBlog.yaml.
  • Access the CloudFormation Console: Sign into the AWS Management Console and open the AWS CloudFormation console.
  • Create a New Stack:
    • Choose Create Stack and select With new resources (standard) to start the stack creation process.
    • Under Prerequisite – Prepare template, select Template is ready.
    • Under ‘Specify template’, choose Upload a template file, and then upload the CloudFormation template file you downloaded in Step 1.
  • Configure the Stack:
    • Provide a stack name, such as “pinpoint-yourprojectname-monitoring” and paste the Pinpoint project (application) ID. Press Next.
    • Review the stack settings, make any necessary changes based on your specific requirements, and then choose Next.
  • Initiate the Stack Creation: Once you’ve configured all options, acknowledge that AWS CloudFormation might create IAM resources with custom names, and then choose Create stack.
    • AWS CloudFormation will now provision and configure the resources as defined in the template. This will take about 20 minutes to fully deploy. You can view the status in the AWS CloudFormation console.

Testing the Solution:

After deployment is complete you can test (and use) the solution.

  • Send Test Messages: Utilize the Amazon Pinpoint console to send test email and SMS messages; see the Amazon Pinpoint documentation for details.
  • Verify Lambda Execution:
    • Navigate to the AWS CloudWatch console.
    • Locate and review the logs for the Lambda functions specified in the solution (`aws/lambda/{functionName}`) to confirm that the Kinesis Data Firehose records are being processed successfully. In the log events you should see messages including INIT_START, Raw Kinesis Data Firehose Record, etc.
  • Check Amazon DynamoDB Data:
    • Navigate to Amazon DynamoDB in the AWS Console.
    • Select the table created by the CloudFormation template and choose ‘Explore Table Items‘.
    • Confirm the presence of the event data by checking if the message IDs appear in the table.
    • The table should have one or more message_id entries from the test message(s) you sent above.
    • Click on a message_id to review the data, and copy the message_id to a text editor on your computer. It will look like “0201123456gs3nroo-clv5s8pf-8cq2-he0a-ji96-59nr4tgva0g0-343434”
  • API Gateway Testing:
    • In the API Gateway console, find the MessageIdAPI.
    • Navigate to Stages and copy the Invoke URL provided.

    • Open the text editor on your computer and paste the API Gateway invoke URL.
    • Append ?message_id=<message_id> to the invoke URL. It should look like this: “https://txxxxxx0.execute-api.us-west-2.amazonaws.com/call?message_id=020100000xx3xxoo-clvxxxxf-8cq2-he0a-ji96-59nr4tgva0g0-000000”
    • Paste the full URL into your browser and press Enter, or query it with curl as sketched below.
    • The response returns the latest delivery status stored for that message ID as JSON.
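
For example, a hedged curl invocation (replace the invoke URL and message_id with your own values):

curl "https://txxxxxx0.execute-api.us-west-2.amazonaws.com/call?message_id=020100000xx3xxoo-clvxxxxf-8cq2-he0a-ji96-59nr4tgva0g0-000000"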

By following these deployment and testing steps, you’ll have a functioning solution for tracking Pinpoint message delivery status using Amazon Pinpoint, Kinesis Data Firehose, DynamoDB and CloudWatch.

Clean Up

To help prevent unwanted charges to your AWS account, you can delete the AWS resources that you used for this walkthrough.

To delete the stack, follow these instructions:

Open the AWS CloudFormation console.

  • In the AWS CloudFormation console dashboard, select the stack you created (pinpoint-yourprojectname-monitoring).
  • On the Actions menu, choose Delete Stack.
  • When you are prompted to confirm, choose Yes, Delete.
  • Wait for DELETE_COMPLETE to appear in the Status column for the stack.

Next steps

The solution in this blog provides you with an API endpoint to query messages’ status. The next step is to store and analyze the raw data based on your business’s requirements. The Amazon Kinesis Data Firehose used in this blog can stream the Pinpoint events to an AWS database or object storage like Amazon S3. Once the data is stored, you can catalogue it using AWS Glue, query it via SQL using Amazon Athena, and create custom dashboards using Amazon QuickSight, which is a cloud-native, serverless business intelligence (BI) service with native machine learning (ML) integrations.

Conclusion

The integration of AWS services such as Kinesis, Lambda, DynamoDB, and API Gateway with Amazon Pinpoint transforms your ability to connect with customers through precise event data retrieval and analysis. This solution provides a stream of real-time data, versatile storage options, and a secure method for accessing detailed information, all of which are critical for optimizing your communication strategies.

By leveraging these insights, you can fine-tune your email and SMS campaigns for maximum impact, ensuring every message counts in the broader narrative of customer engagement and satisfaction. Harness the power of AWS and Amazon Pinpoint to not just reach out but truly connect with your audience, elevating your customer relationships to new heights.

Considerations/Troubleshooting

When implementing a solution involving AWS Lambda, Kinesis Data Streams, Kinesis Data Firehose, and DynamoDB, keep several key considerations in mind:

  • Scalability and Performance: Assess the scalability needs of your system. Lambda functions scale automatically, but it’s important to configure concurrency settings and memory allocation based on expected load. Similarly, for Kinesis Streams and Firehose, consider the volume of data and the throughput rate. For DynamoDB, ensure that the table’s read and write capacity settings align with your data processing requirements.
  • Error Handling and Retries: Implement robust error handling within the Lambda functions to manage processing failures. Kinesis Data Streams and Firehose have different retry behaviors and mechanisms. Understand and configure these settings to handle failed data processing attempts effectively. In DynamoDB, consider the use of conditional writes to handle potential data inconsistencies.
  • Security and IAM Permissions: Secure your AWS resources by adhering to the principle of least privilege. Define IAM roles and policies that grant the Lambda function only the necessary permissions to interact with Kinesis and DynamoDB. Ensure that data in transit and at rest is encrypted as required, using AWS KMS or other encryption mechanisms.
  • Monitoring and Logging: Utilize AWS CloudWatch for monitoring and logging the performance and execution of Lambda functions, as well as Kinesis and DynamoDB operations. Set up alerts for any anomalies or thresholds that indicate issues in data processing or performance bottlenecks.

About the Authors

Brijesh Pati

Brijesh Pati is an Enterprise Solutions Architect at AWS. His primary focus is helping enterprise customers adopt cloud technologies for their workloads. He has a background in application development and enterprise architecture and has worked with customers from various industries such as sports, finance, energy and professional services. His interests include serverless architectures and AI/ML.

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Senior Specialist Solutions Architect at AWS. He enjoys diving deep into customers’ technical issues and helping them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Anshika Singh

Anshika Singh is an Associate Solutions Architect at AWS specializing in building GenAI applications. She helps customers adopt the cloud through code samples and starter projects.

Magic Cloud Networking simplifies security, connectivity, and management of public clouds

Post Syndicated from Steve Welham original https://blog.cloudflare.com/introducing-magic-cloud-networking


Today we are excited to announce Magic Cloud Networking, supercharged by Cloudflare’s recent acquisition of Nefeli Networks’ innovative technology. These new capabilities to visualize and automate cloud networks will give our customers secure, easy, and seamless connection to public cloud environments.

Public clouds offer organizations a scalable and on-demand IT infrastructure without the overhead and expense of running their own datacenter. Cloud networking is foundational to applications that have been migrated to the cloud, but is difficult to manage without automation software, especially when operating at scale across multiple cloud accounts. Magic Cloud Networking uses familiar concepts to provide a single interface that controls and unifies multiple cloud providers’ native network capabilities to create reliable, cost-effective, and secure cloud networks.

Nefeli’s approach to multi-cloud networking solves the problem of building and operating end-to-end networks within and across public clouds, allowing organizations to securely leverage applications spanning any combination of internal and external resources. Adding Nefeli’s technology will make it easier than ever for our customers to connect and protect their users, private networks and applications.

Why is cloud networking difficult?

Compared with a traditional on-premises data center network, cloud networking promises simplicity:

  • Much of the complexity of physical networking is abstracted away from users because the physical and ethernet layers are not part of the network service exposed by the cloud provider.
  • There are fewer control plane protocols; instead, the cloud providers deliver a simplified software-defined network (SDN) that is fully programmable via API.
  • There is capacity — from zero up to very large — available instantly and on-demand, only charging for what you use.

However, that promise has not yet been fully realized. Our customers have described several reasons cloud networking is difficult:

  • Poor end-to-end visibility: Cloud network visibility tools are difficult to use and silos exist even within single cloud providers that impede end-to-end monitoring and troubleshooting.
  • Faster pace: Traditional IT management approaches clash with the promise of the cloud: instant deployment available on-demand. Familiar ClickOps and CLI-driven procedures must be replaced by automation to meet the needs of the business.
  • Different technology: Established network architectures in on-premises environments do not seamlessly transition to a public cloud. The missing ethernet layer and advanced control plane protocols were critical in many network designs.
  • New cost models: The dynamic pay-as-you-go usage-based cost models of the public clouds are not compatible with established approaches built around fixed cost circuits and 5-year depreciation. Network solutions are often architected with financial constraints, and accordingly, different architectural approaches are sensible in the cloud.
  • New security risks: Securing public clouds with true zero trust and least-privilege demands mature operating processes and automation, and familiarity with cloud-specific policies and IAM controls.
  • Multi-vendor: Oftentimes enterprise networks have used single-vendor sourcing to facilitate interoperability, operational efficiency, and targeted hiring and training. Operating a network that extends beyond a single cloud, into other clouds or on-premises environments, is a multi-vendor scenario.

Nefeli considered all these problems and the tensions between different customer perspectives to identify where the problem should be solved.

Trains, planes, and automation

Consider a train system. To operate effectively it has three key layers:

  • tracks and trains
  • electronic signals
  • a company to manage the system and sell tickets.

A train system with good tracks, trains, and signals could still be operating below its full potential because its agents are unable to keep up with passenger demand. The result is that passengers cannot plan itineraries or purchase tickets.

The train company eliminates bottlenecks in process flow by simplifying the schedules, simplifying the pricing, providing agents with better booking systems, and installing automated ticket machines. Now the same fast and reliable infrastructure of tracks, trains, and signals can be used to its full potential.

Solve the right problem

In networking, there are an analogous set of three layers, called the networking planes:

  • Data Plane: the network paths that transport data (in the form of packets) from source to destination.
  • Control Plane: protocols and logic that change how packets are steered across the data plane.
  • Management Plane: the configuration and monitoring interfaces for the data plane and control plane.

In public cloud networks, these layers map to:

  • Cloud Data Plane: The underlying cables and devices are exposed to users as the Virtual Private Cloud (VPC) or Virtual Network (VNet) service that includes subnets, routing tables, security groups/ACLs and additional services such as load-balancers and VPN gateways.
  • Cloud Control Plane: In place of distributed protocols, the cloud control plane is a software defined network (SDN) that, for example, programs static route tables. (There is limited use of traditional control plane protocols, such as BGP to interface with external networks and ARP to interface with VMs.)
  • Cloud Management Plane: An administrative interface with a UI and API which allows the admin to fully configure the data and control planes. It also provides a variety of monitoring and logging capabilities that can be enabled and integrated with 3rd party systems.
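
To make this mapping concrete, here is a minimal sketch (using AWS and boto3 purely as an illustration; the route table ID, destination CIDR, and transit gateway ID are placeholder assumptions) of a management plane API call that asks the cloud control plane to program a static route into the data plane:

    import boto3

    # Management plane: the administrative API we call as operators
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Control plane: the cloud SDN receives this intent and programs the
    # corresponding forwarding state into the data plane.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",      # placeholder route table
        DestinationCidrBlock="10.20.0.0/16",       # placeholder destination
        TransitGatewayId="tgw-0123456789abcdef0",  # placeholder next hop
    )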

Like our train example, most of the problems that our customers experience with cloud networking are in the third layer: the management plane.

Nefeli simplifies, unifies, and automates cloud network management and operations.

Avoid cost and complexity

One common approach to tackle management problems in cloud networks is introducing Virtual Network Functions (VNFs), which are virtual machines (VMs) that do packet forwarding, in place of native cloud data plane constructs. Some VNFs are routers, firewalls, or load-balancers ported from a traditional network vendor’s hardware appliances, while others are software-based proxies often built on open-source projects like NGINX or Envoy. Because VNFs mimic their physical counterparts, IT teams could continue using familiar management tooling, but VNFs have downsides:

  • VMs do not have custom network silicon and so instead rely on raw compute power. The VM is sized for the peak anticipated load and then typically runs 24x7x365. This drives a high cost of compute regardless of the actual utilization.
  • High-availability (HA) relies on fragile, costly, and complex network configuration.
  • Service insertion — the configuration to put a VNF into the packet flow — often forces packet paths that incur additional bandwidth charges.
  • VNFs are typically licensed similarly to their on-premises counterparts and are expensive.
  • VNFs lock in the enterprise and can prevent it from benefiting from improvements in the cloud’s native data plane offerings.

For these reasons, enterprises are turning away from VNF-based solutions and increasingly looking to rely on the native network capabilities of their cloud service providers. The built-in public cloud networking is elastic, performant, robust, and priced on usage, with high-availability options integrated and backed by the cloud provider’s service level agreement.

In our train example, the tracks and trains are good. Likewise, the cloud network data plane is highly capable. Changing the data plane to solve management plane problems is the wrong approach. To make this work at scale, organizations need a solution that works together with the native network capabilities of cloud service providers.

Nefeli leverages native cloud data plane constructs rather than third party VNFs.

Introducing Magic Cloud Networking

The Nefeli team has joined Cloudflare to integrate cloud network management functionality with Cloudflare One. This capability is called Magic Cloud Networking and with it, enterprises can use the Cloudflare dashboard and API to manage their public cloud networks and connect with Cloudflare One.

End-to-end

Just as train providers are focused only on completing train journeys in their own network, cloud service providers deliver network connectivity and tools within a single cloud account. Many large enterprises have hundreds of cloud accounts across multiple cloud providers. In an end-to-end network this creates disconnected networking silos which introduce operational inefficiencies and risk.

Imagine you are trying to organize a train journey across Europe, and no single train company serves both your origin and destination. You know they all offer the same basic service: a seat on a train. However, your trip is difficult to arrange because it involves multiple trains operated by different companies with their own schedules and ticketing rates, all in different languages!

Magic Cloud Networking is like an online travel agent that aggregates multiple transportation options, books multiple tickets, facilitates changes after booking, and then delivers travel status updates.

Through the Cloudflare dashboard, you can discover all of your network resources across accounts and cloud providers and visualize your end-to-end network in a single interface. Once Magic Cloud Networking discovers your networks, you can build a scalable network through a fully automated and simple workflow.

Resource inventory shows all configuration in a single and responsive UI

Taming per-cloud complexity

Public clouds are used to deliver applications and services. Each cloud provider offers a composable stack of modular building blocks (resources) that start with the foundation of a billing account and then add on security controls. The next foundational layer, for server-based applications, is VPC networking. Additional resources are built on the VPC network foundation until you have compute, storage, and network infrastructure to host the enterprise application and data. Even relatively simple architectures can be composed of hundreds of resources.

The trouble is, these resources expose abstractions that are different from the building blocks you would use to build a service on prem, the abstractions differ between cloud providers, and they form a web of dependencies with complex rules about how configuration changes are made (rules which differ between resource types and cloud providers). For example, say I create 100 VMs, and connect them to an IP network. Can I make changes to the IP network while the VMs are using the network? The answer: it depends.

Magic Cloud Networking handles these differences and complexities for you. It configures native cloud constructs such as VPN gateways, routes, and security groups to securely connect your cloud VPC network to Cloudflare One without having to learn each cloud’s incantations for creating VPN connections and hubs.

Continuous, coordinated automation

Returning to our train system example, what if the railway maintenance staff find a dangerous fault on the railroad track? They manually set the signal to a stop light to prevent any oncoming trains using the faulty section of track. Then, what if, by unfortunate coincidence, the scheduling office is changing the signal schedule, and they set the signals remotely which clears the safety measure made by the maintenance crew? Now there is a problem that no one knows about and the root cause is that multiple authorities can change the signals via different interfaces without coordination.

The same problem exists in cloud networks: configuration changes are made by different teams using different automation and configuration interfaces across a spectrum of roles such as billing, support, security, networking, firewalls, database, and application development.

Once your network is deployed, Magic Cloud Networking monitors its configuration and health, enabling you to be confident that the security and connectivity you put in place yesterday is still in place today. It tracks the cloud resources it is responsible for, automatically reverting drift if they are changed out-of-band, while allowing you to manage other resources, like storage buckets and application servers, with other automation tools. And, as you change your network, Cloudflare takes care of route management, injecting and withdrawing routes globally across Cloudflare and all connected cloud provider networks.

Magic Cloud Networking is fully programmable via API, and can be integrated into existing automation toolchains.

The interface warns when cloud network infrastructure drifts from intent

Ready to start conquering cloud networking?

We are thrilled to introduce Magic Cloud Networking as another pivotal step to fulfilling the promise of the Connectivity Cloud. This marks our initial stride in empowering customers to seamlessly integrate Cloudflare with their public clouds to get securely connected, stay securely connected, and gain flexibility and cost savings as they go.

Join us on this journey for early access: learn more and sign up here.

Broadcom VMware Ends Free VMware vSphere Hypervisor Closing an Era

Post Syndicated from Patrick Kennedy original https://www.servethehome.com/broadcom-vmware-ends-free-vmware-vsphere-hypervisor-closing-an-era/

Broadcom ended the free VMware vSphere Hypervisor era, relegating VMware admins to a path similar to that of mainframe admins

The post Broadcom VMware Ends Free VMware vSphere Hypervisor Closing an Era appeared first on ServeTheHome.

Using one-click unsubscribe with Amazon SES

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/using-one-click-unsubscribe-with-amazon-ses/

Gmail and Yahoo have announced new requirements for bulk senders that take effect in February 2024. The requirements aim to reduce delivery of malicious or unwanted email to the users of these mailbox providers. We recommend that Amazon SES senders who operate outside of the SES sandbox assume these bulk sender requirements apply to them.

Gmail’s FAQ and Yahoo’s FAQ both clarify that the one-click unsubscribe requirement will not be enforced until June 2024 as long as the bulk sender has a functional unsubscribe link clearly visible in the footer of each message.

This blog presents a reference architecture for Amazon SES senders who independently manage email subscriptions outside of Amazon SES. Alternatively, Amazon SES senders can employ our native subscription management capability as part of their compliance with the Gmail and Yahoo bulk sender requirements.  Note that the scope of Gmail and Yahoo’s bulk sender requirements extends beyond enabling an easy unsubscribe method.  Read our blogs on email authentication and managing spam complaints for more information that will help you successfully operate as a bulk sender with Amazon SES.

Email headers contain metadata that describes the content, sender, relay path, destination, and other elements of an email. The bulk sender easy unsubscription requirement references use of the List-Unsubscribe email header (RFC2369) and List-Unsubscribe-Post email header (RFC8058). The List-Unsubscribe header should appear first, followed by List-Unsubscribe-Post.

  • List-Unsubscribe: <https://nutrition.co/?address=x&topic=x>, <mailto:unsubscribe@nutrition.co?subject=TopicUnsubscribe>
  • List-Unsubscribe-Post: List-Unsubscribe=One-Click

These headers enable email clients and inbox providers to display an unsubscribe link at the top of the email if they support it. This could take the form of a menu item, push button, or another user interface element to simplify the user experience – see the Gmail client screenshot below.

gmail-inbox

Unsubscribing can take place from the email footer by clicking on a hyperlink, and/or from an unsubscribe link that mailbox providers render. These different unsubscribe methods can be custom-built or provided by Amazon SES.

  • Unsubscribe method footer: An unsubscribe link in the email footer, which redirects recipients to a landing page, where they can unsubscribe or edit their communication preferences.
  • Unsubscribe method header: A hyperlink that is rendered by the mailbox provider based on the List-Unsubscribe email header. Recipients can use this link to unsubscribe from that sender.
  • Amazon SES unsubscribe method: The Amazon SES subscription management feature, which provides subscription management via the List-Unsubscribe header and ListManagementOptions footer links.
  • Custom-built unsubscribe method: A custom-built unsubscribe link in the email footer and manually added List-Unsubscribe header.

The table below lists all unsubscribe method combinations, indicating if they are custom-built or provided by Amazon SES and whether they comply with the easy unsubscription requirement from Google and Yahoo.

Unsubscribe method | Amazon SES or custom-built | Complies with Gmail & Yahoo
Footer & Header    | Amazon SES                 | Yes
Footer & Header    | Custom                     | Yes
Header             | Custom                     | Yes
Footer             | Custom                     | Partial

If senders fail to comply with the easy unsubscription requirement, mailbox providers such as Gmail and Yahoo will start rejecting non-compliant emails.

Note: Gmail might not show the easy unsubscribe link. Gmail only shows the link when it trusts that the sender is honoring unsubscribe requests and not attempting to track recipients. We recommend senders continue to provide the unsubscribe link in an easy-to-find location in the body of the message.

Implementing the unsubscribe header has many benefits for you:

  • Reduces spam complaint rate: Email recipients will click on “Report as SPAM” if they find it difficult to unsubscribe. A high spam complaint rate makes mailbox providers more likely to block your sending. Making unsubscribe easier can improve deliverability.
  • It can increase the trust in your brand: The fact that it is easy for recipients to unsubscribe could be seen as evidence that the content is valuable enough that the company believes people will want to stay subscribed.
  • Reduces issues with false suppression: Senders that rely solely on account-level suppression lists could suppress all email sending to an address even though the recipient may wish to receive other types of email from the account. Offering an easy unsubscribe method allows recipients to indicate which type of email they would like to receive and not receive based on topic or category.

There are two types of list-unsubscribe options:

  • Mailto: unsubscribe requests come in the form of an email sent from the mailbox provider to the email address specified in the List-Unsubscribe header. The process of managing unsubscribe emails can be automated with SES inbound.
  • URL unsubscribe link: redirects recipients to an unsubscribe landing page, where they can further edit their communication preferences. By adding the List-Unsubscribe-Post email header, senders can provide recipients with a one-click unsubscribe experience that doesn’t require them to visit a landing page.

The mailto option is supported by many mailbox providers, and we recommend including it in the List-Unsubscribe email header in addition to the URL, alongside the unsubscribe link in the email footer.

One-click unsubscribe for Amazon SES

This section guides you through using the Amazon SES V2 SendEmail API operation for email sending and describes how to use other AWS services to manage each kind of unsubscribe request.

The architecture covers both easy unsubscribe options, mailto and URL, because not all mailbox providers support the List-Unsubscribe-Post header. The architecture assumes that Amazon SES has email receiving enabled for the unsubscribe email address used in the List-Unsubscribe mailto header and that your recipient preferences can be updated via an API.

The reference architecture diagram illustrates the AWS services used and how they interact with each other to process a recipient’s unsubscribe request:

  • AWS KMS: A managed service that makes it easy for you to create and control the cryptographic keys that are used to protect your data.
  • Amazon API Gateway: A fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
  • AWS Lambda: A compute service that runs your code in response to events and automatically manages the compute resources.
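
The flow below describes how the recipient’s email address and email topic are encrypted into the unsubscribe URL and mailto parameters. Purely as an illustration (the KMS key alias, parameter names, and encoding are assumptions, not part of the reference implementation), that encryption could look like this:

    import base64
    import urllib.parse

    import boto3

    kms = boto3.client("kms")

    def encrypt_param(value: str) -> str:
        """Encrypt a value with AWS KMS and return a URL-safe token."""
        ciphertext = kms.encrypt(
            KeyId="alias/unsubscribe-key",  # assumed KMS key alias
            Plaintext=value.encode("utf-8"),
        )["CiphertextBlob"]
        return urllib.parse.quote(base64.urlsafe_b64encode(ciphertext).decode("utf-8"))

    address = encrypt_param("recipient@example.com")  # placeholder recipient
    topic = encrypt_param("Newsletter")               # placeholder topic
    unsubscribe_url = f"https://nutrition.co/?address={address}&topic={topic}"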

The first part of the process is described in detail below:

email-sending-flow

  1. Compliant emails should include the List-Unsubscribe and List-Unsubscribe-Post headers. This can be achieved with the Amazon SES SendEmail V2 API operation. Using the MIME standard, build a MIME message containing the headers, subject, and body. The MIME message goes in the SES V2 SendEmail API request body under the Content => Raw field – see the code example below. Amazon SES is planning to extend the SendEmail V2 API to natively support unsubscribe email headers. The unsubscribe email address and URL contain the recipient’s email address and email subject parameters, which are encrypted using AWS Key Management Service. These parameters are used later to identify and unsubscribe the recipient from a specific topic.
    1. The email domain used to send emails needs to be first verified successfully – see here how to create and verify identities in SES.
    2. Gmail uses the Friendly From value to populate the unsubscribe pop-up message. Friendly From is the part of the From header that is displayed to the recipient (not the email address): “To stop getting messages like this one, go to the <Friendly From> website to unsubscribe. Learn more.” If you see Unknown or experience other issues, ensure that the From header of your messages conforms to RFC5322.
      
        # Build the MIME message that carries the one-click unsubscribe headers
        import boto3
        from email.mime.multipart import MIMEMultipart
        from email.mime.text import MIMEText

        sesv2 = boto3.client('sesv2')

        msg = MIMEMultipart()
        msg.add_header('List-Unsubscribe', '<https://nutrition.co/?address=x&topic=x>, <mailto:[email protected]?subject=TopicUnsubscribe>')
        msg.add_header('List-Unsubscribe-Post', 'List-Unsubscribe=One-Click')
        msg.attach(MIMEText("Welcome to Nutrition.co", 'plain'))
        msg['Subject'] = "Welcome to Nutrition.co"

        # Send the raw MIME message with the SES V2 SendEmail API operation
        response = sesv2.send_email(
            FromEmailAddress='Nutrition.co <[email protected]>',
            Destination={'ToAddresses': ['[email protected]']},
            Content={
                'Raw': {
                    'Data': msg.as_string()
                },
            },
            ConfigurationSetName='ConfigSet'
        )
    3. Amazon Pinpoint senders need to use the Custom channel instead of Amazon Pinpoint’s native email channel. The Custom channel gives you the flexibility to invoke an AWS Lambda function and execute custom code, such as calling Amazon Pinpoint’s send_messages API operation. With send_messages you can specify an endpoint as the recipient and add the email content and the List-Unsubscribe and List-Unsubscribe-Post headers in a MIME message under the RawEmail => Data field – see the code example below:
        # Build the same MIME message and send it through an Amazon Pinpoint custom channel
        import boto3
        from email.mime.multipart import MIMEMultipart
        from email.mime.text import MIMEText

        pinpoint = boto3.client('pinpoint')

        msg = MIMEMultipart()
        msg.add_header('List-Unsubscribe', '<https://nutrition.co/?address=x&topic=x>, <mailto:[email protected]?subject=TopicUnsubscribe>')
        msg.add_header('List-Unsubscribe-Post', 'List-Unsubscribe=One-Click')
        msg.attach(MIMEText("Welcome to Nutrition.co", 'plain'))
        msg['Subject'] = "Welcome to Nutrition.co"

        endpoint_id = "endpoint_id"
        application_id = "application_id"

        # Address the message to a Pinpoint endpoint and supply the raw MIME content
        response = pinpoint.send_messages(
            ApplicationId=application_id,
            MessageRequest={
                'Endpoints': {
                    endpoint_id: {}
                },
                'MessageConfiguration': {
                    'EmailMessage': {
                        'FromAddress': 'Nutrition.co <[email protected]>',
                        'RawEmail': {
                            'Data': msg.as_string()
                        }
                    }
                }
            }
        )
  2. The email recipients whose mailbox provider supports List-Unsubscribe, such as Gmail & Yahoo, will see an Unsubscribe hyperlink next to the sender details as shown in the screenshot below.

gmail-inbox

So far, we have talked about how to craft and employ the headers for presenting mail recipients with an easy unsubscribe option.  In the following sections, we’ll walk through the two options for sending the unsubscribe request back to the sender.

The first option uses only the List-Unsubscribe header and only specifies the mailto email address to receive unsubscribe requests. The second option uses both the List-Unsubscribe and the List-Unsubscribe-Post headers. The unsubscribe requests are made with a POST API call to an endpoint provided in the List-Unsubscribe header.

When the recipient clicks on the Unsubscribe call to action next to the sender’s information, a pop-up appears asking for final confirmation using either option – see screenshot below.

unsubscribe-pop-up

Scenario – List-Unsubscribe

list-unsubscribe-scenario

  1. The recipient clicks on the Unsubscribe call to action next to the sender’s details and again on Unsubscribe on the pop-up.
  2. The mailbox provider sends an email to the email address specified in the List-Unsubscribe => mailto header. Amazon SES can be configured to receive emails for the unsubscribe email address and to invoke an AWS Lambda function through a receipt rule with the Invoke Lambda function action.
  3. An AWS Lambda function gets invoked. The payload contains all email headers and omits the email body as well as any attachments. The AWS Lambda function uses the AWS KMS key to decrypt the email subject, which contains the topic the recipient wants to unsubscribe from. Depending on where your recipient preferences are stored, you can expand the AWS Lambda function code to update the recipient’s communication preferences – a sketch of such a function follows.
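
A minimal sketch of that function is shown below, assuming the SES receipt rule invokes it directly; the encrypted-subject format, KMS usage, and the update_preferences helper are assumptions made for illustration:

    import base64

    import boto3

    kms = boto3.client("kms")

    def update_preferences(address: str, topic: str, subscribed: bool) -> None:
        # Assumed helper: integrate with wherever recipient preferences are stored.
        print(f"{address}: {topic} subscribed={subscribed}")

    def handler(event, context):
        # SES receipt rules with the Invoke Lambda function action pass the
        # message metadata (headers only, no body or attachments) in the event.
        mail = event["Records"][0]["ses"]["mail"]
        sender = mail["source"]                     # address requesting the unsubscribe
        subject = mail["commonHeaders"]["subject"]  # assumed to carry the encrypted topic

        topic = kms.decrypt(
            CiphertextBlob=base64.urlsafe_b64decode(subject)
        )["Plaintext"].decode("utf-8")

        update_preferences(address=sender, topic=topic, subscribed=False)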

Scenario – List-Unsubscribe & List-Unsubscribe-Post

list-unsubscribe-post-scenario

  1. The recipient clicks on the Unsubscribe call to action next to the sender’s details and again on Unsubscribe on the pop-up.
  2. The mailbox provider performs a POST API call to the URL provided in the List-Unsubscribe header. In this architecture, the URL is an Amazon API Gateway endpoint with an AWS Lambda integration.
  3. An AWS Lambda function gets invoked, which uses the AWS KMS key to decrypt the email address and topic stored in the URL parameters. Depending on where your recipient preferences are stored, you can expand the AWS Lambda function code to update the recipient’s communication preferences. The code in the AWS Lambda function serves two purposes: 1) processing a POST request to unsubscribe the recipient, and 2) processing a GET request to redirect the recipient to a page on your website (Gmail specific). You can use a micro web framework like Flask to process unsubscribe requests and redirect recipients to a page of your website accordingly – see the sketch below.
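
For illustration, here is a minimal sketch of that function written as a bare API Gateway proxy-integration handler rather than a Flask app; the parameter names, KMS usage, redirect target, and update_preferences helper are assumptions:

    import base64
    import urllib.parse

    import boto3

    kms = boto3.client("kms")

    def update_preferences(address: str, topic: str, subscribed: bool) -> None:
        # Assumed helper: integrate with wherever recipient preferences are stored.
        print(f"{address}: {topic} subscribed={subscribed}")

    def decrypt_param(value: str) -> str:
        """Reverse the URL-safe encoding and KMS encryption applied at send time."""
        ciphertext = base64.urlsafe_b64decode(urllib.parse.unquote(value))
        return kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"].decode("utf-8")

    def handler(event, context):
        params = event.get("queryStringParameters") or {}
        address = decrypt_param(params["address"])
        topic = decrypt_param(params["topic"])

        if event["httpMethod"] == "POST":
            # One-click unsubscribe request sent by the mailbox provider.
            update_preferences(address=address, topic=topic, subscribed=False)
            return {"statusCode": 200, "body": "You have been unsubscribed."}

        # GET request (Gmail "Go to website" flow): redirect to a preference page.
        return {
            "statusCode": 302,
            "headers": {"Location": "https://nutrition.co/preferences"},
            "body": "",
        }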

In Gmail, to view the Go to website call to action, recipients need to first click Unsubscribe and then click on Unsubscribe again – see the diagram below.

unsubscribe-flow-gmail

Conclusion

In this blog you learned how to configure Amazon SES to manage one-click unsubscribe requests when not using SES’s subscription management feature. The reference architecture shows how to structure and add the List-Unsubscribe and List-Unsubscribe-Post email headers when sending emails, as well as how to manage the unsubscribe requests generated from these email headers. In addition to the List-Unsubscribe and List-Unsubscribe-Post email headers, we recommend that you continue using the footer unsubscribe link.

Easy unsubscribe benefits both the sender and recipient. It is one of the Gmail and Yahoo’s bulk sender requirements announced back in October 2023. The one-click unsubscribe requirement will not be enforced until June 2024 as long as the bulk sender has a functional unsubscribe link clearly visible in the footer of each message.

How to Build a Compliant SMS Opt-In Process With Amazon Pinpoint

Post Syndicated from Tyler Holmes original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-build-a-compliant-sms-opt-in-process-with-amazon-pinpoint/

SMS messaging is a great way to stay in touch with your customers and send them timely, relevant messages. However, it’s always required to get their permission before you start sending them texts. This is known as opting in.

There are a few different ways to opt users in to your SMS program. One common method is to have them sign up on your website or in your app. You can also collect opt-ins at in-person events or via phone call through your customer service team, though you’re not limited to just those options.

No matter which method you choose, it’s important to make sure that your opt-in process is clear, concise, and compliant with all applicable local laws and regulations in the countries that you are sending to. Here are some best practices:

  • Get explicit consent. Explicit consent is the intentional action taken by an end-user to request a specific message from your service.
  • Provide clear instructions. Tell users how to opt-in, what they are opting into, and how to opt-out of your program. Be sure to include your contact information at the opt-in location in case they have any questions or concerns.
  • Give users the option to choose what kind of messages they want to receive. For example, you might allow them to opt-in to OTP/2FA messages, shipping notifications, or both.
  • Respect users’ privacy. Never sell or share users’ phone numbers with third parties without their permission. 3rd party data sharing is generally considered a prohibited practice by mobile carriers and violates privacy regulations in many countries.
  • Make it easy to opt-out. Users should be able to opt-out of your program at any time by replying with a simple text message, such as “STOP.” See additional relevant documentation related to this: Opting out and Self-managed opt-outs.

The above will help you build a strong audience of engaged subscribers who want to hear from you and improve your chances in successfully registering for a dedicated number. By following these best practices, you can ensure that your SMS opt-in process is compliant and effective.

What carriers require for a compliant opt-in workflow and call-to-action

The primary purpose of the opt-in workflow is to demonstrate that the end-user explicitly consents to receive text messages and understands the nature of the program. Your application is being reviewed by a 3rd party reviewer and sometimes multiple 3rd party reviewers for a single registration, so make sure to provide clear and thorough information about how your end-users opt-in to your SMS service and any associated fees or charges. If the reviewer cannot determine how your opt-in process works or if it is not compliant then your application will be denied and returned. It is important to note that Amazon does not review or approve your use cases and that it’s a telecom industry standard in most countries for 3rd parties to review and approve your use case prior to sending.

Note: If you have a use case that is internal to your business, you are still required to demonstrate explicit opt-in consent from the recipients. There are no exceptions to having an opt-in workflow and explicit consent is always required.

If your opt-in process requires a login, is not yet public, is a verbal opt-in, or occurs on printed forms or fliers, then make sure to thoroughly document how this process is completed by the end-user receiving messages — remember, these are 3rd party reviewers, and if they’re unable to access where your end-users opt-in, they will require thorough information via other means like text or screenshots. Provide a screenshot of the Call to Action (CTA) in such cases. If the consent is being asked for and supplied verbally, as in a contact center situation, make sure to provide the verbal scripts so the entire CTA is shown. Host any screenshots on a publicly accessible website (like S3, OneDrive, or Google Drive) and provide the URL when you submit (NOTE: the toll-free number registration process supports attachments and does not require a public URL to be included). Regardless of the medium used to collect end-user information (e.g., webform, point of sale, fliers, or verbal opt-ins), the requirements are the same. In the case of online and printed materials, they would be shown as text to the end-users. In the case of verbal opt-ins (i.e., on the phone), the information below would be verbally read to the end-user.

Call-to-action/opt-in requirements

The following items are the minimum that must be presented to an end-user at the time of opt-in to ensure your SMS program is compliant:

  • Program (brand) name
  • Message frequency disclosure. (example: “Message frequency varies” or “One message per login”)
  • Customer care contact information (example: “Text HELP or call 1-800-111-2222 for support.”)
  • Opt-out information (example: “Text STOP to opt-out of future messages.”)
  • Include “Message and data rates may apply” disclosure.
  • Link to a publicly accessible Terms & Conditions page
  • Link to a publicly accessible Privacy Policy page

Now let’s break the above bullet points down into more detail:

Program, service, brand name

All SMS originator types that require registration must disclose the program name, product description, or both in service messages, on the call-to-action, and in the terms and conditions. The program name is the sponsor of the messaging program, often the brand name or company name associated with the sending use case. The product description describes the product advertised by the program.

Publicly accessible terms & conditions page

The terms should be live and publicly accessible. For verbal scripts, a URL must be read off to the end-user enrolling in the SMS program, or the comprehensive terms must be directly included in the script. You should provide a compliant screenshot, link, or mockup of the SMS Terms of Service in the registration submission.

Below is a copy of the boilerplate terms of service that cover minimum requirements from the carriers:

  1. {Program name}
  2. {Insert program description here; this is simply a brief description of the kinds of messages users can expect to receive when they opt-in.}
  3. You can cancel the SMS service at any time. Just text “STOP” to the short code. After you send the SMS message “STOP” to us, we will send you an SMS message to confirm that you have been unsubscribed. After this, you will no longer receive SMS messages from us. If you want to join again, just sign up as you did the first time and we will start sending SMS messages to you again.
  4. If you are experiencing issues with the messaging program you can reply with the keyword HELP for more assistance, or you can get help directly at {support email address or toll-free number}.
  5. Carriers are not liable for delayed or undelivered messages
  6. As always, message and data rates may apply for any messages sent to you from us and to us from you. You will receive {message frequency}. If you have any questions about your text plan or data plan, it is best to contact your wireless provider.
  7. If you have any questions regarding privacy, please read our privacy policy: {link to privacy policy}

Publicly accessible privacy policy page

Message Senders are responsible for protecting the privacy of Consumers’ information and must comply with applicable privacy law. Message Senders should maintain a privacy policy for all programs and make it accessible from the initial call-to-action. The privacy policy should be labeled clearly, and in all cases, terms and conditions and privacy policy disclosures must provide up-to-date, accurate information about program details and functionality. For verbal scripts, a URL must be read off to the end-user enrolling in the SMS program, or the comprehensive terms must be directly included in the script.

One of the key items carriers look for in a Privacy Policy is the sharing of end-user information with third-parties. If your privacy policy mentions data sharing or selling to non-affiliated third parties, there is a concern that customer data will be shared with third parties for marketing purposes.

Express consent is required for SMS; therefore, sharing data is prohibited. Privacy policies must specify that this data sharing excludes SMS opt-in data and consent. Privacy policies can be updated (or draft versions provided) where the practice of sharing personal data to third parties is expressly omitted from the number registration.

Example: “The above excludes text messaging originator opt-in data and consent; this information will not be shared with any third parties.”

Message frequency disclosure

The message frequency disclosure provides end-users an indication of how often they’ll receive messages from you. For example, a recurring messaging program might say “one message per week.” A one-time password or multi-factor authentication use case might say “message frequency varies” or “one message per login attempt”.

Customer care contact information

Customer care contact information must be clear and readily available to help Consumers understand program details as well as their status with the program. Customer care information should result in Consumers receiving help.

Numbers should always respond to customer care requests, regardless of whether the requestor is subscribed to the program. At a minimum, Message Senders must respond to messages containing the HELP keyword with the program name and further information about how to contact the Message Sender. SMS programs should promote customer care contact instructions at program opt-in and at regular intervals in content or service messages, at least once per month.

Example: “For more information, text ‘HELP’ or call 1-800-123-1234.”

Opt-Out Information

Opt-out mechanisms facilitate Consumer choice to terminate communications from text messaging programs. Message Senders should acknowledge and respect Consumers’ opt-out requests consistent with the following guidelines:

  • Message Senders should ensure that Consumers have the ability to opt-out at any time
  • Message Senders should support multiple mechanisms of opt-out, including: phone call, email, or text
  • Message Senders should acknowledge and honor all Consumer opt-out requests by sending one final opt-out confirmation message to notify the Consumer that they have opted-out successfully. No further messages should be sent following the confirmation message.

Message Senders should include opt-out information in the call-to-action, terms and conditions, and opt-in confirmation.

If a 2FA/OTP program requires end-users to opt-in and request an OTP from the same CTA, and it is compliant with all applicable regulations, then the sender does not need to explicitly opt-out that number if the user texts “STOP” to the business’s number. However, the sender must still respond with a compliant opt-out response.

See the following Amazon Pinpoint blog post on How to Manage SMS Opt-Outs with Amazon Pinpoint

“Message and data rates may apply” disclosure

All SMS programs must display the disclosure verbatim, or read it out loud for verbal opt-ins: “Message and data rates may apply”. By requiring the disclosure, US mobile carriers are helping to ensure that consumers are aware of the potential costs of sending and receiving text messages, and that they have consented to receive those messages before they are sent.


SMS Opt-Ins for Independent Software Vendors (ISVs)

Definitions

In this section, we’ll outline the terms we use, to help better explain each party, and the requirements.

ISV: ISVs are positioned between Amazon Pinpoint and the ISV’s end-business customers. While they may operate differently, and/or offer different services, their requirements for SMS program registrations are largely the same.

End Business: The End Business is how we refer to your ISV customers. This is generally the entity that creates the messaging content, distributes it through your platform, and interacts with their end-users (message recipients).

Note: In some rare cases, an ISV platform can be considered the end business if they control content via templates and collect and manage opt-ins in their entirety — meaning the ISV’s information will be used directly for the registration and the messages will be branded as such. If you are unsure, we recommend including the information for the entity (your customer) that is engaging with the opted-in handset with registration. ISVs who don’t include this information (if it is required) risk their verification request being rejected.

End-User: The message recipient is considered the end-user. The person with the handset where messages terminate. As an independent software vendor (ISV), you need to comply with all applicable laws and regulations when it comes to SMS opt-ins. This means that your end business(es) need to get explicit consent from their end-users before text messages start being sent and give end-users the option to opt-out of their program at any time. You also need to provide them with a registered and approved phone number to send their SMS messages to ensure that they are delivered reliably and not flagged as spam.

When does an ISV have to submit each end business?

SMS program registration requires end-user business information, not ISV information. This means the ISV needs to provide a mechanism for their end businesses to provide their information to be submitted for registration. For ISVs or aggregators who provide messaging services to businesses, it’s expected that the information provided represents the entity (your customer) that is sending messages to the opted-in handset.

NOTE: Amazon uses this information in accordance with all applicable obligations, and only to verify the end-user is a legitimate business. Amazon will not contact the end-business user with the information provided.

Submissions that are missing information or are populated with ISV/aggregator information may be rejected. Exceptions may apply when the use case clearly showcases that the ISV manages opt-in mechanisms, is the sole message content creator, and the messages clearly come from the ISV, not their end businesses. For example, if the ISV owns a web application that requires their end-customers to enroll into OTP.

If you are unsure, we recommend including the information for the entity (your customer) that is engaging with the opted-in handset with registration. ISVs who don’t include this information (if it is required) risk their verification request being rejected.

In conclusion

Getting user consent through a compliant opt-in process is crucial for any SMS messaging program. Key elements include clearly disclosing the program details, providing easy opt-out methods, having accessible terms of service and privacy policies, and adhering to all applicable regulations. For ISVs enabling businesses to send SMS messages, it’s important they provide a way for each end business to submit their own information for registration and comply with the requirements. By following SMS best practices around opt-ins, businesses can build trust with subscribers and ensure deliverability of their text messaging campaigns.

How to Migrate Your SMS Program to Amazon Pinpoint

Post Syndicated from Tyler Holmes original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-migrate-your-sms-program-to-amazon-pinpoint/

In the fast-paced realm of communication, where every second counts and attention spans are shorter than ever, the choice of channels that you use to deliver your message to your recipients is critical. While we often find ourselves swept away by the allure of flashy social media platforms and sleek email interfaces, it’s the unassuming text message, or SMS, that continually proves to be one of the most effective options. According to Statista, there are over 5 billion mobile internet users globally, amounting to over 60% of the earth’s population of ~8 billion. SMS obviously provides an expansive reach that can help businesses connect with a diverse audience, but in order to do that at scale, you need to use a service like Amazon Pinpoint that facilitates the ability to send SMS to over 240 countries and/or regions around the world. If you have a current SMS provider and are considering Pinpoint SMS for its global reach, scalability, cost-effective pricing, and demonstrably high deliverability, this guide will walk you through how to migrate from your current provider.

There are several common reasons our customers give us when considering a migration. Don’t worry if your situation doesn’t fit into a neat box, we help customers navigate the dynamic landscape of SMS that is constantly evolving. Let’s dive deep into each of the below to highlight some common things we hear from our customers.

  • My current provider doesn’t deliver to countries I want to send to
  • My current provider is more expensive than Pinpoint pricing
    • Our pricing is available on the public pricing page here. Each country has its own cost associated with it, so enter the countries you would like to see pricing for. These prices are per message sent, so if you are planning on sending to multiple countries, factor in the types of messages that you will want to send as well as the countries. If your use case includes 2-way communication, make sure to factor the number of inbound messages you expect into your calculations.
      • NOTE: Depending on the language the available characters per message varies, which can affect your calculations on cost. See here for an explanation
  • My current provider doesn’t have features that Pinpoint has
    • Among many other features Pinpoint has the ability to send over multiple channels, including: SMS, Email, Push/In-App, Voice, Over the Top (OTT) services such as WhatsApp, as well as interact with third-party APIs giving you the flexibility to send to many other channels.
  • My current provider is not native to AWS
    • Pinpoint, being native to the AWS Cloud, boasts the capability to seamlessly integrate with a wide array of services, including AI/ML offerings such as Amazon Personalize, Amazon Bedrock, and Amazon SageMaker, among others. This means you can leverage various AWS services to create innovative solutions that enhance and optimize the communications sent through Pinpoint.
  • My current provider does not have good deliverability
    • Price is not the only factor to consider when looking at SMS providers. If you find another provider with lower pricing make sure to ask about their deliverability to the countries you are wanting to send to. There is a big difference between sending an SMS at a low price, and actually delivering that SMS. We are happy to discuss deliverability with you, just reach out to your Account Manager if you have one or contact us to start a conversation about your migration.
  • I’m not happy with the customer support of my current provider
    • The SMS landscape is constantly changing and our SMS experts are here to help guide you through the process. Whether it’s regulatory changes, pricing changes, or creating complex architectures to support your needs. Reach out to your Account Manager if you have one or contact us to start a conversation about your migration and get your questions answered.

Regardless of your reason for considering migrating there are four scenarios that most of our customers find themselves in when beginning to plan for an SMS migration.

I have not sent SMS before but I would like to start sending through Pinpoint
Skip ahead to the section on “Checklist for Planning an SMS Migration” to start planning for sending SMS

I have number(s) (Also known as Originators, Origination Identities (OIDs), Toll-Free, 10DLC, Long Code, Short Code, and/or SenderID) with a different provider and I would like to move those to Pinpoint
The ability to “port” numbers from other providers is dependent on the type of originator, the vendor you procured them from, and the country that they support. You may need to get new originators so factor that into your timeline and reach out to your Account Manager to determine whether your originators are able to be ported over. Once you have done that, pull the reports for how much volume you are sending to each country with your current provider and then skip ahead to the section on “Checklist for Planning an SMS Migration” to start planning for sending SMS

I have a current provider but I would like to procure new numbers from Pinpoint
Pull the reports for how much volume you are sending to each country with your current provider and then skip ahead to the section on “Checklist for Planning an SMS Migration” to start planning for sending SMS

I have a current provider but would like to split traffic between them and Pinpoint
Pull the reports for how much volume you are sending to the countries you plan on migrating to AWS and then skip ahead to the section on “Checklist for Planning an SMS Migration” to start planning for sending SMS. Make sure that you consider how you will be managing opt-outs across two providers. Pinpoint offers centrally managed opt-outs but self-management is also an option. All Delivery Receipts/Reporting (DLRs) and inbound/outbound events can be streamed through Amazon Kinesis, Amazon CloudWatch, and/or Amazon Simple Notification Service (SNS) if you need to send those events to another location inside or outside of the AWS Cloud.
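
As an illustration of wiring up that event streaming with the V2 SMS and Voice API (the configuration set name, SNS topic ARN, and event types are placeholder assumptions), an SNS event destination could be created like this:

    import boto3

    sms = boto3.client("pinpoint-sms-voice-v2")

    # Stream delivery receipts and inbound-message events to an SNS topic
    sms.create_event_destination(
        ConfigurationSetName="sms-events",   # assumed configuration set
        EventDestinationName="sns-dlr-stream",
        MatchingEventTypes=["ALL"],          # or a narrower list of specific event types
        SnsDestination={"TopicArn": "arn:aws:sns:us-east-1:111122223333:sms-events"},
    )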

Checklist for Planning an SMS Migration

  • Setup a spreadsheet similar to the one outlined in this post
  • Identify your use case(s)
    • Note whether your use case is one-way or two-way
      • NOTE: Not all countries support 2-way communications, which is the ability to have the recipient send a message back to the OID.
      • NOTE: Sender ID also does not support 2-way communication so if you are planning on using Sender ID you will need to account for how to opt recipients out of future communications.
  • Identify your countries
  • Identify your volume per country
    • If you are already sending SMS with another provider pull a report over a representative time period.
  • Identify your throughput needs (Also referred to as Messages per Second, MPS, Transactions per Second, or TPS) for each country
    • Most origination identities are chosen for their ability to support a certain level of MPS, not volume, so if you have seasonality make sure to account for burst rates. There are quotas for the APIs that govern sending as well as quotas for the different types of originators.
  • Identify which origination identities you will need for each country using this guide
    • Make note of any countries/OIDs that require registration
    • Reach out to your Account Manager if you have one or contact us to start a conversation about your migration.
    • If you have OIDs you would like to migrate make sure you determine whether that is possible ASAP since your timelines could be affected by the outcome.

Make sure you give ample time for your migration. There are many entities involved in delivering SMS, from governments, to mobile carriers, to third-party registrars, and more, which means that timelines are not always within your control. Ask questions, take advantage of the expert resources we have at AWS, and the content we have produced around these topics.

Content to read

  • Review the countries and regions we support here
  • Use the format for aggregating information on your use cases outlined in this post here
  • Decide what origination IDs you will need here
  • Review the documentation for the V2 SMS and Voice API here (a minimal sending sketch follows this list)
  • Review the Pinpoint API and SendMessage here
  • Check out the support tiers comparison here
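
As a starting point for experimenting with the V2 SMS and Voice API referenced above, here is a minimal sending sketch; the phone numbers, origination identity, and configuration set name are placeholder assumptions:

    import boto3

    # The V2 SMS and Voice API is exposed through the pinpoint-sms-voice-v2 client
    sms = boto3.client("pinpoint-sms-voice-v2")

    response = sms.send_text_message(
        DestinationPhoneNumber="+14255550199",  # placeholder recipient
        OriginationIdentity="+18445550123",     # your toll-free, 10DLC, short code, or Sender ID
        MessageBody="Thanks for opting in to AnyCompany alerts. Reply STOP to opt out, HELP for help.",
        MessageType="TRANSACTIONAL",
        ConfigurationSetName="sms-events",      # assumed configuration set for event streaming
    )
    print(response["MessageId"])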

Building a generative AI Marketing Portal on AWS

Post Syndicated from Tristan Nguyen original https://aws.amazon.com/blogs/messaging-and-targeting/building-a-generative-ai-marketing-portal-on-aws/

Introduction

In the preceding entries of this series, we examined the transformative impact of Generative AI on marketing strategies in “Building Generative AI into Marketing Strategies: A Primer” and delved into the intricacies of Prompt Engineering to enhance the creation of marketing content with services such as Amazon Bedrock in “From Prompt Engineering to Auto Prompt Optimisation”. We also explored the potential of Large Language Models (LLMs) to refine prompts for more effective customer engagement.

Continuing this exploration, we will articulate how Amazon Bedrock, Amazon Personalize, and Amazon Pinpoint can be leveraged to construct a marketer portal that not only facilitates AI-driven content generation but also personalizes and distributes this content effectively. The aim is to provide a clear blueprint for deploying a system that crafts, personalizes, and distributes marketing content efficiently. This blog will guide you through the deployment process, underlining the real-world utility of these services in optimizing marketing workflows. Through use cases and a code demonstration, we’ll see these technologies in action, offering a hands-on perspective on enhancing your marketing pipeline with AI-driven solutions.

The Challenge with Content Generation in Marketing

Many companies struggle to streamline their marketing operations effectively, facing hurdles at various stages of the marketing operations pipeline. Below, we list the challenges at three main stages of the pipeline: content generation, content personalization, and content distribution.

Content Generation

Creating high-quality, engaging content is often easier said than done. Companies need to invest in skilled copywriters or content creators who understand not just the product but also the target audience. Even with the right talent, the process can be time-consuming and costly. Moreover, generating content at scale while maintaining quality and compliance with industry regulations is the key blocker for many companies considering adopting generative AI technologies in production environments.

Content Personalization

Once the content is created, the next hurdle is personalization. In today’s digital age, generic content rarely captures attention. Customers expect content tailored to their needs, preferences, and behaviors. However, personalizing content is not straightforward. It requires a deep understanding of customer data, which often resides in siloed databases, making it difficult to create a 360-degree view of the customer.

Content Distribution

Finally, even the most captivating, personalized content is ineffective if it doesn’t reach the right audience at the right time. Companies often grapple with choosing the appropriate channels for content distribution, be it email, social media, or mobile notifications. Additionally, ensuring that the content complies with various regulations and doesn’t end up in spam folders adds another layer of complexity to the distribution phase. Sending at scale requires paying attention to deliverability, security and reliability which often poses significant challenges to marketers.

By addressing these challenges, companies can significantly improve their marketing operations and empower their marketers to be more effective. But how can this be achieved efficiently and at scale? The answer lies in leveraging the power of Amazon Bedrock, Amazon Personalize, and Amazon Pinpoint, as we will explore in the following solution.

The Solution In Action

Before we dive into the details of the implementation, let’s take a look at the end result through the linked demo video.

Use Case 1: Banking/Financial Services Industry

You are a relationship manager working in the Consumer Banking department of a fictitious company called AnyCompany Bank. You are assigned a group of customers and would like to send out personalized and targeted communications to every member of this group via each customer’s channel of choice.

Behind the scenes, the marketer is utilizing Amazon Pinpoint to create the segment of customers they would like to target. The customers’ information and the marketer’s prompt are then fed into Amazon Bedrock to generate the marketing content, which is then sent to the customer via SMS and email using Amazon Pinpoint.
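
While the portal’s exact prompt and model choice are not shown here, a minimal sketch of the kind of Amazon Bedrock call behind this content generation might look like the following (the model ID, prompt, and inference parameters are assumptions):

    import json

    import boto3

    bedrock = boto3.client("bedrock-runtime")

    customer = {"FirstName": "Jane", "Segment": "ManagementOrRetired", "PreferredChannel": "EMAIL"}
    prompt = (
        "You are a marketing assistant for AnyCompany Bank. "
        f"Write a short, compliant email for this customer: {json.dumps(customer)}"
    )

    # Anthropic Claude v2 request format on Amazon Bedrock (assumed model and format)
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps({
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": 300,
            "temperature": 0.5,
        }),
    )
    print(json.loads(response["body"].read())["completion"])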

  • In the Prompt Iterator page, you can employ a process called “prompt engineering” to further optimize your prompt to maximize the effectiveness of your marketing campaigns. Please refer to this blog on the process behind engineering the prompt as well as how to apply an additional LLM model for auto-prompting. To get started, simply copy the sample banking prompt which has gone through the prompt engineering process in this page.
  • Next, you can either upload your customer group by uploading a .csv file (through “Importing a Segment”) or specify a customer group using pre-defined filter criteria based on your current customer database using Amazon Pinpoint.

UseCase1Segment

E.g.: The screenshot shows a sample filtered segment named ManagementOrRetired that only includes customers who are management or retirees.

  • Once done, you can log into the marketer portal and choose the relevant segment that you’ve just created within the Amazon Pinpoint console.

PinpointSegment

  • You can then preview the customers and their information stored in your Amazon Pinpoint’s customer database. Once satisfied, we’re ready to start generating content for those customers!
  • Click on the 1:1 Content Generator tab, and content is automatically generated for your first customer. Here, you can cycle through your customers one by one, and depending on each customer’s preferred language and channel, an email or SMS in the preferred language is automatically generated for them.
    • Generated SMS in English

PostiveSMS

    • A negative example showing proper prompt-engineering at work to moderate content. This happens if we try to insert data that does not make sense for the marketing content generator to output. In this case, the marketing generator refuses to output (justifiably) an advertisement for a 6-year-old on a secured instalment loan.

NegativeSMS

  • Finally, we choose to send the generated content via Amazon Pinpoint by clicking on “Send with Amazon Pinpoint”. In the back end, Amazon Pinpoint will orchestrate the sending of the email/SMS through the appropriate channels.
    • Alternatively, if the auto-generated content still did not meet your needs and you want to generate another draft, you can Disagree and try again.

Use Case 2: Travel & Hospitality

You are a marketing executive working for an online air ticketing agency. You’ve been tasked with promoting a specific flight from Singapore to Hong Kong for AnyCompany airline. You’d first like to identify which customers would be prime candidates to promote this flight leg to, and then send out hyper-personalized messages to them.

Behind the scenes, instead of using Amazon Pinpoint to manually define the segment, the marketer in this case is leveraging the AI/ML capabilities of Amazon Personalize to define the best group of customers to recommend the specific flight leg to. Similar to the above use case, the customers’ information and the LLM prompt are fed into Amazon Bedrock, which generates the marketing content that is eventually sent out via Amazon Pinpoint.

  • Similar to the above use case, you’d need to go through a prompt engineering process to ensure that the content the LLM model is generating will be relevant and safe for use. To get started quickly, go to the Prompt Iterator page, you can use the sample airlines prompt and iterate from there.
  • Your company offers many different flight legs, aggregated from many different carriers. You first filter down to the flight leg that you want to promote using the Filters on the left. In this case, we are filtering for flights originating from Singapore (SRCCity) and going to Hong Kong (DSTCity), operated by AnyCompany Airlines.

PersonalizeInstructions

  • Now, let’s choose the number of customers that you’d like to generate. Once satisfied, you choose to start the batch segmentation job.
  • In the background, Amazon Personalize generates a group of customers that are most likely to be interested in this flight leg based on past interactions with similar flight itineraries.
  • Once the segmentation job is finished as shown, you can fetch the recommended group of customers and start generating content for them immediately, similar to the first use case. A sketch of creating such a batch segmentation job programmatically follows this list.
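
For reference, a batch segmentation job like the one behind this step could also be created programmatically, as sketched below (the solution version ARN, S3 paths, and IAM role ARN are placeholder assumptions):

    import boto3

    personalize = boto3.client("personalize")

    # Ask Amazon Personalize for the users most likely to interact with the flight leg
    personalize.create_batch_segment_job(
        jobName="sin-hkg-flight-promo",
        solutionVersionArn="arn:aws:personalize:us-east-1:111122223333:solution/item-affinity/abcd1234",
        numResults=500,  # number of customers to return per input item
        jobInput={"s3DataSource": {"path": "s3://anycompany-personalize/input/flight-leg.json"}},
        jobOutput={"s3DataDestination": {"path": "s3://anycompany-personalize/output/"}},
        roleArn="arn:aws:iam::111122223333:role/PersonalizeBatchRole",
    )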

Setup instructions

The setup instructions and deployment details can be found in the GitHub link.

Conclusion

In this blog, we’ve explored the transformative potential of integrating Amazon Bedrock, Amazon Personalize, and Amazon Pinpoint to address the common challenges in marketing operations. By automating the content generation with Amazon Bedrock, personalizing at scale with Amazon Personalize, and ensuring precise content distribution with Amazon Pinpoint, companies can not only streamline their marketing processes but also elevate the customer experience.

The benefits are clear: time-saving through automation, increased operational efficiency, and enhanced customer satisfaction through personalized engagement. This integrated solution empowers marketers to focus on strategy and creativity, leaving the heavy lifting to AWS’s robust AI and ML services.

For those ready to take the next step, we’ve provided a comprehensive guide and resources to implement this solution. By following the setup instructions and leveraging the provided prompts as a starting point, you can deploy this solution and begin customizing the marketer portal to your business’ needs.

Call to Action

Don’t let the challenges of content generation, personalization, and distribution hold back your marketing potential. Deploy the Generative AI Marketer Portal today, adapt it to your specific needs, and watch as your marketing operations transform. For a hands-on start and to see this solution in action, visit the GitHub repository for detailed setup instructions.

Have a question? Share your experiences or leave your questions in the comment section.

About the Authors

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. At work, he specializes in technical implementation of communications services in enterprise systems and architecture/solutions design. In his spare time, he enjoys chess, rock climbing, hiking and triathlon.

Philipp Kaindl

Philipp Kaindl is a Senior Artificial Intelligence and Machine Learning Solutions Architect at AWS. With a background in data science and mechanical engineering, his focus is on empowering customers to create lasting business impact with the help of AI. Outside of work, Philipp enjoys tinkering with 3D printers, sailing and hiking.

Bruno Giorgini

Bruno Giorgini is a Senior Solutions Architect specializing in Pinpoint and SES. With over two decades of experience in the IT industry, Bruno has been dedicated to assisting customers of all sizes in achieving their objectives. When he is not crafting innovative solutions for clients, Bruno enjoys spending quality time with his wife and son, exploring the scenic hiking trails around the SF Bay Area.

Kafka on Kubernetes: Reloaded for fault tolerance

Post Syndicated from Grab Tech original https://engineering.grab.com/kafka-on-kubernetes

Introduction

Coban – Grab’s real-time data streaming platform – has been operating Kafka on Kubernetes with Strimzi in
production for about two years. In a previous article (Zero trust with Kafka), we explained how we leveraged Strimzi to enhance the security of our data streaming offering.

In this article, we are going to describe how we improved the fault tolerance of our initial design, to the point where we no longer need to intervene if a Kafka broker is unexpectedly terminated.

Problem statement

We operate Kafka in the AWS Cloud. For the Kafka on Kubernetes design described in this article, we rely on Amazon Elastic Kubernetes Service (EKS), the managed Kubernetes offering by AWS, with the worker nodes deployed as self-managed nodes on Amazon Elastic Compute Cloud (EC2).

To make our operations easier and limit the blast radius of any incidents, we deploy exactly one Kafka cluster for each EKS cluster. We also give a full worker node to each Kafka broker. In terms of storage, we initially relied on EC2 instances with non-volatile memory express (NVMe) instance store volumes for
maximal I/O performance. Also, each Kafka cluster is accessible beyond its own Virtual Private Cloud (VPC) via a VPC Endpoint Service.

Fig. 1 Initial design of a 3-node Kafka cluster running on Kubernetes.

Fig. 1 shows a logical view of our initial design of a 3-node Kafka on Kubernetes cluster, as typically run by Coban. The Zookeeper and Cruise-Control components are not shown for clarity.

There are four Kubernetes services (1): one for the initial connection – referred to as “bootstrap” – that redirects incoming traffic to any Kafka pod, plus one for each Kafka pod, so that clients can target each Kafka broker individually (a requirement to produce to or consume from a partition that resides on a particular Kafka broker). Four different listeners on the Network Load Balancer (NLB), listening on four different TCP ports, enable the Kafka clients to target either the bootstrap service or any particular Kafka broker they need to reach. This is very similar to what we previously described in Exposing a Kafka Cluster via a VPC Endpoint Service.

Each worker node hosts a single Kafka pod (2). The NVMe instance store volume is used to create a Kubernetes Persistent Volume (PV), attached to a pod via a Kubernetes Persistent Volume Claim (PVC).

Lastly, the worker nodes belong to Auto-Scaling Groups (ASG) (3), one by Availability Zone (AZ). Strimzi adds in node affinity to make sure that the brokers are evenly distributed across AZs. In this initial design, ASGs are not for auto-scaling though, because we want to keep the size of the cluster under control. We only use ASGs – with a fixed size – to facilitate manual scaling operation and to automatically replace the terminated worker nodes.

With this initial design, let us see what happens in case of such a worker node termination.

Fig. 2 Representation of a worker node termination. Node C is terminated and replaced by node D. However the Kafka broker 3 pod is unable to restart on node D.

Fig. 2 shows the worker node C being terminated along with its NVMe instance store volume C, and replaced (by the ASG) by a new worker node D and its new, empty NVMe instance store volume D. On start-up, the worker node D automatically joins the Kubernetes cluster. The Kafka broker 3 pod that was running on the faulty worker node C is scheduled to restart on the new worker node D.

Although the NVMe instance store volume C is terminated along with the worker node C, there is no data loss because all of our Kafka topics are configured with a minimum of three replicas. The data is poised to be copied over from the surviving Kafka brokers 1 and 2 back to Kafka broker 3, as soon as Kafka broker 3 is effectively restarted on the worker node D.

However, there are three fundamental issues with this initial design:

  1. The Kafka clients that were in the middle of producing or consuming to/from the partition leaders of Kafka broker 3 are suddenly facing connection errors, because the broker was not gracefully demoted beforehand.
  2. The target groups of the NLB for both the bootstrap connection and Kafka broker 3 still point to the worker node C. Therefore, the network communication from the NLB to Kafka broker 3 is broken. A manual reconfiguration of the target groups is required.
  3. The PVC associating the Kafka broker 3 pod with its instance store PV is unable to automatically switch to the new NVMe instance store volume of the worker node D. Indeed, static provisioning is an intrinsic characteristic of Kubernetes local volumes. The PVC is still in Bound state, so Kubernetes does not take any action. However, the actual storage beneath the PV does not exist anymore. Without any storage, the Kafka broker 3 pod is unable to start.

At this stage, the Kafka cluster is running in a degraded state with only two out of three brokers, until a Coban engineer intervenes to reconfigure the target groups of the NLB and delete the zombie PVC (this, in turn, triggers its re-creation by Strimzi, this time using the new instance store PV).

In the next section, we will see how we have managed to address the three issues mentioned above to make this design fault-tolerant.

Solution

Graceful Kafka shutdown

To minimise the disruption for the Kafka clients, we leveraged the AWS Node Termination Handler (NTH). This component provided by AWS for Kubernetes environments is able to cordon and drain a worker node that is going to be terminated. This draining, in turn, triggers a graceful shutdown of the Kafka
process by sending a polite SIGTERM signal to all pods running on the worker node that is being drained (instead of the brutal SIGKILL of a normal termination).

The termination events of interest that are captured by the NTH are:

  • Scale-in operations by an ASG.
  • Manual termination of an instance.
  • AWS maintenance events, typically EC2 instances scheduled for upcoming retirement.

This suffices for most of the disruptions our clusters can face in normal times and our common maintenance operations, such as terminating a worker node to refresh it. Only sudden hardware failures (AWS issue events) would fall through the cracks and still trigger errors on the Kafka client side.

The NTH comes in two modes: Instance Metadata Service (IMDS) and Queue Processor. We chose to go with the latter as it is able to capture a broader range of events, widening the fault tolerance capability.

Scale-in operations by an ASG

Fig. 3 Architecture of the NTH with the Queue Processor.

Fig. 3 shows the NTH with the Queue Processor in action, and how it reacts to a scale-in operation (typically triggered manually, during a maintenance operation):

  1. As soon as the scale-in operation is triggered, an Auto Scaling lifecycle hook is invoked to pause the termination of the instance.
  2. Simultaneously, an Auto Scaling lifecycle hook event is issued to an Amazon Simple Queue Service (SQS) queue. In Fig. 3, we have also materialised EC2 events (e.g. manual termination of an instance, AWS maintenance events, etc.) that transit via Amazon EventBridge to eventually end up in the same SQS queue. We will discuss EC2 events in the next two sections.
  3. The NTH, a pod running in the Kubernetes cluster itself, constantly polls that SQS queue.
  4. When a scale-in event pertaining to a worker node of the Kubernetes cluster is read from the SQS queue, the NTH sends to the Kubernetes API the instruction to cordon and drain the impacted worker node.
  5. On draining, Kubernetes sends a SIGTERM signal to the Kafka pod residing on the worker node.
  6. Upon receiving the SIGTERM signal, the Kafka pod gracefully migrates the leadership of its leader partitions to other brokers of the cluster before shutting down, in a transparent manner for the clients. This behaviour is ensured by the controlled.shutdown.enable parameter of Kafka, which is enabled by default.
  7. Once the impacted worker node has been drained, the NTH eventually resumes the termination of the instance.

Strimzi also comes with a terminationGracePeriodSeconds parameter, which we have set to 180 seconds to give the Kafka pods enough time to migrate all of their partition leaders gracefully on termination. We have verified that this is enough to migrate all partition leaders on our Kafka clusters (about 60 seconds for 600 partition leaders).

Manual termination of an instance

The Auto Scaling lifecycle hook that pauses the termination of an instance (Fig. 3, step 1) as well as the corresponding resuming by the NTH (Fig. 3, step 7) are invoked only for ASG scaling events.

In the case of a manual termination of an EC2 instance, the termination is captured as an EC2 event that also reaches the NTH. Upon receiving that event, the NTH cordons and drains the impacted worker node. However, the instance is terminated immediately, most likely before the leadership of all of its Kafka partitions has had time to migrate to other brokers.

To work around this and let a manual termination of an EC2 instance also benefit from the ASG lifecycle hook, the instance must be terminated using the terminate-instance-in-auto-scaling-group AWS CLI command.
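
For reference, here is a minimal boto3 sketch of that call; the instance ID is a placeholder:

import boto3

autoscaling = boto3.client("autoscaling")

# Terminate the instance through its ASG so that the Auto Scaling lifecycle
# hook pauses the termination, giving the NTH time to cordon and drain the
# worker node before the instance actually goes away.
autoscaling.terminate_instance_in_auto_scaling_group(
    InstanceId="i-0123456789abcdef0",      # placeholder instance ID
    ShouldDecrementDesiredCapacity=False,  # let the ASG replace the instance
)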

AWS maintenance events

For AWS maintenance events such as instances scheduled for upcoming retirement, the NTH acts immediately when the event is first received (typically adequately in advance). It cordons and drains the soon-to-be-retired worker node, which in turn triggers the SIGTERM signal and the graceful termination of Kafka as described above. At this stage, the impacted instance is not terminated, so the Kafka partition leaders have plenty of time to complete their migration to other brokers.

However, the evicted Kafka pod has nowhere to go. There is a need for spinning up a new worker node for it to be able to eventually restart somewhere.

To make this happen seamlessly, we doubled the maximum size of each of our ASGs and installed the Kubernetes Cluster Autoscaler. With that, when such a maintenance event is received:

  • The worker node scheduled for retirement is cordoned and drained by the NTH. The state of the impacted Kafka pod becomes Pending.
  • The Kubernetes Cluster Autoscaler comes into play and triggers the corresponding ASG to spin up a new EC2 instance that joins the Kubernetes cluster as a new worker node.
  • The impacted Kafka pod restarts on the new worker node.
  • The Kubernetes Cluster Autoscaler detects that the previous worker node is now under-utilised and terminates it.

In this scenario, the impacted Kafka pod only remains in Pending state for about four minutes in total.

In case of multiple simultaneous AWS maintenance events, the Kubernetes scheduler would honour our PodDisruptionBudget and not evict more than one Kafka pod at a time.

Dynamic NLB configuration

To automatically map the NLB’s target groups with a newly spun up EC2 instance, we leveraged the AWS Load Balancer Controller (LBC).

Let us see how it works.

Fig. 4 Architecture of the LBC managing the NLB’s target groups via TargetGroupBinding custom resources.

Fig. 4 shows how the LBC automates the reconfiguration of the NLB’s target groups:

  1. It first retrieves the desired state described in Kubernetes custom resources (CR) of type TargetGroupBinding. There is one such resource per target group to maintain. Each TargetGroupBinding CR associates its respective target group with a Kubernetes service.
  2. The LBC then watches over the changes of the Kubernetes services that are referenced in the TargetGroupBinding CRs’ definition, specifically the private IP addresses exposed by their respective Endpoints resources.
  3. When a change is detected, it dynamically updates the corresponding NLB’s target groups with those IP addresses as well as the TCP port of the target containers (containerPort).

This automated design sets up the NLB’s target groups with IP addresses (targetType: ip) instead of EC2 instance IDs (targetType: instance). Although the LBC can handle both target types, the IP address approach is actually more straightforward in our case, since each pod has a routable private IP address in the AWS subnet, thanks to the AWS Container Networking Interface (CNI) plug-in.

This dynamic NLB configuration design comes with a challenge. Whenever we need to update the Strimzi CR, the rolling update of each Kafka pod happens too fast for the NLB, which inherently takes some time to mark each target as healthy before enabling it. The Kafka brokers that have just been rolled out start advertising their broker-specific endpoints to the Kafka clients via the bootstrap service, but those
endpoints are actually not immediately available because the NLB is still checking their health. To mitigate this, we have reduced the HealthCheckIntervalSeconds and HealthyThresholdCount parameters of each target group to their minimum values of 5 and 2 respectively. This reduces the maximum delay for the NLB to detect that a target has become healthy to 10 seconds. In addition, we have configured the LBC with a Pod Readiness Gate. This feature makes the Strimzi rolling deployment wait for the health check of the NLB to pass, before marking the current pod as Ready and proceeding with the next pod.
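
As an illustration, such an adjustment can also be applied directly to a target group with boto3; the target group ARN below is a placeholder:

import boto3

elbv2 = boto3.client("elbv2")

# Lower the health check interval and healthy threshold to their minimum
# values, so a newly rolled Kafka broker is marked healthy after at most
# 2 checks x 5 seconds = 10 seconds.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:ap-southeast-1:111122223333:targetgroup/kafka-broker-3/0123456789abcdef",
    HealthCheckIntervalSeconds=5,
    HealthyThresholdCount=2,
)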

Fig. 5 Steps for a Strimzi rolling deployment with a Pod Readiness Gate. Only one Kafka broker and one NLB listener and target group are shown for simplicity.

Fig. 5 shows how the Pod Readiness Gate works during a Strimzi rolling deployment:

  1. The old Kafka pod is terminated.
  2. The new Kafka pod starts up and joins the Kafka cluster. Its individual endpoint for direct access via the NLB is immediately advertised by the Kafka cluster. However, at this stage, it is not reachable, as the target group of the NLB still points to the IP address of the old Kafka pod.
  3. The LBC updates the target group of the NLB with the IP address of the new Kafka pod, but the NLB health check has not yet passed, so the traffic is not forwarded to the new Kafka pod just yet.
  4. The LBC then waits for the NLB health check to pass, which takes 10 seconds. Once the NLB health check has passed, the NLB resumes forwarding the traffic to the Kafka pod.
  5. Finally, the LBC updates the pod readiness gate of the new Kafka pod. This informs Strimzi that it can proceed with the next pod of the rolling deployment.

Data persistence with EBS

To address the challenge of the residual PV and PVC of the old worker node preventing Kubernetes from mounting the local storage of the new worker node after a node rotation, we adopted Elastic Block Store (EBS) volumes instead of NVMe instance store volumes. Contrary to the latter, EBS volumes can conveniently be attached and detached. The trade-off is that their performance is significantly lower.

However, relying on EBS comes with additional benefits:

  • The cost per GB is lower, compared to NVMe instance store volumes.
  • Using EBS decouples the size of an instance in terms of CPU and memory from its storage capacity, leading to further cost savings by independently right-sizing the instance type and its storage. Such a separation of concerns also opens the door to new use cases requiring disproportionate amounts of storage.
  • After a worker node rotation, the new node gets back in sync faster, as it only needs to catch up on the data that was produced during the downtime. This leads to shorter maintenance operations and higher iteration speed. Incidentally, the associated inter-AZ traffic cost is also lower, since there is less data to transfer among brokers during this time.
  • Increasing the storage capacity is an online operation.
  • Data backup is supported by taking snapshots of EBS volumes.

We have verified with our historical monitoring data that the performance of EBS General Purpose 3 (gp3) volumes is significantly above our maximum historical values for both throughput and I/O per second (IOPS), and we have successfully benchmarked a test EBS-based Kafka cluster. We have also set up new monitors to be alerted in case we need to
provision either additional throughput or IOPS, beyond the baseline of EBS gp3 volumes.

With that, we updated our instance types from storage optimised instances to either general purpose or memory optimised instances. We added the Amazon EBS Container Storage Interface (CSI) driver to the Kubernetes cluster and created a new Kubernetes storage class to let the cluster dynamically provision EBS gp3 volumes.
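
As a rough illustration only, an equivalent gp3 storage class could be created with the Kubernetes Python client along these lines; the class name is a placeholder, and in practice such a storage class is usually declared as a YAML manifest:

from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

# A gp3 storage class backed by the EBS CSI driver. WaitForFirstConsumer delays
# volume provisioning until the pod is scheduled, so the EBS volume is created
# in the same AZ as the worker node that will run the Kafka broker.
gp3_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="kafka-ebs-gp3"),  # placeholder name
    provisioner="ebs.csi.aws.com",
    parameters={"type": "gp3"},
    volume_binding_mode="WaitForFirstConsumer",
    allow_volume_expansion=True,
)

client.StorageV1Api().create_storage_class(body=gp3_class)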

We configured Strimzi to use that storage class to create any new PVCs. This makes Strimzi able to automatically create the EBS volumes it needs, typically when the cluster is first set up, but also to attach/detach the volumes to/from the EC2 instances whenever a Kafka pod is relocated to a different worker node.

Note that the EBS volumes are not part of any ASG Launch Template, nor do they scale automatically with the ASGs.

Fig. 6 Steps for the Strimzi Operator to create an EBS volume and attach it to a new Kafka pod.

Fig. 6 illustrates how this works when Strimzi sets up a new Kafka broker, for example the first broker of the cluster in the initial setup:

  1. The Strimzi Cluster Operator first creates a new PVC, specifying a volume size and EBS gp3 as its storage class. The storage class is configured with the EBS CSI Driver as the volume provisioner, so that volumes are dynamically provisioned [1]. However, because it is also set up with volumeBindingMode: WaitForFirstConsumer, the volume is not yet provisioned until a pod actually claims the PVC.
  2. The Strimzi Cluster Operator then creates the Kafka pod, with a reference to the newly created PVC. The pod is scheduled to start, which in turn claims the PVC.
  3. This triggers the EBS CSI Controller. As the volume provisioner, it dynamically creates a new EBS volume in the AWS VPC, in the AZ of the worker node where the pod has been scheduled to start.
  4. It then attaches the newly created EBS volume to the corresponding EC2 instance.
  5. After that, it creates a Kubernetes PV with nodeAffinity and claimRef specifications, making sure that the PV is reserved for the Kafka broker 1 pod.
  6. Lastly, it updates the PVC with the reference of the newly created PV. The PVC is now in Bound state and the Kafka pod can start.

One important point to take note of is that EBS volumes can only be attached to EC2 instances residing in their own AZ. Therefore, when rotating a worker node, the EBS volume can only be re-attached to the new instance if both old and new instances reside in the same AZ. A simple way to guarantee this is to set up one ASG per AZ, instead of a single ASG spanning across 3 AZs.

Also, when such a rotation occurs, the new broker only needs to synchronise the recent data produced during the brief downtime, which is typically an order of magnitude faster than replicating the entire volume (depending on the overall retention period of the hosted Kafka topics).

Table 1 Comparison of the resynchronization of the Kafka data after a broker rotation between the initial design and the new design with EBS volumes.

  • Data to synchronise – Initial design (NVMe instance store volumes): all of the data. New design (EBS volumes): only the recent data produced during the brief downtime.
  • Primarily a function of – Initial design: the retention period. New design: the downtime.
  • Typical duration – Initial design: hours. New design: minutes.

Outcome

With all that, let us revisit the initial scenario, where a malfunctioning worker node is being replaced by a fresh new node.

Fig. 7 Representation of a worker node termination after implementing the solution. Node C is terminated and replaced by node D. This time, the Kafka broker 3 pod is able to start and serve traffic.

Fig. 7 shows the worker node C being terminated and replaced (by the ASG) by a new worker node D, similar to what we have described in the initial problem statement. The worker node D automatically joins the Kubernetes cluster on start-up.

However, this time, a seamless failover takes place:

  1. The Kafka clients that were in the middle of producing or consuming to/from the partition leaders of Kafka broker 3 are gracefully redirected to Kafka brokers 1 and 2, where Kafka has migrated the leadership of its leader partitions.
  2. The target groups of the NLB for both the bootstrap connection and Kafka broker 3 are automatically updated by the LBC. The connectivity between the NLB and Kafka broker 3 is immediately restored.
  3. Triggered by the creation of the Kafka broker 3 pod, the Amazon EBS CSI driver running on the worker node D re-attaches the EBS volume 3 that was previously attached to the worker node C, to the worker node D instead. This enables Kubernetes to automatically re-bind the corresponding PV and PVC to Kafka broker 3 pod. With its storage dependency resolved, Kafka broker 3 is able to start successfully and re-join the Kafka cluster. From there, it only needs to catch up with the new data that was produced
    during its short downtime, by replicating it from Kafka brokers 1 and 2.

With this fault-tolerant design, when an EC2 instance is being retired by AWS, no particular action is required from our end.

Similarly, our EKS version upgrades, as well as any operations that require rotating all worker nodes of the cluster in general, are:

  • Simpler and less error-prone: We only need to rotate each instance in sequence, with no need for manually reconfiguring the target groups of the NLB and deleting the zombie PVCs anymore.
  • Faster: The time between each instance rotation is limited to the short amount of time it takes for the restarted Kafka broker to catch up with the new data.
  • More cost-efficient: There is less data to transfer across AZs (which is charged by AWS).

It is worth noting that we have chosen to omit Zookeeper and Cruise Control in this article, for the sake of clarity and simplicity. In reality, all pods in the Kubernetes cluster – including Zookeeper and Cruise Control – now benefit from the same graceful stop, triggered by the AWS termination events and the NTH. Similarly, the EBS CSI driver improves the fault tolerance of any pods that use EBS volumes for persistent storage, which includes the Zookeeper pods.

Challenges faced

One challenge that we are facing with this design lies in the EBS volumes’ management.

On the one hand, the size of an EBS volume cannot be increased again before the end of a cooldown period (a minimum of 6 hours, which can exceed 24 hours in some cases [2]). Therefore, when we need to urgently extend some EBS volumes because the size of a Kafka topic is suddenly growing, we need to be relatively generous when sizing the new required capacity and add a comfortable safety margin, to make sure that we do not run out of storage in the short run.
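
When we do perform such an extension, the resize itself is a single online API call. As a minimal sketch with boto3 (the volume ID and the new size are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Grow the EBS volume online. Remember the cooldown: the same volume cannot be
# modified again for at least 6 hours, so size generously with a safety margin.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    Size=1000,                         # new size in GiB, placeholder value
)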

On the other hand, shrinking a Kubernetes PV is not a supported operation. This can affect the cost efficiency of our design if we overprovision the storage capacity by too much, or in case the workload of a particular cluster organically diminishes.

One way to mitigate this challenge is to tactically scale the cluster horizontally (i.e. adding new brokers) when there is a need for more storage and the existing EBS volumes are stuck in a cooldown period, or when the new storage need is only temporary.

What’s next?

In the future, we can improve the NTH’s capability by utilising webhooks. Upon receiving events from SQS, the NTH can also forward the events to the specified webhook URLs.

This can potentially benefit us in a few ways, e.g.:

  • Proactively spinning up a new instance without waiting for the old one to be terminated, whenever a termination event is received. This would shorten the rotation time even further.
  • Sending Slack notifications to Coban engineers to keep them informed of any actions taken by the NTH.

We would need to develop and maintain an application that receives webhook events from the NTH and performs the necessary actions.

In addition, we are also rolling out Karpenter to replace the Kubernetes Cluster Autoscaler, as it is able to spin up new instances slightly faster, helping reduce the four-minute delay a Kafka pod remains in Pending state during a node rotation. Incidentally, Karpenter also removes the need for setting up one ASG per AZ, as it is able to deterministically provision instances in a specific AZ, for example where a particular EBS volume resides.

Lastly, to ensure that the performance of our EBS gp3 volumes is both sufficient and cost-efficient, we want to explore autoscaling their throughput and IOPS beyond the baseline, based on the usage metrics collected by our monitoring stack.

References

[1] Dynamic Volume Provisioning | Kubernetes

[2] Troubleshoot EBS volume stuck in Optimizing state during modification | AWS re:Post

We would like to thank our team members and Grab Kubernetes gurus that helped review and improve this blog before publication: Will Ho, Gable Heng, Dewin Goh, Vinnson Lee, Siddharth Pandey, Shi Kai Ng, Quang Minh Tran, Yong Liang Oh, Leon Tay, Tuan Anh Vu.

Join us

Grab is the leading superapp platform in Southeast Asia, providing everyday services that matter to consumers. More than just a ride-hailing and food delivery app, Grab offers a wide range of on-demand services in the region, including mobility, food, package and grocery delivery services, mobile payments, and financial services across 428 cities in eight countries.

Powered by technology and driven by heart, our mission is to drive Southeast Asia forward by creating economic empowerment for everyone. If this mission speaks to you, join our team today!

Expanded Coverage and AWS Compliance Pack Updates in InsightCloudSec Coming Out of AWS Re:Invent 2023

Post Syndicated from Lara Sunday original https://blog.rapid7.com/2023/12/20/expanded-coverage-and-aws-compliance-pack-updates-in-insightcloudsec-coming-out-of-aws-re-invent-2023/

Expanded Coverage and AWS Compliance Pack Updates in InsightCloudSec Coming Out of AWS Re:Invent 2023

It seems like it was just yesterday that we were in Las Vegas for AWS Re:Invent, but it’s already been almost two weeks since the conference wrapped up. As is always the case, AWS unveiled a host of new services throughout the week, including advancements around serverless, artificial intelligence (AI) and Machine Learning (ML), security and more.

There were a ton of really exciting announcements, but a few stood out to me. Before we dive into the new and updated services we now support in InsightCloudSec, let’s take a second to highlight a few of them and why they’re of note.

Highlights from AWS’ New Service Announcements during Re:Invent

Amazon Bedrock general availability was announced back in October; re:Invent brought with it announcements of new capabilities, including customized models, GenAI applications that execute multi-step tasks, and Guardrails, announced in preview. New Security Hub functionalities were introduced, including centralized governance, custom controls, and a refresh of the dashboard.

Serverless innovations include updates to Amazon Aurora Limitless Database, Amazon ElastiCache Serverless, and AI-driven Amazon Redshift Serverless, adding greater scaling and efficiency to their database and analytics offerings. Serverless architectures bring scalability and flexibility; however, security and risk considerations shift away from traditional network traffic inspection and access control lists, towards IAM hygiene, system identity behavioral analysis, and code integrity and validation.

Amazon DataZone general availability, like Bedrock, was originally announced in October and got some new innovations showcased during Re:Invent, including business-driven domains and data catalog, projects and environments, and the ability for data workers to publish and data consumers to subscribe to workflows. Available in open preview for DataZone are automated, AI-driven recommendations for metadata-driven business descriptions, as well as for specific columns and analytical applications based on business units.

One of the most exciting announcements from Re:Invent this year was Amazon Q, Amazon’s new GenAI-powered Virtual Assistant. Q was also integrated into Amazon’s Business Intelligence (BI) service, QuickSight, which has been supported in InsightCloudSec for some time now.

Having released our support for Amazon OpenSearch last year, this year’s re:Invent brought some exciting updates that are worth mentioning here. Now generally available is Vector Engine for OpenSearch Serverless, which enables users to store and quickly search vector embeddings for GenAI applications. AWS also announced the OR1 Instance family, which is compute optimized specifically for OpenSearch and also a new zero-ETL integration with S3.

Expanded Resource Coverage in InsightCloudSec

It’s very important to us here at Rapid7 that we provide our customers with the peace of mind to know when their teams leave these events and begin implementing new innovations from AWS that they’re doing so securely. To that end, the days and weeks following Re:Invent is always a bit of a sprint, and this year was no exception.

The Coverage and Analysis team loves a challenge though, and in my totally unbiased opinion, we’ve delivered something special. Our latest release featured new support for a variety of the new services announced during Re:Invent, as well as a number of existing services for which we’ve expanded support in relation to updates announced by AWS. We’ve added support for 6 new services that were either announced or updated during the show. We’ve also added 25 new Insights, all of which have been applied to our existing AWS Foundational Security Best Practices pack and AWS Center for Internet Security (CIS) 2.0 compliance pack, along with new AWS-relevant updates to NIST SP 800-53 (Rev 5).

The newly supported services are:

  • Bedrock, a fully managed service that allows users to build generative AI applications in the cloud by providing a set of foundational models both from AWS and 3rd party vendors.
  • Clean Rooms, which enables customers to collaborate and analyze data securely in ‘clean rooms’ in minutes with any other company on joint initiatives without sharing real raw data.
  • AWS Control Tower (January 2024 Release), a management service that can be used to create and orchestrate a multi-account AWS environment in accordance with AWS best practices including the Well-Architected Framework.

Along with support for newly-added services, we’ve also expanded our coverage around the host of existing services as well. We’ve added or expanded support for the following security and serverless solutions:

  • Network Firewall, which provides fine-grained control over network traffic.
  • Security Hub, AWS’ native service that provides CSPM functionality, aggregating security and compliance checks.
  • Glue, a serverless data integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources, empowering your analytics and ML projects.

Helping Teams Securely Build AI/ML Applications in the Cloud

One of the most exciting elements to come out of the past few weeks with the addition of AWS Bedrock, is our extended coverage for AI and ML solutions that we are now able to provide across cloud providers for our customers. Supporting AWS Bedrock, along with GCP Vertex and Azure OpenAI Service has enabled us to build a very exciting new feature as part of our Compliance Packs.

Machine learning, artificial intelligence, and analytics were driving themes of this year’s conference, so it makes me very happy to announce that we now offer a dedicated Rapid7 AI/ML Security Best Practices compliance pack. If interested, I highly recommend you keep an eye out in the coming days for my colleague Kathryn Lynas-Blunt’s blog discussing how Rapid7 enables teams to securely build AI applications in the cloud.

As a cloud enthusiast, AWS re:Invent never fails to deliver on innovation, excitement and shared learning experiences. As we continue our partnership with AWS, I’m very excited for all that 2024 holds in store. Until next year!

Expanded Coverage and New Attack Path Visualizations Help Security Teams Prioritize Cloud Risk and Understand Blast Radius

Post Syndicated from Pauline Logan original https://blog.rapid7.com/2023/12/19/expanded-coverage-and-new-attack-path-visualizations-help-security-teams-prioritize-cloud-risk-and-understand-blast-radius/

Expanded Coverage and New Attack Path Visualizations Help Security Teams Prioritize Cloud Risk and Understand Blast Radius

Cloud environments differ in a number of ways from more traditional on-prem environments. From the immense scale and compounding complexity to the rate of change, the cloud creates a host of challenges for security teams to navigate and grapple with. By definition, anything running in the cloud has the potential to be made publicly available, either directly or indirectly. The interconnected nature of these environments is such that when one account, resource, or service is compromised, it can be fairly easy for a bad actor to move laterally across your environment and/or grant themselves the permissions to wreak havoc. These avenues for lateral movement or privilege escalation are often referred to as attack paths.

Having a solution in place that can clearly and dynamically detect and depict these attack paths is critical to helping teams not only understand where risks exist across their environment but arguably more importantly how they are most likely to be exploited and what that means for an organization – particularly with respect to protecting high-value assets.

Detect and Remediate Attack Paths With InsightCloudSec

Attack Path Analysis in InsightCloudSec enables Rapid7 customers to see their cloud environments from the perspective of an attacker. It visualizes the various ways an attacker could gain access, move between resources, and compromise the cloud environment. Attack Paths are high fidelity signals in our risk prioritization model that focuses on identifying toxic combinations that lead to real business impact.

Since Rapid7 initially launched Attack Path Analysis, we’ve continued to roll out incremental updates to the feature, primarily in the form of expanded attack path coverage across each of the major cloud service providers (CSPs). In our most recent InsightCloudSec release (12.12.2023), we’ve continued this momentum, announcing additional attack paths as well as some exciting updates around how we visualize risk across paths and the potential blast radius should a compromised resource within an existing attack path be exploited. In this post, we’ll dive into an example of one of our recently added attack paths for Microsoft Azure along with a bit more detail about the new risk visualizations. So with that, let’s jump right in.

Expanding Coverage With New Attack Paths

First, on the coverage side of things we’ve added seven new paths in recent releases across AWS and Azure. Our AWS coverage was extended to support ECS across all of our AWS Attack Paths, and we also introduced 3 new Azure Attack paths. In the interest of brevity, we won’t cover each of them, but we do have an ever-developing list of supported attack paths you can access here on the docs page. As an example, however, let’s dive into one of the new paths we released for Azure, which identifies the presence of attack paths targeting publicly exposed instances that also have attached privileged roles.

This type of attack path is concerning for a couple of reasons: First and foremost, an attacker could use the publicly exposed instance as an inroad to your cloud environment due to the fact that it’s publicly accessible, gaining access to sensitive data on the resource itself or accessing data the resource in question has indirect access to. Secondly, since the attached role is capable of escalating privileges, an attacker could then leverage the resource to assign themselves admin permissions which could in turn be used to open up new attack vectors.

Because this could have wide-reaching ramifications should it be exploited, we’ve assigned this a critical severity. That means we’ll want to work to resolve this as fast as possible any time this path shows up across our cloud environments, maybe even automating the process of closing down public access or adjusting the resource permissions to limit the potential for lateral movement or privilege escalation. Speaking of paths with widespread impact should they be exploited, that brings me to some other exciting updates we’ve rolled out to Attack Path Analysis.

Clearly Visualizing Risk Severity and Potential Blast Radius

As I mentioned earlier, along with expanded coverage, we’ve also updated Attack Path Analysis to make it clearer for users where your riskiest assets lie across a given attack path and to clearly show the potential blast radius of an exploitation.

To make it easier to understand the overall riskiness of an attack path and where its choke points are, we’ve added a new security view that visualizes the risk of each resource along a given path. This new view makes it very easy for security teams to immediately understand which specific resources present the highest risk and where they should be focusing their remediation efforts to block potential attackers in their tracks.

In addition to this new security-focused view, we’ve also extended Attack Path Analysis to show a potential blast radius by displaying a graph-based topology map that helps clearly outline the various ways resources across your environment – and specifically within an attack path – interconnect with one another.

This topology map not only makes it easier for security teams to quickly hone in on what needs their attention first during an investigation, but also where a bad actor could move next. Additionally, this view helps security teams and leaders in communicating risk across the organization, particularly when engaging with non-technical stakeholders that find it difficult to understand why exactly a compromised resource presents a potentially larger risk to the business.

We will continue to expand on our existing Attack Path Analysis capabilities in the future, so be sure to keep an eye out for additional paths being added in the coming months as well as a continued effort to enable security teams to more quickly analyze cloud risk with the context needed to effectively detect, communicate, prioritize, and respond.

Monitoring AWS Cost Explorer with Zabbix

Post Syndicated from evgenii.gordymov original https://blog.zabbix.com/monitoring-aws-cost-explorer-with-zabbix/26159/

Cloud-based service platforms are becoming increasingly popular, and one of the most widely adopted is Amazon Web Services (AWS). Like many cloud services, AWS charges a user fee, which has led many users to look for a breakdown of which specific services they are being charged for. Fortunately, Zabbix has an AWS Cost Explorer over HTTP template that’s ready to run right out of the box and provides a list of daily and monthly maintenance costs.

Why monitor AWS costs?

While AWS cost data is stored for 12 months, Zabbix allows data to be stored for up to 25 years (see Keep lost resources period). The Keep lost resources period is a vital parameter for storing data longer than 12 months, since cost data that is removed from AWS will result in the discovered items becoming lost. Therefore, if we want to keep our cost data for longer than 12 months, the Keep lost resources period parameter needs to be adjusted accordingly.

In addition, Zabbix can show fees charged for unavailable services, such as test deployments for a cluster in the us-east-1 region.

Preparing to monitor in a few easy steps

I recommend visiting zabbix.com/integrations/aws for any sources referred to in this tutorial. You can also find a link to all Zabbix templates there. For the most part, we will follow the steps outlined in the readme.

The AWS Cost Explorer by HTTP template can use either key-based or role-based authorization. Set the {$AWS.AUTH_TYPE} macro accordingly; possible values are role_base and access_key (used by default).

If you are using access key-based authorization, be sure to set the following macros: {$AWS.ACCESS.KEY.ID} and {$AWS.SECRET.ACCESS.KEY}.

Create or use an existing access key, which you can get from Identity and Access Management (IAM).

Accessing the IAM Console:
  • Log in to your AWS Management Console.
  • Navigate to the IAM service.
  • Next, go to the Users tab and select the required user.

Creating an access key for monitoring:
  • After that, go to the Security credentials tab.
  • Select Create access key.

Add the following required permissions to your Zabbix IAM policy in order to collect metrics.

Defining Permissions through IAM Policies:
  • Access the “Policies” section within IAM.
  • Click on “Create Policy”.
  • Select the JSON tab to define policy permissions.
  • Provide a meaningful name and description for the policy.
  • Structure the policy document based on the permissions needed for the AWS Cost Explorer by HTTP template.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ce:GetDimensionValues",
                "ce:GetCostAndUsage"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

Attaching Policies to the User:
  • Go back to the “Users” section within IAM.
  • Click on “Add Permissions”.
  • Search for and select the policy created in the previous step.
  • Review the attached policies to ensure they align with the intended permissions for the user.

Creating a host in Zabbix

Now, let’s create a host that will represent the metrics available via the Cost Explorer API:

  • Create a Host Group in which to put hosts related to AWS. For this example, let’s create one that we’ll call AWS Cloud.
  • Head to the Hosts page under Configuration and click Create host. Give this host the name AWS Cost. We’ll also assign this host to the AWS Cloud group we created and attach the AWS Cost Explorer by HTTP template.
  • Click the Macros tab and select Inherited and host macros. In this case, we need to change the first two macros. The first, {$AWS.ACCESS.KEY.ID}, should be set to the received access key ID. For the second, {$AWS.SECRET.ACCESS.KEY}, the secret access key should be set to the previously retrieved value from the Security credentials tab.
  • Click Add. The AWS Cost Explorer template has three low-level discovery rules that use master items. The low-level discovery rules will start discovering resources only after the master item has collected the required data.

    The best practice is to always test such items for data. Don’t forget to fill in the required macros!

    In AWS daily costs by services and AWS monthly costs by services discovery you can filter by service, which can be specified in macros.
  • Let’s execute the master items to collect the required data on-demand. Choose both items to get data and click Execute now.

    In a few minutes, you should receive cost metrics by services for 12 months plus the current month, as well as by day. If you want the information to be stored longer, remember to change the Keep lost resources period in the LLD rule, as it’s set to 30 days by default.

Good luck!

The post Monitoring AWS Cost Explorer with Zabbix appeared first on Zabbix Blog.

Build Better Engagement Using the AWS Community Engagement Flywheel: Part 2 of 3

Post Syndicated from Tristan Nguyen original https://aws.amazon.com/blogs/messaging-and-targeting/build-better-engagement-using-the-aws-community-engagement-flywheel-part-2-of-3/

Introduction

Part 2 of 3: From Cohorts to Campaigns

Businesses are constantly looking for better ways to engage with customer communities, but it’s hard to do when profile data is limited to user-completed form input or messaging campaign interaction metrics. Neither of these data sources tell a business much about their customer’s interests or preferences when they’re engaging with that community.

To bridge this gap for their community of customers, AWS Game Tech created the Cohort Modeler: a deployable solution for developers to map out and classify player relationships and identify like behavior within a player base. Additionally, the Cohort Modeler allows customers to aggregate and categorize player metrics by leveraging behavioral science and customer data. In our first blog post, we talked about how to extend Cohort Modeler’s functionality.

In this post, you’ll learn how to:

  1. Use the extension we built to create the first part of the Community Engagement Flywheel.
  2. Process the user extract from the Cohort Modeler and import the data into Amazon Pinpoint as a messaging-ready Segment.
  3. Send email to the users in the Cohort via Pinpoint’s powerful and flexible Campaign functionality.

Use Case Examples for The Cohort Modeler

For this example, we’re going to retrieve a cohort of individuals from our Cohort Modeler who we’ve identified as at risk:

  • Maybe they’ve triggered internal alarms where they’ve shared potential PII with others over cleartext.
  • Maybe they’ve joined chat channels known to be frequented by some of the game’s less upstanding citizens.

Either way, we want to make sure they understand the risks of what they’re doing and who they’re dealing with.

Pinpoint provides various robust methods to import user contact and personalization data in specific formats, and once Pinpoint has ingested that data, you can use Campaigns or Journeys to send customized and personalized messaging to your cohort members – either via automation, or manually via the Pinpoint Console.

Architecture overview

In this architecture, you’ll create a simple Amazon DynamoDB table that mimics a game studio’s database of record for its customers. You’ll then create a trigger on an Amazon Simple Storage Service (Amazon S3) bucket that will ingest the Cohort Modeler extract (created in the prior blog post) and convert it into a CSV file that Pinpoint can ingest. Lastly, once the CSV is generated, an AWS Lambda function will prompt Pinpoint to automatically ingest it as a static segment.

Once the automation is complete, you’ll use Pinpoint’s console to quickly and easily create a Campaign, including an HTML mail template, to the imported segment of players you identified as at risk via the Cohort Modeler.

Prerequisites

At this point, you should have completed the steps in the prior blog post, Extending the Cohort Modeler. This is all you’ll need to proceed.

Walkthrough

Messaging your Cohort

Now that we’ve extended the Cohort Modeler and built a way to extract cohort data into an S3 bucket, we’ll transform that data into a Segment in Pinpoint, and use the Pinpoint Console to send a message to the members of the Cohort via a Pinpoint Campaign. In this walkthrough, you’ll:

  • Create a Pinpoint Project to import your Cohort Segments.
  • Create a Dynamo table to emulate your database of record for your players.
  • Create an S3 bucket to hold the cohort contact data CSV file.
  • Create a Lambda trigger to respond to Cohort Modeler export events and kick off Pinpoint import jobs.
  • Create and send a Pinpoint Campaign using the imported Segment.

Create the Pinpoint Project

You’ll need a Pinpoint Project (sometimes referred to as an “App”) to send messaging to your cohort members, so navigate to the Pinpoint console and click Create a Project.

  • Sign in to the AWS Management Console and open the Pinpoint Console.
  • If this is your first time using Amazon Pinpoint, you will see a page that introduces you to the features of the service. In the Get started section, you’ll need to enter the name you want to call your project. We used ‘CohortModelerPinpoint‘ but you can use whatever you’d like.
  • On the following screen, the Configure features page, you’ll want to choose Configure in the Email section.
    • Pinpoint will ask you for an email address you want to validate, so that when email goes out, it will use your email address as the FROM header in your email. Enter the email address you want to use as your sending address, and Choose Verify email address.
    • Check the inbox of the address that you entered and look for an email from [email protected]. Open the email and click the link in the email to complete the verification process for the email address.
    • Note: Once you have verified your email identity, you may receive an alert prompting you to update your email address’ policy. If so, highlight your email under All identities, and choose Update policy. To complete this update, Enter confirm where requested, and choose Update.

  • Later on, when you’re asked for your Pinpoint Project ID, it can be accessed by choosing All projects from the Pinpoint navigation pane. From there, next to your project name, you will see the associated Project ID. (You can also retrieve it programmatically, as sketched below.)
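
If you’d rather look the Project ID up with the AWS SDK, a minimal boto3 sketch using the project name chosen above could look like this:

import boto3

pinpoint = boto3.client("pinpoint")

# List all Pinpoint projects (applications) and print the ID of ours.
apps = pinpoint.get_apps()
for app in apps["ApplicationsResponse"]["Item"]:
    if app["Name"] == "CohortModelerPinpoint":
        print("Project ID:", app["Id"])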

Create the Dynamo Table

For this step, you’re emulating a game studio’s database of record for its players, and therefore the Lambda function that you’re creating, (to merge Cohort Modeler data with the database of record) is also an emulation.

In a real-world situation, you would use the same ingestion method as the S3TriggerCohortIngest.py example that will be created further below. However, instead of using placeholder data, you would use the ‘playerId’ information extracted from the Cohort Modeler. This would allow you to formulate a specific query against your main database, whether it requires an SQL statement, or some other type of database query.

Creating the Table

Navigate to the DynamoDB Console. You’re going to create a table with ‘playerId’ as the Primary key, and four additional attributes: email, favorite role, first name, and last name.

  • In the navigation pane, choose Tables. On the next page, in the Tables section, choose Create table.
  • In the Table details section, we entered userdata for our Table name. (In order to maintain simple compatibility with the scripts that follow, it is recommended that you do the same.)
  • For Partition key, enter playerId and leave the data type as String.
  • Intentionally leave the Sort key blank and the data type as String.
  • Below, in the Table settings section, leave everything at their Default settings value.
  • Scroll to the end of the page and choose Create table. (A boto3 equivalent of this table definition is sketched after this list.)
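
If you’d prefer to create the table programmatically rather than through the console, a minimal boto3 sketch of the same table definition could look like this (on-demand billing is used to keep the sketch simple; the console’s default settings work just as well):

import boto3

dynamodb = boto3.client("dynamodb")

# Create the userdata table with playerId as the partition key and no sort key.
dynamodb.create_table(
    TableName="userdata",
    AttributeDefinitions=[{"AttributeName": "playerId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "playerId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
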
Adding Synthetic Data

You’ll need some synthetic data in the database, so that your Cohort Modeler-Pinpoint integration can query the database, retrieve contact information, and then import that contact information into Pinpoint as a Segment.

  • From the DynamoDB Tables section, choose your newly created Table by selecting its name. (The name preferably being userdata).
  • In the DynamoDB navigation pane, choose Explore items.
  • From the Items returned section, choose Create item.
  • Once on the Create item page, ensure that the Form view is highlighted and not the JSON view. You’re going to create a new entry in the table. Cohort Modeler creates the same synthetic information each time it’s built, so all you need to do is to create three entries.
    • For the first entry, enter wayne96 as the Value for playerID.
    • Select the Add new attribute dropdown, and choose String.
    • Enter email as the Attribute name, and the Value should be your own email address since you’ll be receiving this email. This should be the same email used to configure your Pinpoint project from earlier.
    • Again, select the Add new attribute dropdown, and choose String.
    • Enter favoriteRole as the Attribute name, and enter Tank as the attribute’s Value.
    • Again, select the Add new attribute dropdown, and choose String.
    • Enter firstName as the Attribute name, and enter Wayne as the attribute’s Value.
    • Finally, select the Add new attribute dropdown, and choose String.
    • And enter the lastName as the Attribute name, and enter Johnson as the attribute’s value.

  • Repeat the process for the following two users, summarized in the table below (a boto3 sketch for scripting these entries follows the table). You’ll be using the SES Mailbox Simulator on these player IDs – one will simulate a successful delivery (but no opens or clicks), and the other will simulate a bounce notification, which represents an unknown user response code.

 

playerId   email             favoriteRole   firstName   lastName
xortiz     [email protected]   Healer         Tristan     Nguyen
msmith     [email protected]   DPS            Brett       Ezell
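
If you’d rather script these two entries than add them through the console, here is a minimal boto3 sketch. The email addresses below are the standard SES mailbox simulator addresses for a successful delivery and a bounce; they are our assumption here, so substitute whichever addresses you intend to use:

import boto3

table = boto3.resource("dynamodb").Table("userdata")

# Synthetic players for the walkthrough. The simulator addresses exercise
# Pinpoint's delivery and bounce handling (assumed SES mailbox simulator values).
players = [
    {"playerId": "xortiz", "email": "success@simulator.amazonses.com",
     "favoriteRole": "Healer", "firstName": "Tristan", "lastName": "Nguyen"},
    {"playerId": "msmith", "email": "bounce@simulator.amazonses.com",
     "favoriteRole": "DPS", "firstName": "Brett", "lastName": "Ezell"},
]

with table.batch_writer() as batch:
    for player in players:
        batch.put_item(Item=player)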

Now that the table’s populated, you can build the integration between Cohort Modeler and your new “database of record,” allowing you to use the cohort data to send messages to your players.

Create the Pinpoint Import S3 Bucket

Pinpoint requires a CSV or JSON file stored on S3 to run an Import Segment job, so we’ll need a bucket (separate from our Cohort Modeler Export bucket) to facilitate this.

  • Navigate to the S3 Console, and inside the Buckets section, choose Create Bucket.
  • In the General configuration section, enter a bucket name, remembering that it must be unique across all of AWS.
  • You can leave all other settings at their default values, so scroll down to the bottom of the page and choose Create Bucket. Remember the name – we’ll be referring to it as your “Pinpoint import bucket” from here on out. (A boto3 equivalent is sketched below.)
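
For reference, a minimal boto3 sketch of the same step; the bucket name and region below are placeholders, and for us-east-1 the CreateBucketConfiguration argument must be omitted:

import boto3

s3 = boto3.client("s3")

# Create the Pinpoint import bucket. Bucket names must be globally unique.
s3.create_bucket(
    Bucket="example-pinpoint-import-bucket",  # placeholder name
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},  # placeholder region
)
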
Create a Pinpoint Role for the S3 Bucket

Before creating the Lambda function, we need to create a role that allows the Cohort Modeler data to be imported into Amazon Pinpoint in the form of a segment.

For more details on how to create an IAM role to allow Amazon Pinpoint to import endpoints from the S3 Bucket, refer to this documentation. Otherwise, you can follow the instructions below:

  • Navigate to the IAM Dashboard. In the navigation pane, under Access management, choose Roles, followed by Create role.
  • Once on the Select trusted entity page, highlight and select AWS service, under the Trusted entity type section.
  • In the Use case section dropdown, type or select S3. Once selected, ensure that S3 is highlighted, and not S3 Batch Operations. Choose, Next.
  • From the Add permissions page, enter AmazonS3ReadOnlyAccess within Search area. Select the associated checkbox and choose Next.
  • Once on the Name, review, and create page, For Role name, enter PinpointSegmentImport. 
  • Scroll down and choose Create role.
  • From the navigation pane, and once again under Access management, choose Roles. Select the name of the role just created.
  • In the Trust relationships tab, choose Edit trust policy.
  • Paste the following JSON trust policy. Remember to replace accountId, region and application-id with your AWS account ID, the region you’re running Amazon Pinpoint from, and the Amazon Pinpoint project ID respectively.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Effect": "Allow",
            "Principal": {
                "Service": "pinpoint.amazonaws.com"
            },
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "accountId"
                },
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:mobiletargeting:region:accountId:apps/application-id"
                }
            }
        }
    ]
}
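
If you prefer to script the role setup instead of using the console, a minimal boto3 sketch along the lines of the following would create the role with that trust policy in one step and attach the read-only S3 policy. The placeholders still need to be substituted before running it.

import json
import boto3

iam = boto3.client('iam')

# The same trust policy as above - remember to substitute accountId, region,
# and application-id before running this.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Action": "sts:AssumeRole",
        "Effect": "Allow",
        "Principal": {"Service": "pinpoint.amazonaws.com"},
        "Condition": {
            "StringEquals": {"aws:SourceAccount": "accountId"},
            "ArnLike": {"aws:SourceArn": "arn:aws:mobiletargeting:region:accountId:apps/application-id"}
        }
    }]
}

# Create the role with the trust policy, then attach AmazonS3ReadOnlyAccess.
iam.create_role(
    RoleName='PinpointSegmentImport',
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description='Allows Amazon Pinpoint to import segment files from S3'
)

iam.attach_role_policy(
    RoleName='PinpointSegmentImport',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess'
)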

Build the Lambda

You’ll need to create a Lambda function for S3 to trigger when Cohort Modeler drops its export files into the export bucket, as well as the connection to the Cohort Modeler export bucket to set up the trigger. The steps below will take you through the process.

Create the Lambda

Head to the Lambda service menu, and from Functions page, choose Create function. From there:

  • On the Create function page, select Author from scratch.
  • For Function Name, enter S3TriggerCohortIngest for consistency.
  • For Runtime, choose Python 3.8.
  • No other complex configuration options are needed, so leave the remaining options as default and click Create function.
  • In the Code tab, replace the sample code with the code below.
import json
import os
import uuid
import urllib

import boto3
from botocore.exceptions import ClientError

### S3TriggerCohortIngest

# We get activated once we're triggered by an S3 file getting Put.
# We then:
# - grab the file from S3 and ingest it.
# - negotiate with a DB of record (Dynamo in our test case) to pull the corresponding player data.
# - transform that record data into a format Pinpoint will interpret.
# - Save that CSV into a different S3 bucket, and
# - Instruct Pinpoint to ingest it as a Segment.


# save the CSV file to a random unique filename in S3
def save_s3_file(content):
    
    # generate a random uuid csv filename.
    fname = str(uuid.uuid4()) + ".csv"
    
    print("Saving data to file: " + fname)
    
    try:
        # grab the S3 bucket name
        s3_bucket_name = os.environ['S3BucketName']
        
        # Set up the S3 boto client
        s3 = boto3.resource('s3')
        
        # Lob the body into the object.
        object = s3.Object(s3_bucket_name, fname)
        object.put(Body=content)
        
        return fname
        
    # If we fail, say why and exit.
    except ClientError as error:
        print("Couldn't store file in S3: %s", json.dumps(error.response))
        return {
            'statuscode': 500,
            'body': json.dumps('Failed access to storage.')
        }
        
# Given a list of users, query the user dynamo db for their account info.
def query_dynamo(userlist):
    
    # set up the dynamo client.
    ddb_client = boto3.resource('dynamodb')
    
    # Set up the RequestIems object for our query.
    batch_keys = {
        'userdata': {
            'Keys': [{'playerId': user} for user in userlist]
        }
    }

    # query for the keys. note: currently no explicit error-checking for <= 100 items.
    try:
        db_response = ddb_client.batch_get_item(RequestItems=batch_keys)
        return db_response
        
    # If we fail, say why and exit.
    except ClientError as error:
        print("Couldn't access data in DynamoDB: %s", json.dumps(error.response))
        return {
            'statuscode': 500,
            'body': json.dumps('Failed access to db.')
        }
        
def ingest_pinpoint(filename):
    
    s3url = "s3://" + os.environ.get('S3BucketName') + "/" + filename
    
    
    try:
        pinClient = boto3.client('pinpoint')
        
        response = pinClient.create_import_job(
            ApplicationId=os.environ.get('PinpointApplicationID'),
            ImportJobRequest={
                'DefineSegment': True,
                'Format': 'CSV',
                'RegisterEndpoints': True,
                'RoleArn': os.environ.get('PinpointS3RoleARN'),
                'S3Url': s3url,
                'SegmentName': filename
            }
        )
        
        return {
            'ImportId': response['ImportJobResponse']['Id'],
            'SegmentId': response['ImportJobResponse']['Definition']['SegmentId'],
            'ExternalId': response['ImportJobResponse']['Definition']['ExternalId'],
        }
        
    # If we fail, say why and exit.
    except ClientError as error:
        print("Couldn't create Import job for Pinpoint: %s", json.dumps(error.response))
        return {
            'statuscode': 500,
            'body': json.dumps('Failed segment import to Pinpoint.')
        }
        
# Lambda entry point GO
def lambda_handler(event, context):
    
    # Get the bucket + obj name from the incoming event
    incoming_bucket = event['Records'][0]['s3']['bucket']['name']
    filename = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    
    # light up the S3 client
    s3 = boto3.resource('s3')
    
    # grab the file that triggered us
    try:
        content_object = s3.Object(incoming_bucket, filename)
        file_content = content_object.get()['Body'].read().decode('utf-8')
        
        # and turn it into JSON.
        json_content = json.loads(file_content)
        
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(filename, incoming_bucket))
        raise e

    # Munge the file we got into something we can actually use
    record_content = json.dumps(json_content)

    # load it into json
    record_json = json.loads(record_content)
    
    # Initialize an empty list for names
    namelist = []
    
    # Iterate through the records in the list
    for record in record_json:
        # Check if "playerId" key exists in the record
        if "playerId" in record:
            # Append the first element of "playerId" list to namelist
            namelist.append(record["playerId"][0])

    # use the name list and grab the corresponding users from the dynamo table
    userdatalist = query_dynamo(namelist)
    
    # grab just what we need to create our import file
    userdata_responses = userdatalist["Responses"]["userdata"]
    
    csvlist = "ChannelType,Address,User.UserId,User.UserAttributes.FirstName,User.UserAttributes.LastName\n"
    
    for user in userdata_responses:
        newString = "EMAIL," + user["email"] + "," + user["playerId"] + "," + user["firstName"] + "," + user["lastName"] + "\n"
        csvlist += newString
        
    # Dump it to S3 with a unique filename. 
    csvFile = save_s3_file(csvlist)

    # and tell Pinpoint to import it as a Segment.
    pinResponse = ingest_pinpoint(csvFile)
    
    return {
        'statusCode': 200,
        'body': json.dumps(pinResponse)
    }
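
Before wiring up the trigger, you can sanity-check the handler with a hand-built event. The sketch below is only for local experimentation: it assumes the code above is saved as lambda_function.py, that the environment variables described in the next section are set, that your AWS credentials are available locally, and that the object key (borrowed from the example response later in this post) actually exists in your Cohort Modeler export bucket.

import json
from lambda_function import lambda_handler  # assumes the handler above is saved as lambda_function.py

# Minimal S3 "ObjectCreated" event containing only the fields the handler reads.
test_event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "your-cohort-modeler-export-bucket"},  # placeholder
                "object": {"key": "export/ea_atrisk_2_2023-09-12_13-57-06.json"}
            }
        }
    ]
}

print(json.dumps(lambda_handler(test_event, None), indent=2))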

Configure the Lambda

Firstly, you’ll need to raise the function timeout, because sometimes it will take time to import large Pinpoint segments. To do so, navigate to the Configuration tab, then General configuration and change the Timeout value to the maximum of 15 minutes.

Next, select Environment variables beneath General configuration in the navigation pane. Choose Edit, followed by Add environment variable, for each Key and Value below.

  • Create a key – DynamoUserTableName – and give it the name of the DynamoDB table you built in the previous step. (If you followed our recommendations, it will be userdata.)
  • Create a key – PinpointApplicationID – and give it the Project ID (not the name), of the Pinpoint Project you created in the first step.
  • Create a key – S3BucketName – and give it the name of the Pinpoint Import S3 Bucket.
  • Finally, create a key – PinpointS3RoleARN – and paste the ARN of the Pinpoint S3 role you created during the Import Bucket creation step.
  • Once all Environment Variables are entered, choose Save.

In a production build, you could store this information in AWS Systems Manager Parameter Store to ensure portability and resilience.
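
As a sketch of what that could look like (the parameter names under /flywheel/ are hypothetical), the handler might resolve its configuration at cold start like this instead of reading environment variables:

import boto3

ssm = boto3.client('ssm')

def get_config(name):
    # Hypothetical parameter naming scheme, e.g. /flywheel/PinpointApplicationID
    return ssm.get_parameter(Name='/flywheel/' + name)['Parameter']['Value']

PINPOINT_APPLICATION_ID = get_config('PinpointApplicationID')
S3_BUCKET_NAME = get_config('S3BucketName')
DYNAMO_USER_TABLE_NAME = get_config('DynamoUserTableName')
PINPOINT_S3_ROLE_ARN = get_config('PinpointS3RoleARN')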

While still in the Configuration tab, from the navigation pane, choose the Permissions menu option.

  • Note that just beneath Execution role, AWS has created an IAM Role for the Lambda. Select the role’s name to view it in the IAM console.
  • On the Role’s page, in the Permissions tab and within the Permissions policies section, you should see one policy attached to the role: AWSLambdaBasicExecutionRole
  • You will need to give the Lambda access to your Cohort Modeler export bucket, your Pinpoint import bucket, the DynamoDB table, and Pinpoint itself, so select the Add permissions dropdown and choose Create inline policy – we won’t be needing this policy anywhere else.
  • On the next screen, click the JSON tab.
    • Paste the following IAM Policy JSON:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR-PINPOINT-BUCKET-NAME-HERE/*",
                "arn:aws:s3:::YOUR-PINPOINT-BUCKET-NAME-HERE",
                "arn:aws:s3:::YOUR-CM-BUCKET-NAME-HERE/*",
                "arn:aws:s3:::YOUR-CM-BUCKET-NAME-HERE"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "dynamodb:BatchGetItem",
            "Resource": "arn:aws:dynamodb:region:accountId:table/userdata"
        },
        {
            "Effect": "Allow",
            "Action": "mobiletargeting:CreateImportJob",
            "Resource": "arn:aws:mobiletargeting:region:accountId:apps/application-id"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::accountId:role/PinpointSegmentImport"
        }
    ]
}
    • Replace the placeholder YOUR-CM-BUCKET-NAME-HERE with the name of the S3 bucket you created in the previous blog post to store Cohort Modeler exports, and YOUR-PINPOINT-BUCKET-NAME-HERE with the name of the Pinpoint import bucket you created earlier in this post.
    • Remember to replace accountId, region and application-id with your AWS account ID, the region you’re running Amazon Pinpoint from, and the Amazon Pinpoint project ID respectively.
    • Choose Review Policy.
    • Give the policy a name – we used S3TriggerCohortIngestPolicy.
    • Finally, choose Create Policy.
Trigger the Lambda via S3

The goal is for the Lambda to be triggered when Cohort Modeler drops the extract file into its designated S3 delivery bucket. Fortunately, setting this up is a simple process:

  • Navigate back to the Lambda Functions page. For this particular Lambda script S3TriggerCohortIngest, choose the + Add trigger from the Function overview section.
    • From the Trigger configuration dropdown, select S3 as the source.
    • Under Bucket, enter or select the bucket you’ve chosen for Cohort Modeler extract delivery. (Created in the previous blog.)
    • Leave Event type as “All object create events”.
    • Leave both Prefix and Suffix blank.
    • Check the box that acknowledges that using the same bucket for input and output is not recommended, as it can increase Lambda usage and thereby costs.
    • Finally, choose Add.
    • Lambda will add the appropriate permissions to invoke the trigger when an object is created in the S3 bucket.
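
For reference, the same trigger can also be configured programmatically. The boto3 sketch below (the bucket name is a placeholder) grants S3 permission to invoke the function and then registers the notification:

import boto3

lambda_client = boto3.client('lambda')
s3 = boto3.client('s3')

export_bucket = 'your-cohort-modeler-export-bucket'  # placeholder
function_name = 'S3TriggerCohortIngest'

# Allow S3 to invoke the Lambda (the console does this for you when you add the trigger).
lambda_client.add_permission(
    FunctionName=function_name,
    StatementId='AllowS3Invoke',
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn='arn:aws:s3:::' + export_bucket
)

function_arn = lambda_client.get_function(FunctionName=function_name)['Configuration']['FunctionArn']

# Register the bucket notification for all object-create events.
s3.put_bucket_notification_configuration(
    Bucket=export_bucket,
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [
            {
                'LambdaFunctionArn': function_arn,
                'Events': ['s3:ObjectCreated:*']
            }
        ]
    }
)
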
Test the Lambda

The best way to test the end-to-end process is to simply connect to the endpoint you created in the first step of the process and send it a valid query. I personally use Postman, but you can use curl or any other HTTP tool to send the request.

Again, refer back to your prior work to determine the HTTP API endpoint for your Cohort Modeler’s cohort extract endpoint, and then send it the following query:

https://YOUR-ENDPOINT.execute-api.YOUR-REGION.amazonaws.com/Prod/data/cohort/ea_atrisk?threshold=2

You should receive back a response that looks something like this:

{'statusCode': 200, 'body': 'export/ea_atrisk_2_2023-09-12_13-57-06.json'}

The status code confirms that the request was successful, and the body provides the name of the export file which was created.

  • From the AWS console, navigate to the S3 Dashboard, and select the S3 Bucket you assigned to Cohort Modeler exports. You should see a JSON file corresponding to the response from your API call in that bucket.
  • Still in S3, navigate back and select the S3 bucket you assigned as your Pinpoint Import bucket. You should find a CSV file with the same file prefix in that bucket.
  • Finally, navigate to the Pinpoint dashboard and choose your Project.
  • From the navigation pane, select Segments. You should see a segment name which directly corresponds to the CSV file which you located in the Pinpoint Import bucket.

If these three steps are complete, then the outbound arm of the Community Engagement Flywheel is functional. All that’s left now is to test the Segment by using it in a Campaign.
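
If you’d like to confirm the import without clicking through the console, a quick boto3 check (the project ID is a placeholder) can list the segments in your project:

import boto3

pinpoint = boto3.client('pinpoint')

response = pinpoint.get_segments(ApplicationId='your-pinpoint-project-id')  # placeholder

# Each imported segment is named after the CSV file that produced it.
for segment in response['SegmentsResponse']['Item']:
    print(segment['Name'], segment['Id'])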

Create an email template

In order to send your message recipients a message, you’ll need a message template. In this section, we’ll walk you through this process. The Pinpoint Template Editor is a simple HTML editor, but other third-party services, such as visual designers, can integrate directly with Pinpoint to provide a seamless connection between the design tool and Pinpoint.

  • From the navigation pane of the Pinpoint console, choose Message templates, and then select Create template.
  • Leave the Channel set to Email, and under Template name, enter a unique and memorable name.
  • Under Subject, enter a subject line – we used ‘Happy Video Game Day!’, but use whatever you would like.
  • Locate and copy the contents of EmailTemplate.html, and paste the contents into the Message section of the form.
  • Finally, choose Create, and your Template will now be available for use.

Create & Send the Pinpoint Campaign

For the final step, you will create and send a campaign to the endpoints included in the Segment that the Community Engagement Flywheel created. Earlier, you mapped three email addresses to the identities that Cohort Modeler generated for your query: your email, and two test emails from the SES Email Simulator. As a result, you should receive one email to the email address you selected when you’ve completed this process, as well as events which indicate the status of all campaign activities.

  • In the navigation bar of the Pinpoint console, choose All projects, and select the project you’ve created for this exercise.
  • From the navigation pane, choose Campaigns, and then Create a campaign at the top of the page.
  • On the Create a campaign page, give your campaign a name, highlight Standard campaign, and choose Email for the Channel. To proceed, choose Next.
  • On the Choose a segment page, highlight Use an existing segment, and from the Segment dropdown, select the segment that was created earlier (its name matches the import CSV). Once selected, choose Next.
  • On the Create your message page, you have two tasks:
    • You’re going to use the email template you created in the prior step, so in the Email template section, under Template name, select Choose a template, followed by the template you created, and finally Choose template.
    • In the Email settings section, ensure you’ve selected the sender email address you verified previously when you initially created the Pinpoint project.
    • Choose Next.
  • On the Choose when to send the campaign page, ensure Immediately is highlighted for when you want the campaign to be sent. Scroll down and choose Next.
  • Finally, on the Review and launch page, verify your selections as you scroll down the page, and finally Launch campaign.

Check your inbox! You will shortly receive the email, and this confirms the Campaign has been successfully sent.

Conclusion

So far you’ve extended the Cohort Modeler to report on the cohorts it’s built for you, you’ve operated on that extract and built an ETL machine to turn that cohort into relevant contact and personalization data, you’ve imported the contact data into Pinpoint as a static Segment, and you’ve created a Pinpoint Campaign witih that Segment to send messaging to that Cohort.

In the next and final blog post, we’ll show how to respond to events that result from your cohort members interacting with the messaging they’ve been sent, and how to enrich the cohort data with those events so you can understand in deeper detail how your messaging works – or doesn’t work – with your cohort members.

About the Authors

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. At work, he specializes in technical implementation of communications services in enterprise systems and architecture/solutions design. In his spare time, he enjoys chess, rock climbing, hiking and triathlon.

Brett Ezell

Brett Ezell is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. As a Navy veteran, he joined AWS in 2020 through an AWS technical military apprenticeship program. When he isn’t deep diving into solutions for customer challenges, Brett spends his time collecting vinyl, attending live music, and training at the gym. An admitted comic book nerd, he feeds his addiction every Wednesday by combing through his local shop for new books.

Build Better Engagement using the AWS Community Engagement Flywheel: Part 1 of 3

Post Syndicated from Tristan Nguyen original https://aws.amazon.com/blogs/messaging-and-targeting/build-better-engagement-using-the-aws-community-engagement-flywheel-part-1-of-3/

Introduction

Part 1 of 3: Extending the Cohort Modeler

Businesses are constantly looking for better ways to engage with customer communities, but it’s hard to do when profile data is limited to user-completed form input or messaging campaign interaction metrics. Neither of these data sources tell a business much about their customer’s interests or preferences when they’re engaging with that community.

To bridge this gap for their community of customers, AWS Game Tech created the Cohort Modeler: a deployable solution for developers to map out and classify player relationships and identify like behavior within a player base. Additionally, the Cohort Modeler allows customers to aggregate and categorize player metrics by leveraging behavioral science and customer data.

In this series of three blog posts, you’ll learn how to:

  1. Extend the Cohort Modeler to provide reporting functionality.
  2. Use Amazon Pinpoint, the Digital User Engagement Events Database (DUE Events Database), and the Cohort Modeler together to group your customers into cohorts based on that data.
  3. Interact with them through automation to send meaningful messaging to them.
  4. Enrich their behavioral profiles via their interaction with your messaging.

In this blog post, we’ll show how to extend Cohort Modeler’s functionality to include and provide cohort reporting and extraction.

Use Case Examples for The Cohort Modeler

For this example, we’re going to retrieve a cohort of individuals from our Cohort Modeler who we’ve identified as at risk:

  • Maybe they’ve triggered internal alarms where they’ve shared potential PII with others over cleartext
  • Maybe they’ve joined chat channels known to be frequented by some of the game’s less upstanding citizens.

Either way, we want to make sure they understand the risks of what they’re doing and who they’re dealing with.

Because the Cohort Modeler’s API automatically translates the data it’s provided into the graph data format, the request we’re making is an easy one: we’re simply asking CM to retrieve all of the player IDs where the player’s ea_atrisk attribute value is greater than 2.

In our case, that means either:

  1. They’ve shared PII at least twice, or
  2. They’ve shared PII at least once and joined the #give-me-your-credit-card chat channel, which is frequented by real-life scammers.

These are currently the only two activities which generate at-risk data in our example model.
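
Conceptually, that request boils down to a single graph traversal. The gremlinpython sketch below mirrors the query the extension you’re about to build will run against Neptune (the endpoint URL is a placeholder):

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.traversal import P
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Placeholder - replace with your Neptune cluster's writer endpoint.
conn = DriverRemoteConnection('wss://your-neptune-endpoint:8182/gremlin', 'g')
g = traversal().withRemote(conn)

# Every player vertex whose ea_atrisk score is greater than 2.
at_risk = (g.V().hasLabel('player')
            .has('ea_atrisk', P.gt(2))
            .valueMap('playerId', 'ea_atrisk')
            .toList())

print(at_risk)
conn.close()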

Architecture overview

In this example, you’ll extend Cohort Modeler’s functionality by creating a new API resource and method, and test that functional extension to verify it’s working. This supports our use case by providing a studio with a mechanism to identify the cohort of users who have engaged in activities that may put them at risk for fraud or malicious targeting.

[Architecture diagram: Cohort Modeler extension architecture]

Prerequisites

This blog post series integrates two tech stacks: the Cohort Modeler and the Digital User Engagement Events Database, both of which you’ll need to install. In addition to setting up your environment, you’ll need to clone the Community Engagement Flywheel repository, which contains the scripts you’ll need to use to integrate Cohort Modeler and Pinpoint.

You should have the following prerequisites:

Walkthrough

Extending the Cohort Modeler

In order to meet our functional requirements, we’ll need to extend the Cohort Modeler API. This first part will walk you through the mechanisms to do so. In this walkthrough, you’ll:

  • Create an Amazon Simple Storage Service (Amazon S3) bucket to accept exports from the Cohort Modeler
  • Create an AWS Lambda Layer to support Python operations for Cohort Modeler’s Gremlin interface to the Amazon Neptune database
  • Build a Lambda function to respond to API calls requesting cohort data, and
  • Integrate the Lambda with the Amazon API Gateway.

The S3 Export Bucket

Normally it’d be enough to just create the S3 Bucket, but because our Cohort Modeler operates inside an Amazon Virtual Private Cloud (VPC), we need to both create the bucket and create an interface endpoint.

Create the Bucket

The size of a Cohort Modeler extract could be considerable depending on the size of a cohort, so it’s a best practice to deliver the extract to an S3 bucket. All you need to do in this step is create a new S3 bucket for Cohort Modeler exports.

  • Navigate to the S3 Console page, and inside the main pane, choose Create Bucket.
  • In the General configuration section, enter a bucket name, remembering that its name must be unique across all of AWS.
  • You can leave all other settings at their default values, so scroll down to the bottom of the page and choose Create Bucket. Remember the name – I’ll be referring to it as your “CM export bucket” from here on out.

Create S3 Gateway endpoint

When accessing “global” services, like S3 (as opposed to VPC services, like EC2) from inside a private VPC, you need to create an Endpoint for that service inside the VPC. For more information on how Gateway Endpoints for Amazon S3 work, refer to this documentation.

  • Open the Amazon VPC console.
  • In the navigation pane, under Virtual private cloud, choose Endpoints.
  • In the Endpoints pane, choose Create endpoint.
  • In the Endpoint settings section, under Service category, select AWS services.
  • In the Services section, under Find resources by attribute, filter by Type: Gateway and select com.amazonaws.region.s3.
  • For VPC section, select the VPC in which to create the endpoint.
  • In the Route tables section, select the route tables to be used by the endpoint. A route that points traffic destined for S3 to the gateway endpoint is automatically added to the selected route tables.
  • In the Policy section, select Full access to allow all operations by all principals on all resources over the VPC endpoint. Otherwise, select Custom to attach a VPC endpoint policy that controls the permissions that principals have to perform actions on resources over the VPC endpoint.
  • (Optional) To add a tag, choose Add new tag in the Tags section and enter the tag key and the tag value.
  • Choose Create endpoint.

Create the VPC Endpoint Security Group

The Lambda function you’ll build later runs inside this VPC, so it needs a Security Group whose rules allow its traffic to reach the S3 gateway endpoint – we’ll create that Security Group now.

  • Navigate to the Amazon VPC console and In the navigation pane, under Security, choose Security groups.
  • In the Security Groups pane choose Create security group.
  • Under the Basic details section, name your security group S3 Endpoint SG.
  • Under the Inbound rules section, choose Add rule.
    • Under Type, select All traffic.
    • Under Source, leave Custom selected.
    • For the Custom Source, open the dropdown and choose the S3 gateway endpoint’s prefix list (this should be named pl-63a5400a).
    • Repeat the process for Outbound rules.
    • When finished, choose Create security group

Creating a Lambda Layer

You can use the code as provided in a Lambda, but the gremlin libraries required for it to run are another story: gremlin_python doesn’t come as part of the default Lambda libraries. There are two ways to address this:

  • You can upload the libraries with the code in a .zip file; this will work, but it will mean the Lambda isn’t editable via the built-in editor, which isn’t a scalable technique (and makes debugging quick changes a real chore).
  • You can create a Lambda Layer, upload those libraries separately, and then associate them with the Lambda you’re creating.

The Layer is a best practice, so that’s what we’re going to do here.

Creating the zip file

In Python, you’ll need to upload a .zip file to the Layer, and all of your libraries need to be included in paths within the /python directory (inside the zip file) to be accessible. Use pip to install the libraries you need into a blank directory so you can zip up only what you need, and no more.

  • Create a new subdirectory in your user directory,
  • Create a /python subdirectory,
  • Invoke pip with the --target option:
pip install --target=./python gremlinpython

Ensure that you’re zipping the python folder, the resultant file should be named python.zip and extracts to a python folder.

Creating the Layer

Head to the Lambda console, and select the Layers menu option from the AWS Lambda navigation pane. From there:

  • Choose Create layer in the Layers section.
  • Give it a relevant name – like gremlinpython.
  • Select Upload a .zip file and upload the zip file you just created.
  • For Compatible architectures, select x86_64.
  • Select Python 3.8 as the compatible runtime.
  • Choose Create.

Assuming all steps have been followed, you’ll receive a message that the layer has been successfully created.
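
If you’d prefer to publish the layer programmatically instead, a boto3 sketch like the one below (assuming python.zip is in the current directory) accomplishes the same thing:

import boto3

lambda_client = boto3.client('lambda')

with open('python.zip', 'rb') as f:
    response = lambda_client.publish_layer_version(
        LayerName='gremlinpython',
        Description='gremlinpython and its dependencies',
        Content={'ZipFile': f.read()},
        CompatibleRuntimes=['python3.8'],
        CompatibleArchitectures=['x86_64']
    )

print(response['LayerVersionArn'])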

Building the Lambda

You’ll be extending the Cohort Modeler with new functionality, and the way CM manages its functionality is via microservice-based Lambdas. You’ll be building a new API: to query the CM and extract Cohort information to S3.

Create the Lambda

Head back to the Lambda service menu and, in the Resources for (your region) section, choose Create function. From there:

  • On the Create function page select Author from scratch.
  • For Function Name enter ApiCohortGet for consistency.
  • For Runtime choose Python 3.8.
  • For Architectures, select x86_64.
  • Under the Advanced Settings pane select Enable VPC – you’re going to need this Lambda to query Cohort Modeler’s Neptune database, which has VPC endpoints.
    • Under VPC select the VPC created by the Cohort Modeler installation process.
    • Select all subnets in the VPC.
    • Select the security group labeled as the Security Group for API Lambda functions (also installed by CM)
    • Furthermore, select the security group S3 Endpoint SG we created; this allows the Lambda function hosted inside the VPC to access the S3 bucket.
  • Choose Create Function.
  • In the Code tab, and within the Code source window, delete all of the sample code and replace it with the code below. This python script will allow you to query Cohort Modeler for cohort extracts.
import os
import json
import boto3
from datetime import datetime
from gremlin_python import statics
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.driver.protocol import GremlinServerError
from gremlin_python.driver import serializer
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.strategies import *
from gremlin_python.process.traversal import T, P
from aiohttp.client_exceptions import ClientConnectorError
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3 = boto3.client('s3')

def query(g, cohort, thresh):
    return (g.V().hasLabel('player')
            .has(cohort, P.gt(thresh))
            .valueMap("playerId", cohort)
            .toList())

def doQuery(g, cohort, thresh):
    return query(g, cohort, thresh)

# Lambda handler
def lambda_handler(event, context):
    
    # Connection instantiation
    conn = create_remote_connection()
    g = create_graph_traversal_source(conn)
    try:
        # Validate the cohort info here if needed.

        # Grab the event resource, method, and parameters.
        resource = event["resource"]
        method = event["httpMethod"]
        pathParameters = event["pathParameters"]

        # Grab query parameters. We should have two: cohort and threshold
        queryParameters = event.get("queryStringParameters", {})

        cohort_val = pathParameters.get("cohort")
        thresh_val = int(queryParameters.get("threshold", 0))

        result = doQuery(g, cohort_val, thresh_val)

        
        # Convert result to JSON
        result_json = json.dumps(result)
        
        # Generate the current timestamp in the format YYYY-MM-DD_HH-MM-SS
        current_timestamp = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
        
        # Create the S3 key with the timestamp
        s3_key = f"export/{cohort_val}_{thresh_val}_{current_timestamp}.json"

        # Upload to S3
        s3_result = s3.put_object(
            Bucket=os.environ['S3ExportBucket'],
            Key=s3_key,
            Body=result_json,
            ContentType="application/json"
        )
        response = {
            'statusCode': 200,
            'body': s3_key
        }
        return response

    except Exception as e:
        logger.error(f"Error occurred: {e}")
        return {
            'statusCode': 500,
            'body': str(e)
        }

    finally:
        conn.close()

# Connection management
def create_graph_traversal_source(conn):
    return traversal().withRemote(conn)

def create_remote_connection():
    database_url = 'wss://{}:{}/gremlin'.format(os.environ['NeptuneEndpoint'], 8182)
    return DriverRemoteConnection(
        database_url,
        'g',
        pool_size=1,
        message_serializer=serializer.GraphSONSerializersV2d0()
    )

Configure the Lambda

Head back to the Lambda service page and, from the navigation pane, select Functions. In the Functions section, select ApiCohortGet from the list.

  • In the Function overview section, select the Layers icon beneath your Lambda name.
  • In the Layers section, choose Add a layer.
  • In the Choose a layer section, set Layer source to Custom layers.
  • From the dropdown menu below, select your recently created custom layer, gremlinpython.
  • For Version, select the appropriate (probably the highest, or most recent) version.
  • Once finished, choose Add.

Now, underneath the Function overview, navigate to the Configuration tab and choose Environment variables from the navigation pane.

  • Now choose Edit to create a new variable. For the key, enter NeptuneEndpoint, and give it the value of the Cohort Modeler’s Neptune database endpoint. This value is available from the Neptune console under Databases. Use the writer endpoint, not the read-only cluster endpoint; once you’ve selected the writer instance, the endpoint URL is listed under the Connectivity & security tab.
  • Create an additional key titled S3ExportBucket, and for the value use the unique name of the S3 bucket you created earlier to receive extracts from Cohort Modeler. Once complete, choose Save.
  • In a production build, you could store this information in AWS Systems Manager Parameter Store to ensure portability and resilience.

While still in the Configuration tab, under the navigation pane choose Permissions.

  • Note that AWS has created an IAM Role for the Lambda. Select the role name to view it in the IAM console.
  • Under the Permissions tab, in the Permissions policies section, there should be two policies attached to the role: AWSLambdaBasicExecutionRole and AWSLambdaVPCAccessExecutionRole.
  • You’ll need to give the Lambda access to your CM export bucket
  • Also in the Permissions policies section, choose the Add permissions dropdown and select Create Inline policy – we won’t be needing this role anywhere else.
  • On the new page, choose the JSON tab.
    • Delete all of the sample code within the Policy editor, and paste the inline policy below into the text area.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::YOUR-S3-BUCKET-NAME-HERE",
                "arn:aws:s3:::YOUR-S3-BUCKET-NAME-HERE/*"
            ]
        }
    ]
}
  • Replace the placeholder YOUR-S3-BUCKET-NAME-HERE with the name of your CM export bucket.
  • Click Review Policy.
  • Give the policy a name – I used ApiCohortGetS3Policy.
  • Click Create Policy.

Integrating with API Gateway

Now you’ll need to establish the API Gateway that the Cohort Modeler created with the new Lambda functions that you just created. If you’re on the old console User Interface, we strongly recommend switching over to the new console UI. This is due to the previous UI being deprecated by the 30th of October 2023. Consequently, the following instructions will apply to the new console UI.

  • Navigate to the main service page for API Gateway.
  • From the navigation pane, choose Use the new console.

Create the Resource

  • From the new console UI, select the name of the API Gateway from the APIs Section that corresponds to the name given when you launched the SAM template.
  • On the Resources navigation pane, choose /data, followed by selecting Create resource.
  • Under Resource name, enter cohort, followed by Create resource.

We’re not quite finished. We want to be able to ask the Cohort Modeler to give us a cohort based on a path parameter – so that way when we go to /data/cohort/COHORT-NAME/ we receive back information about the cohort name that we provided. Therefore…

Create the Method

Now we’ll create the GET Method we’ll use to request cohort data from Cohort Modeler.

  • From the same menu, choose the /data/cohort/{cohort} resource, then choose Create method from the Methods section.
  • From the Create method page, select GET under Method type, and select Lambda function under the Integration type.
  • For Lambda proxy integration, turn the toggle switch on.
  • Under Lambda function, choose the function ApiCohortGet, created previously.
  • Finally, choose Create method.
  • API Gateway will prompt and ask for permissions to access the Lambda – this is fine, choose OK.

Create the API Key

You’ll want to access your API securely, so at a minimum you should create an API Key and require it as part of your access model.

  • Under the API Gateway navigation pane, choose APIs. From there, select API Keys, also under the navigation pane.
  • In the API keys section, choose Create API key.
  • On the Create API key page, enter your API Key name, while leaving the remaining fields at their default values. Choose Save to complete.
  • Returning to the API keys section, select the API key you just created and copy its value – you’ll need it when you test the API.
  • Once again, select APIs from the navigation menu, and continue again by selecting the link to your CM API from the list.
  • From the navigation pane, choose API settings, folded under your API name, and not the Settings option at the bottom of the tab.

  • In the API details section, choose Edit. Once on the Edit API settings page, ensure Header is selected under API key source.

Deploy the API

Now that you’ve made your changes, you’ll want to deploy the API with the new endpoint enabled.

  • Back in the navigation pane, under your CM API’s dropdown menu, choose Resources.
  • On the Resources page for your CM API, choose Deploy API.
  • Select the Prod stage (or create a new stage name for testing) and click Deploy.

Test the API

When the API has deployed, the system will display your API’s URL. You should now be able to test your new Cohort Modeler API:

  • Using your favorite tool (curl, Postman, etc.) create a new request to your API’s URL.
    • The URL should look like https://randchars.execute-api.us-east-1.amazonaws.com/Stagename. You can retrieve your API Gateway endpoint URL by selecting API settings in the navigation pane of your CM API’s dropdown menu.
    • From the API settings page, under Default endpoint, you will see your active API Gateway endpoint URL. Remember to add the stage name (for example, “Prod”) at the end of the URL.

    • Be sure you’re adding a header named X-API-Key to the request, and give it the value of the API key you created earlier.
    • Add the /data/cohort resource to the end of the URL to access the new endpoint.
    • Add /ea_atrisk after /data/cohort – you’re querying for the cohort of players who belong to the at-risk cohort.
    • Finally, add ?threshold=2 so that we’re only looking at players whose cohort value (in this case, the number of times they’ve shared personally identifiable information) is greater than 2. The final URL should look something like: https://randchars.execute-api.us-east-1.amazonaws.com/Stagename/data/cohort/ea_atrisk?threshold=2
  • Once you’ve submitted the query, your response should look like this:
{'statusCode': 200, 'body': 'export/ea_atrisk_2_2023-09-12_13-57-06.json'}

The status code indicates a successful query, and the body provides the name of the JSON file in your extract S3 bucket which contains the cohort information. The name is composed of the attribute, the threshold level, and the time the export was made. Go ahead and navigate to the S3 bucket, find the file, and download it to see what Cohort Modeler has found for you.
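
If you’d rather script this test than use Postman or curl, a short requests sketch (the endpoint URL, stage name, and API key value are placeholders) looks like this:

import requests

API_URL = 'https://randchars.execute-api.us-east-1.amazonaws.com/Stagename'  # placeholder
API_KEY = 'your-api-key-value'  # placeholder

response = requests.get(
    API_URL + '/data/cohort/ea_atrisk',
    params={'threshold': 2},
    headers={'X-API-Key': API_KEY}
)

print(response.status_code)
print(response.text)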

Troubleshooting

Installing the Game Tech Cohort Modeler

  • Error: Could not find public.ecr.aws/sam/build-python3.8:latest-x86_64 image locally and failed to pull it from docker
    • Try: docker logout public.ecr.aws.
    • Attempt to pull the docker image locally first: docker pull public.ecr.aws/sam/build-python3.8:latest-x86_64
  • Error: RDS does not support creating a DB instance with the following combination:DBInstanceClass=db.r4.large, Engine=neptune, EngineVersion=1.2.0.2, LicenseModel=amazon-license.
    • The default option r4 family was offered when Neptune was launched in 2018, but now newer instance types offer much better price/performance. As of engine version 1.1.0.0, Neptune no longer supports r4 instance types.
    • Therefore, we recommend choosing another Neptune instance based on your needs, as detailed on this page.
      • For testing and development, you can consider the t3.medium and t4g.medium instances, which are eligible for Neptune free-tier offer.
      • Remember to add the instance type that you want to use to the AllowedValues attribute of DBInstanceClass, and rebuild using sam build --use-container.

Using the data gen script (for automated data generation)

  • By default, the Cohort Modeler deployment does not include the CohortModelerGraphGenerator.ipynb notebook, which is required for dummy data generation.
  • You will need to log in to your SageMaker instance, upload the CohortModelerGraphGenerator.ipynb file, and run through the cells to generate the dummy data into your S3 bucket.
  • Finally, you’ll need to follow the instructions in this page to load the dummy data from Amazon S3 into your Neptune instance.
    • For the IAM role for Amazon Neptune to load data from Amazon S3, the stack should have created a role with the name Cohort-neptune-iam-role-gametech-modeler.
    • You can run the requests script from your jupyter notebook instance, since it already has access to the Amazon Neptune endpoint. The python script should look like below:
import requests
import json

url = 'https://<NeptuneEndpointURL>:8182/loader'

headers = {
    'Content-Type': 'application/json'
}

data = {
    "source": "<S3FileURI>",
    "format": "csv",
    "iamRoleArn": "NeptuneIAMRoleARN",
    "region": "us-east-1",
    "failOnError": "FALSE",
    "parallelism": "MEDIUM",
    "updateSingleCardinalityProperties": "FALSE",
    "queueRequest": "TRUE"
}

response = requests.post(url, headers=headers, data=json.dumps(data))

print(response.text)

    • Remember to replace the NeptuneEndpointURL, S3FileURI, and NeptuneIAMRoleARN.
    • Remember to load user_vertices.csv, campaign_vertices.csv, action_vertices.csv, interaction_edges.csv, engagement_edges.csv, campaign_edges.csv, and campaign_bidirectional_edges.csv in that order.

Conclusion

In this post, you’ve extended the Cohort Modeler to respond to requests for cohort data, by both querying the cohort database and providing an extract in an S3 bucket for future use. In the next post, we’ll demonstrate how creating this file triggers an automated process. This process will identify the players from the cohort in the studio’s database, extract their contact and other personalization data, compiling the data into a CSV file from that request, and import that file into Pinpoint for targeted messaging.

About the Authors

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. At work, he specializes in technical implementation of communications services in enterprise systems and architecture/solutions design. In his spare time, he enjoys chess, rock climbing, hiking and triathlon.

Brett Ezell

Brett Ezell is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. As a Navy veteran, he joined AWS in 2020 through an AWS technical military apprenticeship program. When he isn’t deep diving into solutions for customer challenges, Brett spends his time collecting vinyl, attending live music, and training at the gym. An admitted comic book nerd, he feeds his addiction every Wednesday by combing through his local shop for new books.