Friday, July 31, 2020

New – Using Amazon GuardDuty to Protect Your S3 Buckets

As we anticipated in this post, the anomaly and threat detection for Amazon Simple Storage Service (S3) activities that was previously available in Amazon Macie has now been enhanced and reduced in cost by over 80% as part of Amazon GuardDuty. This expands GuardDuty threat detection coverage beyond workloads and AWS accounts to also help you protect your data stored in S3.

This new capability enables GuardDuty to continuously monitor and profile S3 data access events (usually referred to as data plane operations) and S3 configurations (control plane APIs) to detect suspicious activities such as requests coming from an unusual geo-location, disabling of preventative controls such as S3 Block Public Access, or API call patterns consistent with an attempt to discover misconfigured bucket permissions. To detect possibly malicious behavior, GuardDuty uses a combination of anomaly detection, machine learning, and continuously updated threat intelligence. For your reference, here’s the full list of GuardDuty S3 threat detections.

When threats are detected, GuardDuty produces detailed security findings to the console and to Amazon EventBridge, making alerts actionable and easy to integrate into existing event management and workflow systems, or to trigger automated remediation actions using AWS Lambda. You can optionally deliver findings to an S3 bucket to aggregate findings from multiple regions, and to integrate with third-party security analysis tools.
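As a minimal sketch of that wiring with boto3, here is one way you could route GuardDuty findings to a remediation Lambda function through an EventBridge rule. The rule name, function name, and account ID are placeholders, and the Lambda permission that allows EventBridge to invoke the function is omitted.

import json
import boto3

events = boto3.client('events')

# Match all GuardDuty findings delivered to the default event bus.
# The pattern could be narrowed further, e.g. by severity or finding type.
events.put_rule(
    Name='guardduty-s3-findings',
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"]
    }),
    State='ENABLED'
)

# Send matching findings to a (hypothetical) remediation Lambda function.
events.put_targets(
    Rule='guardduty-s3-findings',
    Targets=[{
        'Id': 's3-remediation-lambda',
        'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:s3-remediation'
    }]
)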

If you are not using GuardDuty yet, S3 protection will be on by default when you enable the service. If you are already using GuardDuty, you can enable this new capability with one click in the GuardDuty console or through the API. For simplicity, and to optimize your costs, GuardDuty has now been integrated directly with S3. In this way, you don’t need to manually enable or configure S3 data event logging in AWS CloudTrail to take advantage of this new capability. GuardDuty also intelligently processes only the data events that can be used to generate threat detections, significantly reducing the number of events processed and lowering your costs.
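As a rough sketch, enabling S3 protection through the API with boto3 could look like this (assuming GuardDuty is already enabled in the region):

import boto3

guardduty = boto3.client('guardduty')

# Each region has at most one detector; grab its ID.
detector_id = guardduty.list_detectors()['DetectorIds'][0]

# Turn on S3 protection for that detector.
guardduty.update_detector(
    DetectorId=detector_id,
    DataSources={'S3Logs': {'Enable': True}}
)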

If you are part of a centralized security team that manages GuardDuty across your entire organization, you can manage all accounts from a single account using the integration with AWS Organizations.

Enabling S3 Protection for an AWS Account
I already have GuardDuty enabled for my AWS account in this region. Now, I want to add threat detection for my S3 buckets. In the GuardDuty console, I select S3 Protection and then Enable. That’s it. For broader coverage, I repeat this process for all the regions enabled in my account.

After a few minutes, I start seeing new findings related to my S3 buckets. I can select each finding to get more information on the possible threat, including details on the source actor and the target action.

After a few days, I select the Usage section of the console to monitor the estimated monthly costs of GuardDuty in my account, including the new S3 protection. I can also see which S3 buckets are contributing most to the costs. Well, it turns out I didn’t have lots of traffic on my buckets recently.

Enabling S3 Protection for an AWS Organization
To simplify management of multiple accounts, GuardDuty uses its integration with AWS Organizations to allow you to delegate an account to be the administrator for GuardDuty for the whole organization.

Now, the delegated administrator can enable GuardDuty for all accounts in the organization in a region with one click. You can also set Auto-enable to ON to automatically include new accounts in the organization. If you prefer, you can add accounts by invitation. You can then go to the S3 Protection page under Settings to enable S3 protection for the entire organization.

When selecting Auto-enable, the delegated administrator can also choose to enable S3 protection automatically for new member accounts.
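A sketch of the equivalent API calls with boto3, run from the delegated administrator account, might look like this:

import boto3

guardduty = boto3.client('guardduty')

# Detector owned by the delegated administrator account in this region.
detector_id = guardduty.list_detectors()['DetectorIds'][0]

# Auto-enable GuardDuty, including S3 protection, for new member accounts.
guardduty.update_organization_configuration(
    DetectorId=detector_id,
    AutoEnable=True,
    DataSources={'S3Logs': {'AutoEnable': True}}
)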

Available Now
As always, with Amazon GuardDuty, you only pay for the quantity of logs and events processed to detect threats. This includes API control plane events captured in CloudTrail, network flow captured in VPC Flow Logs, DNS request and response logs, and with S3 protection enabled, S3 data plane events. These sources are ingested by GuardDuty through internal integrations when you enable the service, so you don’t need to configure any of these sources directly. The service continually optimizes logs and events processed to reduce your cost, and displays your usage split by source in the console. If configured in multi-account, usage is also split by account.

There is a 30-day free trial for the new S3 threat detection capabilities. This also applies to accounts that already have GuardDuty enabled and are adding the new S3 protection capability. During the trial, the estimated cost based on your S3 data event volume is calculated in the GuardDuty console Usage tab. In this way, while you evaluate these new capabilities at no cost, you can understand what your monthly spend would be.

GuardDuty for S3 protection is available in all regions where GuardDuty is offered. For regional availability, please see the AWS Region Table. To learn more, please see the documentation.

Danilo

Via AWS News Blog https://ift.tt/1EusYcK

COVID-19 leads to shocking cloud computing bills

It’s pretty significant when the Wall Street Journal talks about cloud issues, and this story (behind a paywall) is no different. The gist is that as enterprises support a mostly remote workforce with cloud computing, they are, of course, seeing rapid growth in the monthly public cloud bills. 

Although a 20 percent increase in monthly cloud spend is about average, I’ve seen month-to-month growth as high as 50 percent. This is without expanding the number of applications or the amount of data; it simply reflects how the clouds are now being used.


Thursday, July 30, 2020

Announcing the New AWS Community Builders Program!

We continue to be amazed by the enthusiasm for AWS knowledge sharing in technical communities. Many experienced AWS advocates are passionate about helping others build on AWS by sharing their challenges, success stories, and code. Others who are newer to AWS are showing a similar enthusiasm for community building and are asking how they can get more involved in community activities. These builders are seeking better ways to connect with one another, share best practices, and receive resources & mentorship to help improve community knowledge sharing.

To help address these points, we are excited to announce the new AWS Community Builders Program which offers technical resources, mentorship, and networking opportunities to AWS enthusiasts and emerging thought leaders who are passionate about sharing knowledge and connecting with the technical community. As of today, this program is open for anyone to apply to join!

Members of the program will receive:

  • Access to AWS product teams and information about new services and features
  • Mentorship from AWS subject matter experts on a variety of topics, including content creation, community building, and securing speaking engagements
  • AWS Promotional Credits and other helpful resources to support content creation and community-based work

Any individual who is passionate about building on AWS can apply to join the AWS Community Builders program. The application process is open to AWS builders worldwide, and the program seeks applicants from all regions, demographics, and underrepresented communities.

While there is no single specific criterion for being accepted into the program, applications will generally be reviewed for evidence and accuracy of technical content (such as blog posts, open source contributions, presentations, and online knowledge sharing) and for community organization efforts, such as hosting AWS Community Days, AWS User Groups, or other community-based events. Equally important, the program seeks individuals from diverse backgrounds who are enthusiastic about getting more involved in these types of activities! The program will accept a limited number of applicants per year.

Please apply to be an AWS Community Builder today. To learn more, you can get connected via a variety of community resources.

Channy and Jason

Via AWS News Blog https://ift.tt/1EusYcK

Wednesday, July 29, 2020

Amazon Translate now supports Office documents

Whether your organization is a multinational enterprise present in many countries, or a small startup hungry for global success, translating your content to local languages may be an enduring challenge. Indeed, text data often comes in many formats, and processing them may require several different tools. Also, as all these tools may not support the same language pairs, you may have to convert certain documents to intermediate formats, or even resort to manual translation. All these issues add extra cost, and create unnecessary complexity in building consistent and automated translation workflows.

Amazon Translate aims to solve these problems in a simple and cost-effective fashion. Using either the AWS console or a single API call, Amazon Translate makes it easy for AWS customers to quickly and accurately translate text in 55 different languages and variants.

Earlier this year, Amazon Translate introduced batch translation for plain text and HTML documents. Today, I’m very happy to announce that batch translation now also supports Office documents, namely .docx, .xlsx and .pptx files as defined by the Office Open XML standard.

Introducing Amazon Translate for Office Documents
The process is extremely simple. As you would expect, source documents have to be stored in an Amazon Simple Storage Service (S3) bucket. Please note that no document may be larger than 20 megabytes or contain more than 1 million characters.

Each batch translation job processes a single file type and a single source language. Thus, we recommend that you organize your documents in a logical fashion in S3, storing each file type and each language under its own prefix.

Then, using either the AWS console or the StartTextTranslationJob API in one of the AWS language SDKs, you can launch a translation job, passing:

  • the input and output location in S3,
  • the file type,
  • the source and target languages.

Once the job is complete, you can collect translated files at the output location.
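For example, a batch job for .docx files could be started with boto3 along these lines; the bucket names, role ARN, and language codes below are placeholders.

import boto3

translate = boto3.client('translate')

response = translate.start_text_translation_job(
    JobName='docx-batch-demo',
    InputDataConfig={
        'S3Uri': 's3://my-input-bucket/docx/',
        # MIME type for .docx; use the corresponding types for .xlsx or .pptx.
        'ContentType': 'application/vnd.openxmlformats-officedocument.wordprocessingml.document'
    },
    OutputDataConfig={'S3Uri': 's3://my-output-bucket/translated/'},
    DataAccessRoleArn='arn:aws:iam::123456789012:role/TranslateBatchRole',
    SourceLanguageCode='en',
    TargetLanguageCodes=['fr']
)
print(response['JobId'], response['JobStatus'])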

Let’s do a quick demo!

Translating Office Documents
Using the S3 console, I first upload a few .docx documents to one of my buckets.

S3 files

Then, moving to the Translate console, I create a new batch translation job, giving it a name, and selecting both the source and target languages.

Creating a batch job

Then, I define the location of my documents in S3, and their format, .docx in this case. Optionally, I could apply a custom terminology, to make sure specific words are translated exactly the way that I want.

Likewise, I define the output location for translated files. Please make sure that this path exists, as Translate will not create it for you.

Creating a batch job

Finally, I set the AWS Identity and Access Management (IAM) role, giving my Translate job the appropriate permissions to access S3. Here, I use an existing role that I created previously, and you can also let Translate create one for you. Then, I click on ‘Create job’ to launch the batch job.

Creating a batch job

The job starts immediately.

Batch job running

A little while later, the job is complete. All three documents have been translated successfully.

Viewing a completed job

Translated files are available at the output location, as visible in the S3 console.

Viewing translated files

Downloading one of the translated files, I can open it and compare it to the original version.

Comparing files

For small scale use, it’s extremely easy to use the AWS console to translate Office files. Of course, you can also use the Translate API to build automated workflows.

Automating Batch Translation
In a previous post, we showed you how to automate batch translation with an AWS Lambda function. You could expand on this example, and add language detection with Amazon Comprehend. For instance, here’s how you could combine the DetectDominantLanguage API with the Python-docx open source library to detect the language of .docx files.

import boto3
from docx import Document

# Open the .docx file and grab the text of its first paragraph
document = Document('blog_post.docx')
text = document.paragraphs[0].text

# Ask Amazon Comprehend for the dominant language of that text
comprehend = boto3.client('comprehend')
response = comprehend.detect_dominant_language(Text=text)

# Take the first detected language and its confidence score
top_language = response['Languages'][0]
code = top_language['LanguageCode']
score = top_language['Score']
print("%s, %f" % (code, score))

Pretty simple! You could also detect the type of each file based on its extension, and move it to the proper input location in S3. Then, you could schedule a Lambda function with CloudWatch Events to periodically translate files, and send a notification by email. Of course, you could use AWS Step Functions to build more elaborate workflows. Your imagination is the limit!
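For instance, a small helper along these lines (with a hypothetical bucket layout) could sort incoming files by extension into the prefix that the scheduled translation job uses as its input location:

import os
import boto3

s3 = boto3.client('s3')

# Map file extensions to input prefixes (layout is an assumption).
PREFIXES = {'.docx': 'input/docx/', '.xlsx': 'input/xlsx/', '.pptx': 'input/pptx/'}

def route_document(bucket, key):
    """Copy an uploaded document under the prefix matching its extension."""
    ext = os.path.splitext(key)[1].lower()
    prefix = PREFIXES.get(ext)
    if prefix is None:
        return  # not an Office document we handle
    s3.copy_object(
        Bucket=bucket,
        CopySource={'Bucket': bucket, 'Key': key},
        Key=prefix + os.path.basename(key)
    )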

Getting Started
You can start translating Office documents today in the following regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (London), Europe (Frankfurt), and Asia Pacific (Seoul).

If you’ve never tried Amazon Translate, did you know that the free tier offers 2 million characters per month for the first 12 months, starting from your first translation request?

Give it a try, and let us know what you think. We’re looking forward to your feedback: please post it to the AWS Forum for Amazon Translate, or send it to your usual AWS support contacts.

- Julien

Via AWS News Blog https://ift.tt/1EusYcK

Tuesday, July 28, 2020

Amazon Fraud Detector is now Generally Available

What was announced?

Amazon Fraud Detector is now Generally Available! 🥳

In case you missed the announcement during 2019 re:Invent week, Amazon Fraud Detector was originally released in preview on December 3rd, 2019. As of today, it is Generally Available for all customers to check out.

What is Amazon Fraud Detector?

Amazon Fraud Detector is a fully managed service that makes it easy to identify potentially fraudulent online activities such as online payment fraud and the creation of fake accounts.

Did you know that each year, tens of billions of dollars are lost to online fraud worldwide?

Companies with online businesses have to constantly be on guard for fraudulent activity such as fake accounts and payments made with stolen credit cards.  One way they try to identify fraudsters is by using fraud detection apps, some of which use Machine Learning (ML).

Enter Amazon Fraud Detector! It uses your data, ML, and more than 20 years of fraud detection expertise from Amazon to automatically identify potentially fraudulent online activity so you can catch more fraud faster. You can create a fraud detection model with just a few clicks and no prior ML experience because Fraud Detector handles all of the ML heavy lifting for you.

How it works…

“But how does it work?” you ask. 🤷🏻‍♀️

I’m so glad you asked! Let’s summarize this into 5 main steps. 👩🏻‍💻

  • Step 1: Define the event you want to assess for fraud.
  • Step 2: Upload your historical event dataset to Amazon S3 and select a fraud detection model type.
  • Step 3: Amazon Fraud Detector uses your historical data as input to build a custom model. The service automatically inspects and enriches data, performs feature engineering, selects algorithms, trains and tunes your model, and hosts the model.
  • Step 4: Create rules to either accept, review, or collect more information based on model predictions.
  • Step 5: Call the Amazon Fraud Detector API from your online application to receive real-time fraud predictions and take action based on your configured detection rules. (For example, an ecommerce application can send an email and IP address, and receive a fraud score as well as the output from your rule, e.g., review.)

Let’s see a demo…

Let’s have a demo to better understand how it all fits together. In today’s post, we will walk you through two main components: Building an Amazon Fraud Detector model and Generating real-time fraud predictions.

Part A: Building an Amazon Fraud Detector model

We begin by uploading fictitious, generated training data to an S3 bucket. In fact, our user guide has a sample dataset that we can use. Once we have downloaded that CSV file, we upload it to an S3 bucket as our training data.

For context, let’s also go ahead and open that CSV file and see what’s inside…

👉🏾NOTE: With Amazon Fraud Detector, you’re able to choose a minimum of 2 variables to train a model, not just the email and IP address. (In fact, the model supports up to 100 inputs!)

We continue by defining (creating) an event type. An event type describes the structure of the events we want to evaluate for fraud, essentially the set of attributes sent with each event. (Amazon Fraud Detector evaluates ‘events’ for fraud.)

Let’s create a New Entity. This entity represents the person or thing that is triggering the event.

event_details

create_entity

We move on to Event Variables. We will select variables from a training dataset, which lets us use the earlier mentioned CSV file and pull in its headers.

For the IAM role section, we create a new one. I am going to use the same name as the bucket I just created, ‘fraud-detector-training-data’.

And now we can upload the earlier mentioned CSV file to pull in the headers.


Because we are going to define a model, we must define at least two labels.

Let’s finalize creating our event!

If all goes well, we get a happy green bar that alerts us to the fact that our event was successfully created!

event_detail_page

Now it’s time to create our Model.


Let’s take a moment to Define model details. We make sure to select our previously created event type.

create_model_step_1

We move on to Configure training and make sure to select the labels under Fraud labels and Legitimate labels. (This allows us to separate our classifications so that the model can learn to distinguish between these two labels.)

Model training takes from about 30-40 minutes up to a couple of hours, depending on the dataset size. This example dataset takes around 40 minutes to train.

For the purpose of this blog post, let’s pretend we’ve already skipped ahead 40 minutes in time to a training model that is complete. 🙌🏾

model_detail_page

You can also check out your model’s performance metrics!

model_performance

We can now proceed to deploy our Model.

deploy_model_1

A pop-up modal asks us to confirm that this is the version we wish to deploy.

deploy_model_confirmation


Part B: Generate real-time fraud predictions

It’s time to generate real-time fraud predictions! Ready?

At this point you have a deployed model that you’re happy with and want to use to get predictions.

We must build a Detector, which is a container for your models and rules. It holds the detection logic that you want to apply when evaluating the event.

We go on to define the Detector details.

We also make sure to select our previously created Event.

detector_wizard_step_1

Now we select a Model.

add_model_to_detector

We move on to establish some threshold rules.

The rules interpret the output of the Model. They also determine the output of the Detector.

high_fraud_risk_rule

Let’s do two more rules.

Besides the high_fraud_risk rule, we also want to add low_fraud_risk and medium_fraud_risk rules.

low_fraud_risk_rule

medium_fraud_risk_rule

Remember that these rule threshold values are examples only. When creating rules for your own detector, you should use values that are appropriate based on your model, data and business.
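For reference, creating a threshold rule like this through the API might look roughly as follows with boto3; the detector ID, score variable, and outcome name are examples mirroring the console walkthrough, not fixed values.

import boto3

frauddetector = boto3.client('frauddetector')

# '$sample_fraud_detection_model_insightscore' is a placeholder for the
# score variable exposed by your trained model.
frauddetector.create_rule(
    ruleId='high_fraud_risk',
    detectorId='sample_detector',
    description='Flag events the model scores above 900',
    expression='$sample_fraud_detection_model_insightscore > 900',
    language='DETECTORPL',
    outcomes=['verify_customer']
)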

Now in our example for this post, these particular threshold rules are never going to match at the same time.

three_rules_created

This means that either Rule Execution mode is fine to use in our current example.

Yay! We’ve created our Detector.

detector_created_banner

Now let’s click on the Rules tab.

detector_rules_tab

We can also check out what models we have under the Models tab.

detector_models_tab

If we go back to the Overview tab, we can even run a quick test! We can run tests to sample the output from our Detector. 

run_test

Once we’re ready, we can publish this version of the detector to make it the Active version. Each detector can have one Active version at a time.

publish_detector

A pop-up modal asks us to confirm if we’re ready to publish this version.

The next step is to run real time predictions! Let’s show a sample one-off prediction with an Amazon SageMaker notebook and see what that looks like.

We move to the Amazon SageMaker console, and go to Notebook instances.

In this case you can see I already have a Jupyter Notebook ready to go.

We’re going to run the get_event_prediction block. This is our main runtime API and customers can call it using a script to run a batch of sample predictions. Alternatively, customers can also integrate this API into their applications to generate real-time predictions, and adjust user experiences dynamically based on risk.
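If you’re curious what such a block might contain, here is a rough one-off sketch with boto3; the detector, event type, entity, and variable names are placeholders and should match whatever you defined earlier.

import boto3
from datetime import datetime, timezone

frauddetector = boto3.client('frauddetector')

response = frauddetector.get_event_prediction(
    detectorId='sample_detector',                     # placeholder names below
    eventId='my-sample-event-0001',
    eventTypeName='sample_registration',
    eventTimestamp=datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ'),
    entities=[{'entityType': 'sample_customer', 'entityId': 'unknown'}],
    eventVariables={
        'email_address': 'fake_bob@example.com',
        'ip_address': '203.0.113.10'
    }
)

print(response['modelScores'])   # model score(s) for this event
print(response['ruleResults'])   # matched rules and their outcomes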

After running this block, here are the model score results we receive.

We had 1 model in this Detector and it returned a score of 933. According to the rules we created, this transaction is therefore considered a high_fraud_risk.

get_prediction

Let’s head back to the Amazon Fraud Detector console and check out the Rules in our Detector.

We can see from the Rules of our Detector that if the risk score is over 900, the Outcome should be verify_customer.

This completes the loop!

We now have confirmation that you can call this Detector in real time and get your Fraud Predictions.

🌎 Lastly…
Amazon Fraud Detector is now globally available to our customers and is integrated with many AWS services such as Amazon CloudWatch, AWS CloudTrail, AWS PrivateLink, etc.

To learn more about Amazon Fraud Detector, visit the website and the developer guide.


Thank you for your time!
~Alejandra 💁🏻‍♀️🤖  and Canela 🐾

Via AWS News Blog https://ift.tt/1EusYcK