Friday, August 28, 2020

How to find cloud talent during a boom

We made the call from the start: COVID-19 would spike the need for cloud computing and cloud computing talent. IDC’s latest Worldwide Quarterly Cloud IT Infrastructure Tracker noted an increase in cloud spending with traditional infrastructure taking a dirt nap. The pandemic was the primary driver behind the shift in IT spending, with IDC noting that widespread remote work triggered demand for enterprise cloud-based services.

Of course, that was last quarter. The outlook for the remainder of the year is explosive cloud growth with funding and acceleration of cloud projects underway right now or about to begin. What currently hinders an enterprise’s movement to the cloud is the lack of cloud talent, including architects, security specialists, developers, operations, and secops engineers, to name just a few. 

To read this article in full, please click here

Thursday, August 27, 2020

Announcing a second Local Zone in Los Angeles

In December 2019, Jeff Barr published this post announcing the launch of a new Local Zone in Los Angeles, California. A Local Zone extends existing AWS regions closer to end-users, providing single-digit millisecond latency to a subset of AWS services in the zone. Local Zones are attached to parent regions – in this case US West (Oregon) – and access to services and resources is performed through the parent region’s endpoints. This makes Local Zones transparent to your applications and end-users. Applications running in a Local Zone have access to all AWS services, not just the subset in the zone, via Amazon’s redundant and very high bandwidth private network backbone to the parent region.

At the end of the post, Jeff wrote (and I quote) – “In the fullness of time (as Andy Jassy often says), there could very well be more than one Local Zone in any given geographic area. In 2020, we will open a second one in Los Angeles (us-west-2-lax-1b), and are giving consideration to other locations.”

Well – that time has come! I’m pleased to announce that following customer requests, AWS today launched a second Local Zone in Los Angeles to further help customers in the area (and Southern California generally) achieve even higher availability and fault tolerance for their applications, in conjunction with very low latency. These customers have workloads that require very low latency, such as artist workstations, local rendering, gaming, financial transaction processing, and more.

The new Los Angeles Local Zone contains a subset of services, such as Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), Amazon FSx for Windows File Server and Amazon FSx for Lustre, Elastic Load Balancing, Amazon Relational Database Service (RDS), and Amazon Virtual Private Cloud. Remember, applications running in the zone can access all AWS services and other resources through the zone’s association with the parent region.

Enabling Local Zones
Once you opt in to use one or more Local Zones in your account, they appear as additional Availability Zones for you to use when deploying your applications and resources. The original zone, launched last December, was us-west-2-lax-1a. The additional zone, available to all customers now, is us-west-2-lax-1b. Local Zones can be enabled using the new Settings section of the EC2 Management Console, as shown below.

As noted in Jeff’s original post, you can also opt-in (or out) of access to Local Zones with the AWS Command Line Interface (CLI) (aws ec2 modify-availability-zone-group command), AWS Tools for PowerShell (Edit-EC2AvailabilityZoneGroup cmdlet), or by calling the ModifyAvailabilityZoneGroup API from one of the AWS SDKs.
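For example, opting in to the Los Angeles zone group from the CLI might look like the following sketch (run with credentials for your own account; the group name covers both lax zones):

```shell
# Opt in to the Los Angeles Local Zone group (us-west-2-lax-1) in the
# parent region; this exposes us-west-2-lax-1a and us-west-2-lax-1b
# as additional Availability Zones in the account.
aws ec2 modify-availability-zone-group \
    --region us-west-2 \
    --group-name us-west-2-lax-1 \
    --opt-in-status opted-in

# Confirm the Local Zones are now visible:
aws ec2 describe-availability-zones \
    --region us-west-2 \
    --all-availability-zones \
    --filters Name=zone-type,Values=local-zone
```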

Using Local Zones for Highly-Available, Low-Latency Applications
Local Zones can be used to further improve high availability for applications, as well as ultra-low latency. One scenario is for enterprise migrations using a hybrid architecture. These enterprises have workloads that currently run in existing on-premises data centers in the Los Angeles metro area and it can be daunting to migrate these application portfolios, many of them interdependent, to the cloud. By utilizing AWS Direct Connect in conjunction with a Local Zone, these customers can establish a hybrid environment that provides ultra-low latency communication between applications running in the Los Angeles Local Zone and the on-premises installations without needing a potentially expensive revamp of their architecture. As time progresses, the on-premises applications can be migrated to the cloud in an incremental fashion, simplifying the overall migration process. The diagram below illustrates this type of enterprise hybrid architecture.

Anecdotally, we have heard from customers using this type of hybrid architecture, combining Local Zones with AWS Direct Connect, that they are achieving sub-1.5ms latency communication between the applications hosted in the Local Zone and those in the on-premises data center in the LA metro area.

Virtual desktops for rendering and animation workloads is another scenario for Local Zones. For these workloads, latency is critical and the addition of a second Local Zone for Los Angeles gives additional failover capability, without sacrificing latency, should the need arise.

As always, the teams are listening to your feedback – thank you! – and are working on adding Local Zones in other locations, along with the availability of additional services, including Amazon ECS, Amazon Elastic Kubernetes Service, Amazon ElastiCache, Amazon Elasticsearch Service, and Amazon Managed Streaming for Apache Kafka. If you’re an AWS customer based in Los Angeles or Southern California, with a requirement for even greater high availability and very low latency for your workloads, then we invite you to check out the new Local Zone. More details and pricing information can be found on the Local Zones page.

— Steve Via AWS News Blog https://ift.tt/1EusYcK

Seamlessly Join a Linux Instance to AWS Directory Service for Microsoft Active Directory

Many customers I speak to use Active Directory to manage centralized user authentication and authorization for a variety of applications and services. For these customers, Active Directory is a critical piece of their IT jigsaw.

At AWS, we offer the AWS Directory Service for Microsoft Active Directory that provides our customers with a highly available and resilient Active Directory service that is built on actual Microsoft Active Directory. AWS manages the infrastructure required to run Active Directory and handles all of the patching and software updates needed. It’s fully managed, so for example, if a domain controller fails, our monitoring will automatically detect and replace that failed controller.

Manually connecting a machine to Active Directory is a thankless task; you have to connect to the computer, make a series of manual changes, and then perform a reboot. While none of this is particularly challenging, it does take time, and if you have several machines that you want to onboard, then this task quickly becomes a time sink.

Today the team is unveiling a new feature that enables a Linux EC2 instance, as it is launched, to connect seamlessly to AWS Directory Service for Microsoft Active Directory. This complements the existing feature that allows Windows EC2 instances to seamlessly join a domain as they are launched. This capability enables customers to move faster and improves the experience for administrators.

Now you can have both your Windows and Linux EC2 instances seamlessly connect to AWS Directory Service for Microsoft Active Directory. The directory can be in your own account or shared with you from another account, the only caveat being that both the instance and the directory must be in the same region.

To show you how the process works, let’s take an existing AWS Directory Service for Microsoft Active Directory and work through the steps required to have a Linux EC2 instance seamlessly join that directory.

Create and Store AD Credentials
To seamlessly join a Linux machine to my AWS Managed Active Directory domain, I will need an account that has permissions to join instances into the domain. While members of the AWS Delegated Administrators group have sufficient privileges to join machines to the domain, I have created a service account that has the minimum privileges required. Our documentation explains how to create this sort of service account.

The seamless domain join feature needs to know the credentials of my Active Directory service account. To achieve this, I need to create a secret using AWS Secrets Manager with specifically named secret keys, which the seamless domain join feature will use to join instances to the directory.

In the AWS Secrets Manager console, I click the Store a new secret button. On the next screen, when asked to Select a secret type, I choose the option named Other type of secrets. I can now add two secret key/value pairs. The first key is called awsSeamlessDomainUsername, and in the value textbox I enter the username for my Active Directory service account. The second key is called awsSeamlessDomainPassword, and here I enter the password for my service account.

Since this is a demo, I chose to use the DefaultEncryptionKey for the secret, but you might decide to use your own key.

After clicking next, I am asked to give the secret a name. I add the following name, replacing d-xxxxxxxxx with my directory ID.

aws/directory-services/d-xxxxxxxxx/seamless-domain-join

The domain join will fail if you mistype this name or include any leading or trailing spaces.

I note down the Secret ARN, as I will need it when I create my IAM policy.
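The same secret can be created from the CLI; this is a sketch, with the directory ID and the credential values as placeholders you would substitute:

```shell
# Store the service account credentials under the exact name the
# seamless domain join feature expects (replace d-xxxxxxxxx and the
# placeholder username/password with your own values).
aws secretsmanager create-secret \
    --name "aws/directory-services/d-xxxxxxxxx/seamless-domain-join" \
    --secret-string '{"awsSeamlessDomainUsername":"SERVICE_ACCOUNT_USER","awsSeamlessDomainPassword":"SERVICE_ACCOUNT_PASSWORD"}'
```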

Create The Required IAM Policy and Role
Now I need to create an IAM policy that gives permission to read my seamless-domain-join secret.

I sign in to the IAM console and choose Policies. In the content pane, I select Create policy. I switch over to the JSON tab and copy the text from the following JSON policy document, replacing the Secrets Manager ARN with the one I noted down earlier.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret"
            ],
            "Resource": [
                "arn:aws:secretsmanager:us-east-1:############:secret:aws/directory-services/d-xxxxxxxxx/seamless-domain-join"
            ]
        }
    ]
}

On the Review page, I name the policy SeamlessDomainJoin-Secret-Readonly, then choose Create policy to save my work.
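The console steps above have a CLI equivalent; this sketch assumes the JSON policy document has been saved locally as seamless-join-policy.json (a hypothetical filename):

```shell
# Create the read-only policy for the seamless-domain-join secret
# from a local JSON file (the filename is illustrative).
aws iam create-policy \
    --policy-name SeamlessDomainJoin-Secret-Readonly \
    --policy-document file://seamless-join-policy.json
```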

Now I need to create an IAM Role that will use this policy (and a few others). In the IAM console, I choose Roles, and then in the content pane choose Create role. Under Select type of trusted entity, I select AWS service, select EC2 as the use case, and click Next: Permissions.


I attach the following policies to my Role: AmazonSSMManagedInstanceCore, AmazonSSMDirectoryServiceAccess, and SeamlessDomainJoin-Secret-Readonly.

I click through to the Review screen, where I am asked for a Role name. I call the role EC2DomainJoin, but you could call it whatever you like. I then create the role by pressing the button at the bottom right of the screen.
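Sketched with the CLI, the same role setup might look like the following (the trust policy filename and the account ID 111122223333 are placeholders):

```shell
# Trust policy allowing EC2 instances to assume the role.
cat > ec2-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
    --role-name EC2DomainJoin \
    --assume-role-policy-document file://ec2-trust-policy.json

# Attach the two AWS managed policies plus the custom policy
# (replace 111122223333 with your account ID):
aws iam attach-role-policy --role-name EC2DomainJoin \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
aws iam attach-role-policy --role-name EC2DomainJoin \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMDirectoryServiceAccess
aws iam attach-role-policy --role-name EC2DomainJoin \
    --policy-arn arn:aws:iam::111122223333:policy/SeamlessDomainJoin-Secret-Readonly
```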

Create an Amazon Machine Image
When I launch a Linux instance later, I will need to pick a Linux Amazon Machine Image (AMI) as a template. Currently, the default Linux AMIs do not contain the version of the AWS Systems Manager agent (SSM Agent) that this new seamless domain join feature needs, so I am going to have to create an AMI with an updated SSM Agent. To do this, I first create a new Linux instance in my account and then connect to it using my SSH client. I then follow the documentation to update the SSM Agent to 2.3.1644.0 or newer. Once the instance has finished updating, I am able to create a new AMI based on this instance using the following documentation.

I now have a new AMI which I can use in the next step. In the future, the base AMIs will be updated to use the newer SSM Agent, and this section can be skipped. If you are interested in knowing which version of the SSM Agent an instance is using, this documentation explains how to check.
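On an Amazon Linux 2 instance, updating and checking the agent might look something like this (the download URL is the documented one for x86_64; check the documentation for your distribution and architecture):

```shell
# Update the SSM Agent to the latest release (Amazon Linux 2, x86_64):
sudo yum install -y \
    https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm

# Verify the installed version is 2.3.1644.0 or newer, and that the
# agent is running:
yum info amazon-ssm-agent | grep Version
sudo systemctl status amazon-ssm-agent
```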

Seamless Join
To start, I need to create a Linux instance, and so I head over to the EC2 console and choose Launch Instance.

Next, I pick a Linux Amazon Machine Image (AMI). I select the AMI which I created earlier.

When configuring the instance, I am careful to choose the Amazon Virtual Private Cloud that contains my directory. Using the drop-down labeled Domain join directory, I am able to select the directory that I want this instance to join.

In the IAM role, I select the EC2DomainJoin role that I created earlier.

When I launch this instance, it will seamlessly join my directory. Once the instance comes online, I can confirm everything is working correctly by using SSH to connect to the instance using the administrator credentials of my AWS Directory Service for Microsoft Active Directory.

This new feature is available from today, and we look forward to hearing your feedback about this new capability.

Happy Joining

— Martin

Log your VPC DNS queries with Route 53 Resolver Query Logs

The Amazon Route 53 team has just launched a new feature called Route 53 Resolver Query Logs, which will let you log all DNS queries made by resources within your Amazon Virtual Private Cloud. Whether it’s an Amazon Elastic Compute Cloud (EC2) instance, an AWS Lambda function, or a container, if it lives in your Virtual Private Cloud and makes a DNS query, then this feature will log it; you are then able to explore and better understand how your applications are operating.

Our customers explained to us that DNS query logs were important to them. Some wanted the logs so that they could comply with regulations; others wished to monitor DNS querying behavior so they could spot security threats; others simply wanted to troubleshoot application issues that were related to DNS. The team listened to our customers and developed what I have found to be an elegant and easy-to-use solution.

Knowing very little about the Route 53 Resolver, I was able to configure query logging and have it working with barely a second glance at the documentation, which I assure you is a testament to the intuitiveness of the feature rather than to me having any significant experience with Route 53 or DNS query logging.

You can choose to have the DNS query logs sent to one of three AWS services: Amazon CloudWatch Logs, Amazon Simple Storage Service (S3), and Amazon Kinesis Data Firehose. The target service you choose will depend mainly on what you want to do with the data. If you have compliance mandates (for example, Australia’s Information Security Registered Assessors Program), then storing the logs in Amazon Simple Storage Service (S3) may be a good option. If you plan to monitor and analyze DNS queries in real time, or you integrate your logs with a third-party data analysis tool like Kibana or a SIEM tool like Splunk, then perhaps Amazon Kinesis Data Firehose is the option for you. For those of you who want an easy way to search, query, monitor metrics, or raise alarms, Amazon CloudWatch Logs is a great choice, and this is what I will show in the following demo.

Over in the Route 53 Console, near the Resolver menu section, I see a new item called Query logging. Clicking on this takes me to a screen where I can configure the logging.

The dashboard shows the current configurations that are set up. I click Configure query logging to get started.

The console asks me to fill out some necessary information, such as a friendly name; I’ve named mine demoNewsBlog.

I am now prompted to select the destination where I would like my logs to be sent. I choose the CloudWatch Logs log group and select the option to Create log group. I give my new log group the name /aws/route/demothebeebsnet.

Next, I need to select which VPCs I would like to log queries for. Any resource that sits inside the VPCs I choose here will have its DNS queries logged. You are also able to add tags to this configuration. I am in the habit of tagging anything that I use as part of a demo with the tag demo, so I can easily distinguish between demo resources and live resources in my account.

Finally, I press the Configure query logging button, and the configuration is saved. Within a few moments, the service has successfully enabled the query logging in my VPC.
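The same configuration can be sketched with the CLI (the account ID, log group ARN, configuration ID, and VPC ID below are placeholders):

```shell
# Create a query logging configuration that sends logs to an
# existing CloudWatch Logs log group (the ARN is a placeholder):
aws route53resolver create-resolver-query-log-config \
    --name demoNewsBlog \
    --destination-arn arn:aws:logs:us-east-1:111122223333:log-group:/aws/route/demothebeebsnet

# Associate the configuration with the VPC whose DNS queries should
# be logged (both IDs are placeholders):
aws route53resolver associate-resolver-query-log-config \
    --resolver-query-log-config-id rqlc-0123456789abcdef \
    --resource-id vpc-0123456789abcdef0
```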

After a few minutes, I log into the Amazon CloudWatch Logs console and can see that the logs have started to appear.

As you can see below, I was quickly able to start searching my logs and running queries using Amazon CloudWatch Logs Insights.
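For instance, a Logs Insights query for the ten most common DNS names over the last hour might be started like this from the CLI (the log group name matches the demo above; query_name is a field in the Resolver query log format):

```shell
# Start a Logs Insights query that counts DNS queries by name over
# the last hour and returns the top ten.
aws logs start-query \
    --log-group-name /aws/route/demothebeebsnet \
    --start-time $(( $(date +%s) - 3600 )) \
    --end-time $(date +%s) \
    --query-string 'stats count(*) as queryCount by query_name | sort queryCount desc | limit 10'

# Then fetch the results using the queryId returned above:
# aws logs get-query-results --query-id <queryId>
```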

There is a lot you can do with the Amazon CloudWatch Logs service. For example, I could use CloudWatch Metric Filters to automatically generate metrics, or even create dashboards. While putting this demo together, I also discovered a feature inside Amazon CloudWatch Logs called Contributor Insights that enables you to analyze log data and create time series that display top talkers. Very quickly, I was able to produce this graph, which lists the most common DNS queries over time.

Route 53 Resolver Query Logs is available in all AWS Commercial Regions that support Route 53 Resolver Endpoints, and you can get started using either the API or the AWS Console. You do not pay for the Route 53 Resolver Query Logs, but you will pay for handling the logs in the destination service that you choose. So, for example, if you decided to use Amazon Kinesis Data Firehose, then you will incur the regular charges for handling logs with the Amazon Kinesis Data Firehose service.

Happy Logging

— Martin

Tuesday, August 25, 2020

Cloud migration gets harder

The chart below depicts the number of applications migrated over time in blue, and the degree of difficulty of moving those applications in orange. This is a fictional collection of applications; however, the concept that difficulty increases the more you migrate affects enterprises large and small as they move to the public cloud.  

Chart: cloud migration difficulty over time (IDG)

What’s occurring is easy to explain, but the solution to the problem is not. 


Monday, August 24, 2020

New EBS Volume Type (io2) – 100x Higher Durability and 10x More IOPS/GiB

We launched EBS Volumes with Provisioned IOPS way back in 2012. These volumes are a great fit for your most I/O-hungry and latency-sensitive applications because you can dial in the level of performance that you need, and then (with the launch of Elastic Volumes in 2017) change it later.

Over the years, we have increased the ratio of IOPS per gibibyte (GiB) of SSD-backed storage several times, most recently in August 2016. This ratio started out at 10 IOPS per GiB, and has grown steadily to 50 IOPS per GiB. In other words, the bigger the EBS volume, the more IOPS it can be provisioned to deliver, with a per-volume upper bound of 64,000 IOPS. This change in ratios has reduced storage costs by a factor of 5 for throughput-centric workloads.

Also, based on your requests and your insatiable desire for more performance, we have raised the maximum number of IOPS per EBS volume multiple times over the years.

The August 2014 change in the I/O request size made EBS 16x more cost-effective for throughput-centric workloads.

Bringing the various numbers together, you can think of Provisioned IOPS volumes as being defined by capacity, IOPS, and the ratio of IOPS per GiB. You should also think about durability, which is expressed in percentage terms. For example, io1 volumes are designed to deliver 99.9% durability, which is 20x more reliable than typical commodity disk drives.

Higher Durability & More IOPS
Today we are launching the io2 volume type, with two important benefits, at the same price as the existing io1 volumes:

Higher Durability – The io2 volumes are designed to deliver 99.999% durability, making them 2000x more reliable than a commodity disk drive, further reducing the possibility of a storage volume failure and helping to improve the availability of your application. By the way, in the past we expressed durability in terms of an Annual Failure Rate, or AFR. The new, percentage-based model is consistent with our other storage offerings, and also communicates expectations for success, rather than for failure.

More IOPS – We are increasing the IOPS per GiB ratio yet again, this time to 500 IOPS per GiB. You can get higher performance from your EBS volumes, and you can reduce or outright eliminate any over-provisioning that you might have done in the past to achieve the desired level of performance.
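As a worked example of the new ratio (assuming the 64,000 IOPS per-volume cap noted above still applies): a 128 GiB io2 volume can be provisioned all the way to the cap, where the old 50 IOPS/GiB ratio would have required a 1,280 GiB volume.

```shell
# Provisionable IOPS = min(size_in_gib * ratio, per-volume cap)
size_gib=128
io2_ratio=500     # io2: 500 IOPS per GiB
cap=64000         # per-volume IOPS ceiling (assumed, as for io1)
provisionable=$(( size_gib * io2_ratio ))
if [ "$provisionable" -gt "$cap" ]; then provisionable=$cap; fi
echo "$provisionable"   # 64000
```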

Taken together, these benefits make io2 volumes a perfect fit for your high-performance, business-critical databases and workloads. This includes SAP HANA, Microsoft SQL Server, and IBM DB2.

You can create new io2 volumes in the console, and you can easily change the type of an existing volume to io2, either in the console or with a single CLI command:

$ aws ec2 modify-volume --volume-id vol-0b3c663aeca5aabb7 --volume-type io2
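Creating a brand-new io2 volume from the CLI might look like the following sketch (the size, IOPS, and Availability Zone values are illustrative):

```shell
# Create a 100 GiB io2 volume provisioned at 10,000 IOPS
# (well within the 500 IOPS/GiB ratio; zone is illustrative):
aws ec2 create-volume \
    --volume-type io2 \
    --size 100 \
    --iops 10000 \
    --availability-zone us-east-1a
```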

io2 volumes support all features of io1 volumes with the exception of Multi-Attach, which is on the roadmap.

Available Now
You can make use of io2 volumes in the US East (Ohio), US East (N. Virginia), US West (N. California), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Middle East (Bahrain) Regions today.

Jeff;


Friday, August 21, 2020

2021: The year of the great enterprise disconnect

Due to the unprecedented shutdown of businesses during the pandemic, most white-collar workers went from hours-long commutes to working full time from a guest bedroom somewhere in the suburbs. In the past, most enterprises resisted or refused to consider viable telecommuting options. Suddenly, new regulations, laws, and liability policies forced the hands of most enterprises around the world. Companies had to accommodate a remote workforce in a short amount of time. Adapt or die.

IT departments spent the better part of the year putting out fires caused by the sudden shift to a remote workforce. VPNs had to scale to handle the influx, networks needed upgrades, security required adjustments, and there was even work to be done with employees' ISPs to provide better and more reliable bandwidth. ISPs suddenly moved bandwidth upgrades to the top of the priority list. 


Thursday, August 20, 2020

Announcing the newest AWS Heroes – August 2020

The AWS Heroes program recognizes a select few individuals who go above and beyond to share AWS knowledge and teach others about AWS, all while helping make building AWS skills accessible to many. These leaders have an incredible impact within technical communities worldwide and their efforts are greatly appreciated.

Today we are excited to introduce to you the latest AWS Heroes, including the first Heroes from Greece and Portugal:

Angela Timofte – Copenhagen, Denmark

Serverless Hero Angela Timofte is a Data Platform Manager at Trustpilot. Passionate about knowledge sharing, coaching, and public speaking, she is committed to leading by example and empowering people to seek and develop the skills they need to achieve their goals. She is driven to build scalable solutions with the latest technologies while migrating from monolithic solutions using serverless applications and event-driven architecture. She is a co-organizer of the Copenhagen AWS User Group and frequent speaker about serverless technologies at AWS Summits, AWS Community Days, ServerlessDays, and more.

Avi Keinan – Tel Aviv, Israel

Community Hero Avi Keinan is a Senior Cloud Engineer at DoIT International. He specializes in AWS Infrastructure, serverless, and security solutions, enjoys solving complex problems, and is passionate about helping others. Avi is a member of many online AWS forums and groups where he assists both novice and experienced cloud users with solving complex and simple solutions. He also frequently posts AWS-related videos on his YouTube channel.

Hirokazu Hatano – Tokyo, Japan

Community Hero Hirokazu Hatano is the founder & CEO of Operation Lab, LLC. He created the Japan AWS User Group (JAWS-UG) CLI chapter in 2014 and has hosted 165 AWS CLI events over a six-year period, with 4,133 attendees. In 2020, despite the challenging circumstances of COVID-19, he has successfully transformed the AWS CLI events from in-person to virtual. In the first quarter of 2020, he held 12 online hands-on events with 1,170 attendees. In addition, he has organized six other JAWS-UG chapters, making him one of the most frequent JAWS-UG organizers.

Ian McKay – Sydney, Australia

Community Hero Ian McKay is the Cloud Lead at Kablamo. Between helping clients, he loves the chance to build open-source projects with a focus on AWS automation and tooling. Ian built and maintains both the “Former2” and “Console Recorder for AWS” projects which help developers author Infrastructure as Code templates (such as CloudFormation) from existing resources or while in the AWS Management Console. Ian also enjoys speaking at meetups, co-hosting podcasts, engaging with the community on Slack, and posting his latest experiences with AWS on his blog.

Jacopo Nardiello – Milan, Italy

Container Hero Jacopo Nardiello is CEO and Founder at SIGHUP. He regularly speaks at conferences and events, and closely supports the local Italian community, where he currently runs the Kubernetes and Cloud Native Milano meetup and regularly collaborates with AWS and the AWS Milan User Group on all things containers. He is passionate about developing and delivering battle-tested solutions based on EKS, Fargate, and OSS software – both from the CNCF and AWS – like AWS Firecracker (the base for Lambda), Open Distro for Elasticsearch, and other projects.

Jérémie Rodon – Paris, France

Community Hero Jérémie Rodon is a 9x certified AWS Cloud Architect working for Devoteam Revolve. He has 3 years of consulting experience, designing solutions in various environments from small businesses to CAC40 companies, while also empowering clients on AWS by delivering AWS courses as an AWS Authorized Instructor Champion. His interests lie mainly in security, and especially cryptography: he has done presentations explaining how AWS KMS works under the hood and has written a blog post explaining why the quantum crypto-calypse will not happen.

Kittaya Niennattrakul – Bangkok, Thailand

Community Hero Kittaya Niennattrakul (Tak) is a Product Manager and Assistant Managing Director at Dailitech. Tak has helped run the AWS User Group Thailand since 2014, and is also one of the AWS Asian Women’s Association leaders. In 2019, she and the Thailand User Group team, with support from AWS, organized two big events: AWS Meetup – Career Day (400+ attendees), and AWS Community Day in Bangkok (200+ attendees). Currently, Tak helps write a blog for sharing useful information with the AWS community. In 2020, AWS User Group Thailand has more than 8,000 members and continues growing.

Konstantinos Siaterlis – Athens, Greece

Machine Learning Hero Konstantinos Siaterlis is Head of Data Engineering at Orfium, a Music Rights Management company based in Malibu. During his day-to-day, he drives adoption/integration and training on several AWS services, including Amazon SageMaker. He is the co-organizer of AWS User Group Greece, and a blogger on TheLastDev, writing about Data Science and introductory tutorials on AWS Machine Learning Services.

Kyle Bai – Taichung City, Taiwan

Container Hero Kyle Bai, also known as Kai-Ren Bai, is a Site Reliability Engineer at MaiCoin. He is a creator and contributor of a few open-source projects on GitHub about AWS, Containers, and Kubernetes. Kyle is a co-organizer of the Cloud Native Taiwan User Group. He is also a technical speaker, with experience delivering presentations at meetups and conferences including AWS re:Invent re:Cap Taipei, AWS Summit Taipei, AWS Developer Year-end Meetup Taipei, AWS Taiwan User Group, and so on.

Linghui Gong – Shanghai, China

Community Hero Linghui Gong is VP of Engineering at Strikingly, Inc. One of the early AWS users in China, Linghui has been leading his team to drive all critical architecture evolutions on AWS, including building a cross-region website cluster that serves millions of sites worldwide. Linghui shared the story of his teams’ cross-region website cluster on AWS at re:Invent 2018. He has also presented AWS best practices twice at AWS Summit Beijing, as well as at many other AWS events.

Luca Bianchi – Milan, Italy

Serverless Hero Luca Bianchi is Chief Technology Officer at Neosperience. He writes on Medium about serverless and Machine Learning, where he focuses on technologies such as AWS CDK, AWS Lambda, and Amazon SageMaker. He is co-founder of Serverless Meetup Italy and co-organizer of ServerlessDays Milano and ServerlessDays Italy. As an active member of the serverless community, Luca is a regular speaker at user groups, helping developers and teams to adopt cloud technologies.

Matthieu Napoli – Lyon, France

Serverless Hero Matthieu Napoli is a software engineer passionate about helping developers to create. Fascinated by how serverless unlocks creativity, he works on making serverless accessible to everyone. Matthieu enjoys maintaining open-source projects including Bref, a framework for creating serverless PHP applications on AWS. Alongside Bref, he sends a monthly newsletter containing serverless news relevant to PHP developers. Matthieu recently created the Serverless Visually Explained course which is packed with use cases, visual explanations, and code samples.

Mike Chambers – Brisbane, Australia

Machine Learning Hero Mike Chambers is an independent trainer, specializing in AWS and machine learning. He was one of the first in the world to become AWS certified and has gone on to train well over a quarter of a million students in the art of AWS, machine learning, and cloud. An active advocate of AWS’s machine learning capabilities, Mike has traveled the world helping study groups, delivering talks at AWS Meetups, all while posting widely on social media. Mike’s passion is to help the community get as excited about technology as he is.

Noah Gift – Raleigh-Durham, USA

Machine Learning Hero Noah Gift is the founder of Pragmatic A.I. Labs. He lectures on cloud computing at top universities globally, including the Duke and Northwestern graduate data science programs. He designs graduate machine learning, MLOps, A.I., and Data Science courses, consults on Machine Learning and Cloud Architecture for AWS, and is a massive advocate of AWS Machine Learning and putting machine learning models into production. Noah has authored several books, including Pragmatic AI, Python for DevOps, and Cloud Computing for Data Analysis.

Olalekan Elesin – Berlin, Germany

Machine Learning Hero Olalekan Elesin is an engineer at heart, with a proven record of leading successful machine learning projects as a data science engineer and a product manager, including using Amazon SageMaker to deliver an AI-enabled platform for Scout24, which reduced the time to productionize ML projects from at least 7 weeks to 3 weeks. He has given several talks across Germany on building machine learning products on AWS, including Serverless Product Recommendations with Amazon Rekognition. On his machine learning blog, he writes about automating machine learning workflows with Amazon SageMaker and AWS Step Functions.

Peter Hanssens – Sydney, Australia

Serverless Hero Peter Hanssens is a community leader: he has led the Sydney Serverless community for the past 3 years, and also built out Data Engineering communities in Melbourne, Sydney, and Brisbane. His passion is helping others to learn and grow their careers through shared experiences. He ran the first ever ServerlessDays in Australia in 2019 and in 2020 he has organized AWS Serverless Community Day ANZ, ServerlessDays ANZ, and DataEngBytes, a community-built data engineering conference.

Ruofei Ma – Beijing, China

Container Hero Ruofei Ma works as a principal software engineer for FreeWheel Inc., where he focuses on developing cloud-native applications with AWS. He enjoys sharing technology with others and is a lecturer at time.geekbang.org, the biggest IT knowledge-sharing platform in China. He speaks at various meetups, such as the CNCF webinar and GIAC. He is a committee member of the largest service mesh community in China, servicemesher.com, and often posts blogs to share his best practices with the community.

Sheen Brisals – London, United Kingdom

Serverless Hero Sheen Brisals is a Senior Engineering Manager at The LEGO Group and is actively involved in the global Serverless community. Sheen loves being part of Serverless conferences and enjoys sharing knowledge with community members. He talks about Serverless at many events around the world, and his insights into Serverless technology and adoption strategies can be found on his Medium channel. You can also find him tweeting about Serverless success stories and technical thoughts.

Sridevi Murugayen – Chennai, India

Data Hero Sridevi Murugayen has 16+ years of IT experience and is a passionate developer and problem solver. She is an active co-organizer of the AWS User Group in Chennai, and helps host and deliver AWS Community Days, Meetups, and technical sessions to the developer community. She is a regular speaker in community events focusing on analytics solutions using AWS Analytics services, including Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon S3, and AWS Glue. She strongly believes in diversity and inclusion for a successful society and loves encouraging and enabling women technologists.

Stéphane Maarek – Lisbon, Portugal

Data Hero Stéphane Maarek is an online instructor at Udemy with many courses designed to teach users how to build on AWS (including the AWS Certified Data & Analytics Specialty certification) and deepen their knowledge about AWS services. Stéphane also has a keen interest in teaching about Apache Kafka and recently partnered with AWS to launch an in-depth course on Amazon MSK (Managed Streaming for Apache Kafka). Stéphane is passionate about technology and how it can help improve lives and processes.

Tom McLaughlin – Boston, USA

Serverless Hero Tom McLaughlin is a cloud infrastructure and operations engineer who has worked in companies ranging from startups to the enterprise. What drew Tom early to serverless was the prospect of having no hosts or container management platform to build and manage, which yielded the question: what would he do if the servers he was responsible for went away? He’s found enjoyment in a community of people that are both pushing the future of technology and trying to understand its effects on the future of people and businesses.

If you’d like to learn more about the new Heroes, or connect with a Hero near you, please visit the AWS Hero website.

Ross;

Via AWS News Blog https://ift.tt/1EusYcK

Tuesday, August 18, 2020

AWS announces AWS Contact Center Intelligence solutions

What was announced?

We’re announcing the availability of AWS Contact Center Intelligence (CCI) solutions, a combination of services that empowers customers to easily integrate AI into contact centers, made available through AWS Partner Network (APN) partners.

AWS CCI has solutions for self-service, live-call analytics & agent assist, and post-call analytics, making it possible for customers to quickly deploy AI into their existing workflows or build completely new ones.

Pricing and regional availability correspond to the underlying services (Amazon Comprehend, Amazon Kendra, Amazon Lex, Amazon Transcribe, Amazon Translate, and Amazon Polly) used.

What is AWS Contact Center Intelligence?

As mentioned above, AWS CCI brings AI-powered solutions to contact centers for before, during, and after customer interactions.

My colleague Swami Sivasubramanian (VP, Amazon Machine Learning, AWS) said: “We want to make it easy for our customers with contact centers to benefit from machine learning capabilities even if they have no machine learning expertise. By partnering with APN technology and consulting partners to bring AWS Contact Center Intelligence solutions to market, we are making it easier for customers to realize the benefits of cloud-based machine learning services while removing the heavy lifting and the need to hire specialized developers to integrate the ML capabilities into their existing contact centers.”

But what does that mean? 🤔

AWS CCI solutions let you add machine learning (ML) functionality such as text-to-speech, translation, enterprise search, chatbots, business intelligence, and language comprehension to current contact center environments. Customers can now implement contact center intelligence ML solutions to aid self-service, live-call analytics & agent assist, and post-call analytics. Currently, AWS CCI solutions are available through partners such as Genesys, Vonage, and UiPath for easy integration into existing enterprise contact center systems.

“We’re proud Genesys customers will be among the first to benefit from the off-the-shelf machine learning capabilities of AWS Contact Center Intelligence solutions. It’s now simpler and more cost-effective for organizations to combine AWS’s AI capabilities, including search, text-to-speech and natural language understanding, with the advanced contact center capabilities of Genesys Cloud to give customers outstanding self-service experiences.” ~ Olivier Jouve (Executive Vice President and General Manager of Genesys Cloud)

“More and more consumers are relying on automated methods to interact with brands, especially in today’s retail environment where online shopping is taking a front seat. The Genesys Cloud and Amazon Web Services (AWS) integration will make it easier to leverage conversational AI so we can provide more effective self-service experiences for our customers.” ~ Aarde Cosseboom (Senior Director of Global Member Services Technology, Analytics and Product at TechStyle Fashion Group)

How it works and who it’s for…

AWS Contact Center Intelligence solutions offer a variety of ways that organizations can quickly and cost-effectively add machine learning-based intelligence to their contact centers via AWS pre-trained AI services. AWS CCI is currently available through participating APN partners, and it is focused on three stages of the contact center workflow: Self-Service, Live Call Analytics & Agent Assist, and Post-Call Analytics. Let’s break each of these down.

The Self-Service solution helps with the creation of chatbots and ML-driven IVRs (interactive voice response systems) to address the most common queries a contact center workforce receives, allowing actual call center employees to focus on higher-value work. To implement this solution, you’ll want to work with Amazon Lex and/or Amazon Kendra. The novelty of this solution is that Lex + Kendra not only fulfills transactional queries (e.g., book a hotel room or reset my password), but also addresses the long tail of customer questions whose answers live in enterprise knowledge systems. Previously, these Q&As had to be hard-coded in Lex, making them harder to implement and maintain. Today, you can implement this solution directly from your existing contact center platform with AWS CCI partners, such as Genesys.

The Live Call Analytics & Agent Assist solution enables the creation of real-time ML capabilities to increase staff productivity and engagement. Here, Amazon Transcribe is used to perform real-time speech transcription, while Amazon Comprehend can analyze interactions, detect the sentiment of the caller, and identify key words and phrases in the conversation. Amazon Translate can even be added to translate the conversation into a preferred language! Now, you can implement this solution directly from several leading contact center platforms with AWS CCI partners, like SuccessKPI.
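As an illustrative sketch with the AWS SDK for Python (boto3), this is roughly what the Comprehend side of that analysis could look like. The `detect_sentiment` and `detect_key_phrases` calls are real Comprehend operations; the sample text and the `dominant_sentiment` helper are our own additions, not part of the CCI solution itself.

```python
def dominant_sentiment(scores):
    """Return the sentiment label with the highest confidence score.

    Illustrative helper; `scores` is the SentimentScore dict that
    Comprehend returns, e.g. {"Positive": 0.1, "Negative": 0.8, ...}.
    """
    return max(scores, key=scores.get)


def analyze_utterance(text, language="en"):
    """Run sentiment and key-phrase detection on one transcribed utterance."""
    import boto3  # imported here so the pure helper above works without the SDK

    comprehend = boto3.client("comprehend")
    sentiment = comprehend.detect_sentiment(Text=text, LanguageCode=language)
    phrases = comprehend.detect_key_phrases(Text=text, LanguageCode=language)
    return {
        "sentiment": sentiment["Sentiment"],
        "scores": sentiment["SentimentScore"],
        "key_phrases": [p["Text"] for p in phrases["KeyPhrases"]],
    }


# Example usage (requires AWS credentials and Comprehend access):
# analyze_utterance("I have been waiting twenty minutes and I am very frustrated.")
```

In a live-call setup, each utterance from the Amazon Transcribe stream would be fed through a function like this as it arrives.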

The Post-Call Analytics solution provides automatic analysis of contact center conversations, which can feed actionable data into product and service feedback loops. Similar to live-call analytics, this solution combines Amazon Transcribe, which performs speech recognition and creates a high-quality text transcription of each call, with Amazon Comprehend, which analyzes the interaction. Amazon Translate can be added to translate the conversation into your preferred language, and Amazon Kendra can be used for contextual natural language queries. Today, you can implement this solution directly from several leading contact center platforms with AWS CCI partners, such as Acqueon.

AWS helps partners integrate these solutions into their products. Some solutions also have a Quick Start, which includes CloudFormation templates and a deployment guide to automate the deployment. The good news is that our AWS Partner landing pages will also provide additional implementation information specific to their products. 👌

Let’s see a demo…

For today’s post, we chose to focus on diving deeper into the Self-Service and Post-Call Analytics solutions, so let’s begin with Self-Service.

Self-Service
We have a public GitHub repository that has a complete Quick Start template plus a detailed deployment guide with architecture diagrams. (And the good news is that our APN partner landing pages will also reference this repo!)

This GitHub repo covers the Amazon Lex chatbot integration with Amazon Kendra. The main idea is that customers can bring their own document repository through Amazon Kendra, which Amazon Lex can query when customers interact with the chatbot.

The main thing to notice in this architecture is that customers can bring their existing documents and allow their chatbot to search those documents whenever someone interacts with it. The architecture below assumes our docs are in an S3 bucket, but it’s worth noting that Amazon Kendra can integrate with multiple kinds of data sources. If using an S3 bucket, customers must provide the name of the S3 bucket that holds their document repository. This is a prerequisite for deployment.
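To make the interaction concrete, here is a minimal boto3 sketch of talking to the deployed bot programmatically. `post_text` is a real Lex (V1) runtime operation, but the bot name, alias, and user ID below are placeholders you would replace with the values from your own deployment.

```python
def build_post_text_request(bot_name, bot_alias, user_id, text):
    """Assemble the parameters for a Lex runtime PostText call (pure helper)."""
    return {
        "botName": bot_name,
        "botAlias": bot_alias,
        "userId": user_id,
        "inputText": text,
    }


def ask_chatbot(text, bot_name="CCIQuickStartBot", bot_alias="Prod", user_id="demo-user"):
    """Send one utterance to the Lex bot and return its textual reply.

    The defaults are placeholders; use the bot name from the stack's Outputs tab.
    """
    import boto3  # imported here so the pure helper above works without the SDK

    lex = boto3.client("lex-runtime")
    response = lex.post_text(**build_post_text_request(bot_name, bot_alias, user_id, text))
    return response.get("message")


# Example usage (requires AWS credentials and the deployed Quick Start stack):
# ask_chatbot("Hi")
```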

Let’s follow the instructions under the repo’s Deployment Steps, skipping ahead to Step #2, “Click Deploy to launch the CloudFormation template.”

Since this is a Quick Start template, you can see how everything is already filled out for us. We click Next and move on to Step 2, Specify stack details.

Notice how the S3 bucket section is blank. You can provide your own S3 bucket name if you want to test this out with your own docs. For today, I am going to use the S3 bucket name that was provided to us in the GitHub doc.

The next part to configure will be the Cross account role configuration section. For my demo, I will add my own AWS account ID under “Assuming Account ID.”

We click Next and move on to Step 3, Configure Stack options.

Nothing to configure here, so we can click Next again and move on to Step 4, Review. We click to accept these final acknowledgements and click Create Stack.

If we navigate over to our deployed AWS CloudFormation stacks, we can go to the Outputs tab of this stack and see our Kendra index name and Lex bot name.

Now if we head over to Amazon Lex, we should be able to easily find our chatbot.

We click into it and we can see that our chatbot is ready. At this point, we can start interacting with it!

We can type something like “Hi”, for example.

Eventually we also get a response that details the reply source, telling us whether the answer came from Amazon Lex or from Amazon Kendra and the documents stored in our S3 bucket.

Live Call Analytics & Agent Assist
We have two public GitHub repositories for this solution too, and both have detailed deployment guides with architecture diagrams as well.

This GitHub repo provides a code example and a fully functional AWS Lambda function to get us started with capturing and transcribing Amazon Chime Voice Connector phone calls using Amazon Kinesis Video Streams and Amazon Transcribe. This solution shows how to use AI and ML services to integrate with the customer’s existing environment, to drive agent assistance or analytics. We can take a real-time voice feed, transcribe it, and then use Amazon Comprehend to pull out the key actions and sentiment.

We now also provide the Chime SIP req connector (an Amazon Chime component that lets you connect a voice-over-IP compatible environment with Amazon voice services) to stream voice into Amazon Transcribe from virtually any contact center. Our partner Vonage can do the same through a websocket.

👉🏽 Check out the GitHub developer docs:

And as we mentioned above, for today’s post, we chose to focus on diving deeper into the Self-Service and Post-Call Analytics solutions. So let’s move on to show an example for Post-Call Analytics.

Post-Call Analytics

We have a public GitHub repository for this solution too, with another complete Quick Start template and detailed deployment guide with architecture diagrams. This solution is used after the call has ended, so that our customers can review the analytics of those calls.

This GitHub repo talks about how to look for insights and information about calls that have already happened, something we call Quality Management. We can use Amazon Transcribe and Amazon Comprehend to pull out key words, information, and data in order to better understand what is happening in our contact center calls. We can then review these insights in Amazon QuickSight.

Let’s look at the architecture diagram for this solution too. Our call recording gets stored in an S3 bucket and is then picked up by a Lambda function, which transcribes it using Amazon Transcribe. The result is put in a different bucket, and the call’s metadata is stored in DynamoDB. Amazon Comprehend then conducts text analysis on the transcript and stores the result in a text analysis output bucket. Finally, QuickSight provides dashboards showing the resulting call analytics.
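As a rough sketch of the first hop in that pipeline (not the Quick Start’s actual code), the S3-triggered Lambda function might look like this. `start_transcription_job` is a real Amazon Transcribe operation; the output bucket name, media format, and job-naming helper are assumptions for illustration.

```python
import urllib.parse


def transcription_job_name(object_key):
    """Derive a Transcribe job name from the S3 object key (illustrative helper)."""
    return object_key.rsplit("/", 1)[-1].replace(".", "-")


def lambda_handler(event, context):
    """Triggered by an S3 upload; starts an asynchronous Amazon Transcribe job."""
    import boto3  # imported here so the pure helper above works without the SDK

    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    transcribe = boto3.client("transcribe")
    transcribe.start_transcription_job(
        TranscriptionJobName=transcription_job_name(key),
        Media={"MediaFileUri": f"s3://{bucket}/{key}"},
        MediaFormat="wav",                          # assumption: calls recorded as WAV
        LanguageCode="en-US",
        OutputBucketName="my-transcripts-bucket",   # placeholder for the stack's output bucket
    )
```

A second function, triggered when the transcript lands in the output bucket, would then hand the text to Amazon Comprehend.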

Just like in the previous example, we move down to the Deployment steps section, where a pre-made CloudFormation template is ready to be deployed.

Step 1, Specify template is good to go, so we click Next.

In Step 2, Specify stack details, something important to note is that the User Pool Domain Name must be globally unique.

We click Next and move on to Step 3, Configure Stack options. Nothing additional to configure here either, so we can click Next again and move on to Step 4, Review.

We click to accept these final acknowledgements and click Create Stack.

And if we navigate over to our deployed AWS CloudFormation stacks again, we can go to the Outputs of this stack and see the PortalEndpoint key. After the stack creation completes successfully, the portal website is available at the CloudFront distribution endpoint, and this key is what allows us to find the portal URL.

We will need to have a user created in Amazon Cognito for the next steps to work. (If you have never created one, visit this how-to guide.)
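If you prefer to script that step, here is a hedged boto3 sketch. `admin_create_user` is a real Amazon Cognito Identity Provider operation; the pool ID, username, and temporary password below are placeholders, and the pool ID for your deployment can be found in the Amazon Cognito console.

```python
def build_create_user_request(user_pool_id, username, temp_password):
    """Assemble parameters for AdminCreateUser (pure helper; values are placeholders)."""
    return {
        "UserPoolId": user_pool_id,
        "Username": username,
        "TemporaryPassword": temp_password,
    }


def create_portal_user(user_pool_id, username, temp_password):
    """Create a Cognito user who can sign in to the analytics portal."""
    import boto3  # imported here so the pure helper above works without the SDK

    cognito = boto3.client("cognito-idp")
    return cognito.admin_create_user(
        **build_create_user_request(user_pool_id, username, temp_password)
    )


# Example usage (all values are placeholders):
# create_portal_user("us-east-1_EXAMPLE", "analyst@example.com", "TempPass123!")
```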

⚠ NOTE: Make sure to open the portal URL endpoint in a separate incognito window, as the portal attaches a QuickSight user role that can interfere with your actual role.

We go to the portal URL and log in with our created Cognito user. We’re prompted to change the temporary password and are eventually directed to the QuickSight homepage.

Now we want to upload the audio files of our calls and we can do so with the Upload button.

After successfully uploading our audio files, the audio processing runs through transcription and text analysis. At this point we can click the Call Analytics logo at the top left of the navigation bar to return to the home page.

Now we can drill down into a call to see Amazon Comprehend’s result of the call classifications and turn-by-turn sentiments.

🌎 Lastly…

Regional availability for AWS Contact Center Intelligence (CCI) solutions correspond to the underlying services (Amazon Comprehend, Amazon Kendra, Amazon Lex, Amazon Transcribe, Amazon Translate) used.

We are announcing AWS CCI availability with 12 APN partners: Genesys, UiPath, Vonage, Acqueon, SuccessKPI, and Inference Solutions (Technology partners), and Slalom, Onica/Rackspace, TensorIoT, Quantiphi, Accenture, and HGS Digital (Consulting partners).

Ready to get started? Contact one of the AWS CCI launch partners listed on the AWS CCI web page.

You may also want to see…

👉🏽AWS Quick Start links from post:

Thank you for your time!
~Alejandra 💁🏻‍♀️🤖 and Canela 🐾

Via AWS News Blog https://ift.tt/1EusYcK