Cloud Computing

What is Cloud Computing?



Cloud computing is the delivery of on-demand computing services, from applications to storage and processing power, over the internet on a pay-as-you-go basis.

Why “The Cloud”?


  • Avoid the upfront cost of IT infrastructure
  • Avoid the complexity of maintaining infrastructure
  • Pay for what is used


Traditional IT infrastructure is physical: a network of servers, spread across many locations, designed to deliver exactly what is needed.
Whether your applications share photos with millions of mobile users or support the critical operations of your business, a cloud services platform provides rapid access to IT resources.
You get access to:
  • Servers
  • Storage
  • Databases
  • A set of application services


"The cloud" originally referred to software and services that run on the internet rather than on a local machine: a global network of servers. The computing cloud is made up of millions of computers working together so that it appears as one giant computer.
Benefits
  • Trade capital expense for variable expense: instead of investing heavily in data centers and servers up front, pay only when you consume computing resources, and only for how much you consume.
  • Achieve a lower variable cost than you could on your own, because usage from many customers is aggregated in the cloud.


AWS can achieve higher economies of scale, which means lower pay-as-you-go prices.
Stop guessing infrastructure capacity
Buy too much and you pay for more than you need; buy too little and your infrastructure is inadequate for your business model. The cloud lets you scale up and down as required.
Increase speed and agility
IT resources can be increased or decreased very quickly, with no hardware purchase or implementation lead times. This increases agility for the whole organization: focus on projects that differentiate your business, not on implementing and maintaining IT infrastructure.
Go global
Deploy your application in multiple regions around the world and provide lower latency for customers.


Virtualization


Computers have a tremendous amount of processing power: fast CPU (central processing unit) speeds, ample RAM (random access memory), and large storage capacity. That computing power may not be used efficiently, and much of it sits underutilized.
Virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources.

Virtualization helps solve the problem of underutilized
resources by creating a virtualization layer between
the hardware components and the user.


Multiple virtual computers can run on a single set of hardware; the virtualization layer creates virtual hardware components for each virtual machine.
Benefits:
1. Reduced Hardware Costs
2. Faster Server Provisioning and Deployment
Increased availability with the ability to snapshot VMs, clone VMs, and run redundant VMs.


3. Greatly Improved Disaster Recovery
Move a virtual machine from one server to another quickly and safely, and automate failover during a disaster.
4. Significant Energy Cost Savings
Increased energy savings by using less computer hardware
and therefore less electricity.


5. Increased Productivity - fewer physical servers means fewer of them to maintain and manage. Efficiency also increases because each physical machine can run multiple virtual computers instead of just one.


Virtualization also increases manageability: the ability to move, copy, and isolate virtual machines.

Terminologies
CPU (central processing unit) - sends signals to control
the other parts of the computer.
RAM (random access memory) - the hardware in a
computing device where the operating system (OS),
application programs, and data in current use are kept
so they can be quickly reached by the device's processor.
OS (operating system) - the software that supports a
computer's basic functions, such as scheduling tasks,
executing applications, and controlling peripherals.



Types of Cloud Computing



Infrastructure - IaaS (Infrastructure as a Service)
The lowest level is infrastructure as a service (IaaS), where pre-configured hardware is provided via a virtualized interface. IaaS gives you access to networking features, computers, and data storage space, with management control over IT resources similar to what traditional IT departments have.


Platform - PaaS (Platform as a Service)
The operating environment, including the operating system and application services, is provided for you. PaaS removes the need for you to handle resource procurement, capacity planning, software maintenance, patching, or any of the other functions involved in running your application.


Software - SaaS (Software as a Service)
SaaS refers to end-user applications.
Focus on how to use software.
Example of a SaaS application is web-based email
where you can send and receive email.


A cloud services platform such as Amazon Web Services
owns and maintains the network-connected hardware
required for these application services, while you
provision and use what you need via a web application.


Cloud Computing Deployment Models
Cloud Deployment
A cloud-based application is fully deployed in the cloud, and all parts of the application run in the cloud. Applications in the cloud have either been created in the cloud or have been migrated from existing infrastructure to take advantage of the benefits of cloud computing.
Hybrid Deployment
Connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud. Extend and grow an organization's infrastructure into the cloud while connecting cloud resources to internal systems.
On-premises Deployment
Deploying resources on-premises, using virtualization and resource management tools, is sometimes called the "private cloud." It provides dedicated resources in the same way as legacy IT infrastructure, while using application management and virtualization technologies to try to increase resource utilization.
Global Infrastructure
AWS has active customers in more than 190 countries. Its global infrastructure lets you achieve lower latency and higher throughput, and your data resides only in the AWS Region you specify.
An AWS Region is a physical location in the world where AWS clusters multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. They let you operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.


Benefits of AWS Security
• Keep Your Data Safe:  AWS infrastructure puts strong
safeguards in place. All data is stored in highly secure AWS
data centers.
• Meet Compliance Requirements: AWS manages dozens of
compliance programs in its infrastructure. Segments of your
compliance have already been completed.
• Save Money: Cut costs while maintaining the highest standard of security, without having to manage your own facility.
• Scale Quickly: Security scales with your AWS Cloud usage.
AWS Cloud Compliance
AWS keeps robust controls in place to maintain security and data protection in the cloud. Compliance responsibilities are shared: by tying together governance-focused, audit-friendly service features with applicable compliance or audit standards, AWS builds on traditional programs. Its infrastructure is designed and managed in alignment with security best practices and a variety of IT security standards, including:
• SOC 1/ISAE 3402, SOC 2, SOC 3
• FISMA, DIACAP, and FedRAMP
• PCI DSS Level 1
• ISO 9001, ISO 27001, ISO 27017, ISO 27018
What is Amazon Web Services?


Amazon Web Services (AWS) is a secure cloud services platform, offering compute power, database storage, content delivery, and other functionality to help businesses scale and grow.


AWS provides a broad set of infrastructure services:
  • computing power
  • storage options
  • networking and databases
delivered as a utility: on-demand, with pay-as-you-go pricing. It also offers a wide range of database engines, server configurations, encryption, and big data tools.


Services are quick to provision, with no upfront capital expense.


Respond quickly to changing business requirements.


Security in the cloud is increasingly recognized as better than on-premises security: security modules and strong physical security all contribute to a more secure way to manage your business.


Controlling, auditing, and managing identity, configuration, and usage come built into the platform to meet your compliance, governance, and regulatory requirements.
The benefits of Cloud Computing with AWS


Switching to Cloud Computing offers numerous benefits
to you and your company. AWS offers benefits such as:
Capital Expense vs Variable Expense
Traditionally, you pay upfront for servers before you can use them. In the cloud, you pay only when you consume computing resources, and only for how much you consume.

Benefiting from Economies of Scale
Because your usage is bundled with that of every other AWS customer, AWS achieves higher economies of scale, which translates into a lower variable cost than you could achieve on your own.

Right Capacity
How much capacity will your application need? In the cloud there are no hard limits on capacity and no buying of excess; adjust as needed.

Increased Speed / Agility
Resources are readily available, reducing the time and cost of development.

Stop spending money on data centers
Focus on your business rather than the cost of racking, stacking, and powering servers.

Go global in minutes
Deploy your application in multiple regions around the world and provide lower latency to customers.



Why should you move to The Cloud?



Flexibility

Bandwidth demands fluctuate. You can scale your cloud capacity up and back down again as needed; the flexibility is built into the service.




Disaster Recovery
Robust disaster recovery is something small businesses often cannot afford on their own. Implementing cloud-based backup and recovery solutions saves time and avoids a large upfront investment.


Auto Update
Servers are online and off-premises, and suppliers roll out regular software updates, including security updates, so there is no need to spend time maintaining the system.




Capital Expenditure Free
Cloud computing cuts out the cost of hardware.
Increased Collaboration
Teams can access, edit, and share documents. Cloud-based workflow and file-sharing apps help them make updates in real time and give them full visibility of their collaborations.



Work Anywhere
With cloud computing, if you've got an internet connection you can be at work, since the cloud exists on the web. And with cloud services offering mobile apps, you're not restricted by which device you've got to hand.



Controlling your Documents
With cloud computing, all files are stored centrally and everyone sees one version. Greater visibility means improved collaboration.



Security
Lost laptops are a billion-dollar business problem, and the greater loss is often the sensitive data inside them. When data is stored in the cloud, you can access it no matter what happens to your machine, and you can remotely wipe data from lost laptops so it doesn't get into the wrong hands.



Competitiveness
Moving to the cloud gives access to enterprise-class technology and allows businesses to act faster than established competitors. Pay-as-you-go service and cloud business applications mean small outfits can run with the same tools as larger firms and disrupt the market.



Environmentally Friendly
When your cloud needs fluctuate, your server capacity
scales up and down to fit. So you only use the energy
you need and you don’t leave oversized carbon
footprints.


Cost benefits of using AWS


Pay as you go


No minimum commitments or long-term contracts are required: no capital expense, just a variable cost where you pay for what you use.
For compute resources, you pay on an hourly basis from the time you launch a resource until the time you terminate it.
For data storage and transfer, you pay on a per-gigabyte basis for the underlying infrastructure and services you consume.
Turn off your cloud resources and stop paying for them when you don't need them.
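As a concrete illustration, here is a minimal back-of-the-envelope calculation of a pay-as-you-go bill. The rates are hypothetical placeholders, not actual AWS prices:

    # Hypothetical pay-as-you-go rates (placeholders, not real AWS prices)
    HOURLY_INSTANCE_RATE = 0.10   # assumed $/hour for one compute instance
    STORAGE_RATE_PER_GB = 0.023   # assumed $/GB-month for object storage

    hours_running = 12 * 30      # instance runs 12 hours a day for a month
    gb_stored = 500              # data kept in storage all month

    bill = hours_running * HOURLY_INSTANCE_RATE + gb_stored * STORAGE_RATE_PER_GB
    print(f"Estimated monthly bill: ${bill:.2f}")  # terminated resources cost nothing

Halving the hours the instance runs halves the compute portion of the bill, which is the variable-expense model in action.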


Pay less when you reserve


For certain products, you can invest in reserved
capacity.
Pay an upfront fee and get a discounted hourly rate.


Pay even less per unit by using more


You save more as you grow bigger. For storage
and data transfer, pricing is tiered. The more you
use, the less you pay per gigabyte. For compute,
you get volume discounts up to 20% when you
reserve more.


Pay even less as AWS grows


AWS is constantly focused on reducing data center hardware costs, improving operational efficiencies, lowering power consumption, and lowering the cost of doing business. Its growing economies of scale result in savings that are passed back to customers in the form of lower pricing.



Amazon Elastic Compute Cloud (EC2)
Amazon EC2 is a web service that provides secure,
resizable compute capacity in the cloud.
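As a sketch of how that capacity is requested programmatically, the snippet below launches one small instance with boto3, the AWS SDK for Python. The region, AMI ID, and instance type are illustrative placeholders:

    import boto3

    # Launch a single virtual server (EC2 instance).
    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID; valid IDs vary by region
        InstanceType="t2.micro",          # a small, inexpensive instance type
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])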
AWS Lambda
AWS Lambda lets you run code without provisioning
or managing servers. You pay only for the compute
time you consume.
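A minimal sketch of calling an already-deployed function with boto3; the function name "my-function" is a placeholder:

    import json
    import boto3

    # Invoke a deployed Lambda function and read its JSON result.
    client = boto3.client("lambda", region_name="us-east-1")
    response = client.invoke(
        FunctionName="my-function",            # placeholder function name
        Payload=json.dumps({"key": "value"}),  # event passed to the function
    )
    print(json.loads(response["Payload"].read()))  # assumes the function returns JSON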
Amazon S3
Amazon S3 is object storage built to store and retrieve any amount of data from anywhere, letting organizations securely collect, store, and analyze their data at massive scale.
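For example, storing and retrieving one object with boto3 might look like the sketch below; the bucket name is a placeholder and must already exist (bucket names are globally unique):

    import boto3

    # Write an object to a bucket, then read it back.
    s3 = boto3.client("s3")
    s3.put_object(Bucket="my-example-bucket",      # placeholder bucket name
                  Key="notes/hello.txt",
                  Body=b"Hello, S3!")
    obj = s3.get_object(Bucket="my-example-bucket", Key="notes/hello.txt")
    print(obj["Body"].read().decode())  # -> Hello, S3!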


Amazon DynamoDB
Amazon DynamoDB is a fast and flexible database service
for all applications that need consistent, single-digit
millisecond latency at any scale.
It is a fully managed cloud database and supports both
document and key-value store models.
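As a sketch of the key-value model, the snippet below writes and reads one item with boto3; it assumes a table named "Users" with partition key "user_id" already exists (both names are hypothetical):

    import boto3

    # Put a key-value item, then fetch it by its key.
    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
    table = dynamodb.Table("Users")  # assumed existing table
    table.put_item(Item={"user_id": "42", "name": "Ada", "plan": "free"})
    item = table.get_item(Key={"user_id": "42"})["Item"]
    print(item["name"])  # -> Ada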

Amazon Relational Database Services
Amazon RDS, or Relational Database Services,
is a managed service that makes it easy to set up,
operate, and scale a relational database in the cloud.

A relational database is a collection of data items with
pre-defined relationships between them.
These items are organized as a set of tables with columns
and rows. Tables are used to hold information about
the objects to be represented in the database.
Each column in a table holds a certain kind of data and
a field stores the actual value of an attribute.
The rows in the table represent a collection of related
values of one object or entity.
Each row in a table could be marked with a unique
identifier called a primary key, and rows among
multiple tables can be made related using foreign keys.

What is SQL?
SQL or Structured Query Language is the primary
interface used to communicate with Relational Databases.
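The same SQL concepts (tables, primary keys, foreign keys, joins) apply whether the engine runs locally or on Amazon RDS. A self-contained sketch using Python's built-in sqlite3 module:

    import sqlite3

    # Two related tables: books.author_id is a foreign key into authors.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE books (
            id INTEGER PRIMARY KEY,
            title TEXT,
            author_id INTEGER REFERENCES authors(id)
        );
        INSERT INTO authors VALUES (1, 'Mary Shelley');
        INSERT INTO books VALUES (1, 'Frankenstein', 1);
    """)
    # Join the tables through the foreign-key relationship.
    row = conn.execute("""
        SELECT books.title, authors.name
        FROM books JOIN authors ON books.author_id = authors.id
    """).fetchone()
    print(row)  # -> ('Frankenstein', 'Mary Shelley')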


Amazon API Gateway


Amazon API Gateway is a fully managed service that
makes it easy for developers to publish, maintain,
monitor, and secure *APIs at any scale.

What is an API?
API stands for Application Programming Interface.
An API isn’t the same as the remote server — rather it is
the part of the server that receives requests and sends
responses. When a company offers an API to their
customers, it just means that they’ve built a set
of dedicated URLs that return pure data responses.
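Calling such a URL from code might look like the sketch below, using the third-party requests library; the endpoint is a hypothetical example, not a real service:

    import requests

    # Ask the server's API for a resource; the response is pure data (JSON).
    response = requests.get(
        "https://api.example.com/v1/users/42",  # hypothetical endpoint
        headers={"Accept": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    print(response.json())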


Amazon Machine Images (AMI)


An AMI provides the information required to launch an instance, which is a virtual server in the cloud.
You must specify a source AMI when you launch an
instance.
An AMI includes the following:
-A template for the root volume for the instance
(for example, an operating system, an application server,
and applications)
-Launch permissions that control which AWS accounts
can use the AMI to launch instances
-A block device mapping that specifies the volumes
to attach to the instance when it's launched.
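A small sketch that lists the AMIs owned by your own account with boto3 and prints fields corresponding to the attributes described above:

    import boto3

    # List AMIs owned by this account and show a few of their attributes.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    for image in ec2.describe_images(Owners=["self"])["Images"]:
        print(image["ImageId"],
              image.get("Name"),
              image.get("BlockDeviceMappings"))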


Amazon Elastic Load Balancing (ELB)
This service automatically distributes your incoming
application traffic across multiple targets, such as EC2
instances. It monitors the health of registered targets
and routes traffic only to the healthy targets.
Elastic Load Balancing supports three types of load
balancers: Application Load Balancers, Network Load
Balancers, and Classic Load Balancers.
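For instance, the health of the targets registered behind an Application Load Balancer can be inspected with boto3 as sketched below; the target group ARN is a placeholder:

    import boto3

    # Ask the load balancer which registered targets are currently healthy.
    elbv2 = boto3.client("elbv2", region_name="us-east-1")
    health = elbv2.describe_target_health(
        TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:"
                       "123456789012:targetgroup/my-targets/0123456789abcdef"
    )
    for desc in health["TargetHealthDescriptions"]:
        print(desc["Target"]["Id"], desc["TargetHealth"]["State"])  # e.g. "healthy"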
Amazon Virtual Private Cloud (VPC)
This service enables you to launch Amazon Web Services
(AWS) resources into a virtual network that you've
defined. This virtual network closely resembles a
traditional network that you'd operate in your own
data center.
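Creating such a virtual network programmatically takes only a few boto3 calls; the CIDR ranges below are arbitrary example values:

    import boto3

    # Create a virtual network and one subnet inside it.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
    subnet = ec2.create_subnet(VpcId=vpc["VpcId"],
                               CidrBlock="10.0.1.0/24")["Subnet"]
    print(vpc["VpcId"], subnet["SubnetId"])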

Amazon Route 53


This service is a highly available and scalable cloud
Domain Name System (DNS) web service.
It is designed to give developers and businesses an
extremely reliable and cost effective way to route end
users to Internet applications by translating names
like www.example.com into the numeric IP addresses
like 192.0.2.1 that computers use to connect to each other.
Domain Name System (DNS) - translates human readable
domain names (for example, www.amazon.com) to machine
readable IP addresses (for example, 192.0.2.44).
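The translation itself can be seen with one line of Python's standard library:

    import socket

    # DNS resolution: human-readable name in, numeric IP address out.
    print(socket.gethostbyname("www.example.com"))  # e.g. 93.184.216.34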
Amazon CloudFront


This service is a web service that speeds up distribution
of your static and dynamic web content, for example, .html,
.css, .php, image, and media files, to end users.
CloudFront delivers your content through a worldwide
network of edge locations.
When an end user requests content that you're serving
with CloudFront, the user is routed to the edge location
that provides the lowest latency, so content is delivered
with the best possible performance.
Latency - the delay before a transfer of data begins
following an instruction for its transfer.
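Latency is easy to observe directly; the sketch below times how long the first byte of a response takes to arrive from a URL, which is the figure an edge location close to the user helps reduce:

    import time
    import urllib.request

    # Time how long the first byte of a response takes to arrive.
    start = time.perf_counter()
    with urllib.request.urlopen("https://www.example.com/", timeout=10) as resp:
        resp.read(1)  # first byte received
    print(f"Latency: {(time.perf_counter() - start) * 1000:.0f} ms")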
Amazon CloudWatch


This service provides a reliable, scalable, and
flexible monitoring solution that you can start using
within minutes. You no longer need to set up, manage,
and scale your own monitoring systems and infrastructure.
Use CloudWatch to monitor your AWS resources and the applications you run on AWS in real time; to send system events from AWS resources to AWS Lambda functions, Amazon SNS topics, streams in Amazon Kinesis, and other target types; and to monitor, store, and access your log files from Amazon EC2 instances, AWS CloudTrail, or other sources.
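Publishing a custom application metric is a single boto3 call, as sketched below; the namespace and metric name are placeholders you define yourself:

    import boto3

    # Publish one data point for a custom metric.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    cloudwatch.put_metric_data(
        Namespace="MyApp",                     # placeholder namespace
        MetricData=[{
            "MetricName": "SignupsProcessed",  # placeholder metric name
            "Value": 17,
            "Unit": "Count",
        }],
    )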
AWS Elastic Beanstalk


You can quickly deploy and manage applications in the
AWS Cloud without worrying about the infrastructure
that runs those applications.

AWS CloudFormation
This service enables you to create and provision
AWS infrastructure deployments predictably and
repeatedly. It helps you leverage AWS products such
as Amazon EC2, Amazon Elastic Block Store, Amazon
SNS, Elastic Load Balancing, and Auto Scaling to build
highly reliable, highly scalable, cost-effective applications
in the cloud without worrying about creating and
configuring the underlying AWS infrastructure.
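As a minimal sketch, the snippet below declares a single S3 bucket in an inline template and asks CloudFormation to provision it via boto3; the stack name and resource name are placeholders:

    import json
    import boto3

    # Declare infrastructure as a template, then create it as a stack.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AssetsBucket": {"Type": "AWS::S3::Bucket"},  # one declared resource
        },
    }
    cfn = boto3.client("cloudformation", region_name="us-east-1")
    cfn.create_stack(StackName="demo-stack",  # placeholder stack name
                     TemplateBody=json.dumps(template))

Deleting the stack later tears down everything it created, which is what makes deployments repeatable.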



AWS Command Line Interface (AWS CLI)


This service is a unified tool that provides a consistent
interface for interacting with all parts of AWS. AWS CLI
commands for different services are covered in the
accompanying user guide, including descriptions,
syntax, and usage examples.
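The CLI can also be driven from scripts; the sketch below shells out to a real command, aws ec2 describe-instances, and assumes the AWS CLI is installed and configured:

    import subprocess

    # Run an AWS CLI command and capture its JSON output.
    result = subprocess.run(
        ["aws", "ec2", "describe-instances", "--output", "json"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)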


Amazon Case Studies


Discovery Communications
The Challenge
Discovery needed to upgrade its website infrastructure,
but wanted to avoid a costly upfront one-time expense for
updating their hardware.
Upgrading would have taken considerable time to accomplish
for a three-person team from Discovery Communications,
between acquiring the hardware, configuring it, and moving
the data to the new system. Discovery also had multiple
delivery engines powering their websites, and wanted to
consolidate to make their infrastructure easier to manage.
In addition, the company needed a solution that would allow
them the flexibility to pay for only what they used, and the
ability to scale quickly to meet demand.
Why AWS
Discovery assessed multiple cloud solutions, but none offered
the flexibility and pricing of AWS. “AWS was the most mature
offering available,” says Igor Brezac, Chief Systems Architect,
Digital Media. “The pricing was excellent. We were also
attracted by the ability to get new instances up and running
at a moment’s notice.” Discovery is now running all of its
services on AWS for its US-based digital properties.
Discovery Communications is running about 150 instances
of Amazon Elastic Compute Cloud (Amazon EC2), all
of which use Amazon Elastic Block Store
(Amazon EBS) storage. Discovery uses Amazon Machine
Images (AMI) that are built with a custom version of
Ubuntu (OS). Amazon Elastic Load Balancing
(Amazon ELB) handles load balancing both externally
and internally for Discovery, inside the Amazon Virtual
Private Cloud (Amazon VPC). The company uses Amazon
Simple Storage Service (Amazon S3) to store static
content and host a few websites. Discovery also uses Amazon
Route 53 in combination with Amazon ELB for its domain
name service. Discovery’s static assets are delivered globally
by Amazon CloudFront’s distributed edge servers.
In addition, Discovery uses Amazon CloudFront’s dynamic
content acceleration feature for services like image resizing
and the new Discovery website. “Having a content
delivery network (CDN) that delivers both static and dynamic
content, including API acceleration, was important to us,”
Brezac says.


Discovery began implementing AWS in January 2012, and
completed migrating the site in June 2013. “We migrated more
than 40 sites to AWS without missing a beat,” Brezac says.
“We now host all our digital media on AWS. Using the AWS Cloud
gives us great capacity to expand or shrink our infrastructure
as business requirements change—we now have an easy way to
re-architect any of our sites.”
“Without AWS, it would be harder to focus on business
initiatives without having to manage hardware and
infrastructure,” Brezac said. In addition, the Digital Media
division has evolved from administrators to system engineers,
growing their skills and providing more benefit to the company.
Discovery Communications particularly values the horizontal
scaling that AWS makes possible. “We’re able to scale to each
part of the stack horizontally,” says Eric Connell, Senior
Systems Engineer. “So if we’re running out of capacity in
any piece of the stack, that piece of the stack automatically
scales up to increase capacity.”
“Without using the AWS API and services, we wouldn’t be able
to provide our staff with the tools we do,” concludes Shawn Stratton,
Senior Systems Engineer / Architect. “Our entire continuous
delivery system and our development platform are built around
using the AWS API.”
Discovery uses CDNs for static, dynamic, and API delivery.
“Amazon CloudFront was able to offer us the scalability and
low latency we expect from a CDN with cost savings of
20-25 percent and better manageability,” Brezac says.
“Amazon CloudFront APIs and tight integration with other
services like Amazon S3, Elastic Load Balancing, and
Amazon Route 53 have helped us easily get started and
manage our content delivery.”


Kaplan
The Challenge
Today Kaplan consists of many divisions with varying IT
infrastructure needs and fluctuating usage patterns, including
Kaplan’s Test Prep division (KTP), which prepares students for
admissions tests like the SAT or ACT. To support KTP, Kaplan
was running its development and testing environments in a
*Tier 1 collocated data center in New York City. When Tropical
Storm Sandy swept through the city, the hosting center went
down for approximately two weeks.
*A Tier 1 data center is the basic-intermediate level of
data center tiers. A Tier 1 data center only has essential
components or data center infrastructure and is not suited for
enterprise or mission critical data center services, as it lacks
any redundant source of servers, network/Internet links,
storage, power, and cooling resources. Typically, a Tier 1 data
center guarantees 99.671 percent availability and has an
average of 28.8 hours of downtime per year.


“Thankfully, our production environment remained
operational, but having to worry about what could happen
was always on our mind,” says Kaplan Executive Director of
Technology Services Chad Marino. Kaplan’s manual backup
and recovery resources were also based in New York City.
“Having our backup environment in the same city as our
production environment is also a major concern that needed
to be addressed,” explains Marino.
Additionally, as the business grew in size and its IT
architecture increased in complexity, it became progressively
difficult for Kaplan to meet security standards and
organization compliances. Kaplan needed to find a flexible IT
infrastructure that would allow it to grow while improving
overall resiliency, security, and agility.


Why AWS
Kaplan was running 12 different data centers across the
organization and started moving its applications to AWS to
consolidate its infrastructure. According to Marino,
“One of the things driving us to move to the cloud was dealing
with end-of-life hardware and running out of space in our data
center.”
Kaplan was also attracted to the maturity of AWS offerings.
“Amazon Relational Database Service (Amazon RDS)
allows our database admin team to focus less on the day-to-day
maintenance and use their time to work on enhancements.
And Elastic Load Balancing has allowed us to move away from
expensive and complicated load balancers and retain the
required functionality,” says Marino.
Migrating to The Cloud
Tropical Storm Sandy prompted the company to migrate
KTP and additional shared services, part of the Kaplan Higher
Education and Kaplan International divisions, to AWS,
totaling up to 900 GB of data. “We started in May 2013 by
moving the development, quality assurance, and staging
environments to AWS,” says project manager Ravi Munjuluri.
“We completed that part of the transition by October and
began planning the production migration. By January 2014,
we began moving the pieces of the application stack in the
production environment over one by one to minimize the
impact on the business. Our final push was in August and
it all occurred over a weekend. We started on Friday and
were up and running by Sunday morning.”
As part of the move to the cloud, Kaplan migrated about
50 applications and 50 sub-applications within those
applications in its stack. In the collocated data center,
the division used a storage area network (SAN) to connect
to servers, processors, and the Solaris operating system to
six Oracle Database 10g and Windows SQL databases.
Kaplan migrated its application stack to Amazon Virtual
Private Cloud (Amazon VPC), hosting the databases on a mix
of Amazon Elastic Compute Cloud (Amazon EC2) instances
using Amazon Linux Machine Images and Amazon Relational
Database Service (Amazon RDS) for Oracle. Marino says,
“Our goal is to move completely to RDS for all databases for
ease of management and resizing capabilities.”
To monitor its resources, Kaplan uses Amazon CloudWatch,
a service that collects and tracks usage metrics and manages
alarms. Using CloudWatch also allows the company to optimize
its resources by, for example, right-sizing its instances when
utilization rates fall.
The Kaplan team designed the migration of data around
Oracle’s built-in tools. “We used AWS PERL scripts to migrate
the data, which were really great,” says Avi Hack, director of
systems architecture and engineering. With the combination
of scripts and AWS Elastic Beanstalk, the company was
able to automate time-consuming processes and pre-stage
the migration environment, which made the overall process
much faster and easier.
As part of its migration, Kaplan decided to leverage multiple
AWS regions and *Availability Zones, including some in the
United States, Asia Pacific, and Europe. The company uses
Amazon Route 53 as its DNS solution to route user traffic to
the nearest Availability Zone and as a result improve the
overall user experience, reducing latency. “Using multiple
regions allows us to put our data closer to the customer for a
better end-user experience,” says Marino.
*Availability zones (AZs) are isolated locations within data
center regions from which public cloud services originate and
operate. Regions are geographic locations in which public cloud
service providers' data centers reside.


Kaplan’s preparation made much of the transition to AWS
seamless. Kaplan also leaned on AWS Support,
Business-level, throughout the migration for
guidance and best practices. “Leveraging AWS
Support has been key in addressing issues that we may
experience,” says Marino.
More than 250 people from development, operations,
architecture, and database teams were involved in the
migration to AWS. “In order to plan a migration of this
size, it is critical to work with all teams within IT to pull
it off, from development down to the infrastructure
operations team,” says Marino.
After moving the KTP division to AWS, Kaplan sold the
legacy equipment and closed the data center. The company
continues to re-architect applications for various divisions
as it continues migrating to AWS, and today Kaplan has
reduced its data center footprint from 12 to 4 facilities.


Benefits
Besides a more reliable infrastructure and less latency, Kaplan
has also gained better insight into the cost of its applications
and systems. “By tagging all instances in AWS, we are now able
to look at specific costs from the application layer down to
every resource associated with an application. This has allowed
us to surface the hidden costs for operating applications,” says
Marino.
Kaplan anticipates further improvement to the
development process using AWS. Hack says,
“By using AWS CloudFormation and the
AWS Command Line Interface (CLI), we have a level
of control and standardization that we could not achieve
within our on-premises data centers. We can now easily
spin up environments and remove them when we are
finished with them.” Marino explains, “This allows us to
take advantages of the strength of AWS while maintaining
the strengths of our on-premises data center, and gives
our developers the time to retool our applications to run
on AWS.” The Kaplan team says it will continue to look
for opportunities where it makes sense to move systems
and applications away from traditional data centers and
into the cloud.

NASA
NASA began providing online access to photos, video, and
audio in the early 2000s, when media capture began to
shift from analog and film to digital. Before long, each of
NASA’s 10 field centers was making its imagery available
online, including digitized versions of some older assets.
Therein was the challenge: “With media in so many
different places, you needed institutional knowledge of
NASA to know where to look,” says Rodney Grubbs,
imagery experts program manager at NASA. “If you
wanted a video of the space shuttle launch, you had to go
to the Kennedy Space Center website. If you wanted
pictures from the Hubble Space Telescope, you went to
the Goddard Space Flight Center website. With 10
different centers and dozens of distributed image
collections, it took a lot of digging around to find what you
wanted.”
Early efforts to provide a one-stop shop consisted of
essentially “scraping” content from the different sites,
bringing it together in one place, and layering a search
engine on top. “In large part, those initial efforts were
unsuccessful because each center categorized its imagery
in different ways,” says Grubbs. “As a result, we often had
five to six copies of the same image, each described in
different ways, which made searches difficult and delivered
a poor user experience.”


In 2011, NASA decided that the best approach to address
this issue was to start over. By late 2014, all the necessary
pieces for a second attempt were in place:


• The Imagery Experts Program had developed and published
a common metadata standard, which all NASA’s centers had
adopted.
• The Web Enterprise Service Technologies (WESTPrime)
service contract, one of five agency-wide service contracts
under NASA’s Enterprise Services program, provided a
delivery vehicle for building and managing the new site.
• The Federal Risk and Authorization Management Program
(FedRAMP), which provides a standardized approach to
security assessment, authorization, and continuous monitoring
for cloud products and services.


“We wanted to build our new solution in the cloud for two
reasons,” says Grubbs. “By 2014, like with many
government agencies, NASA was trying to get away from
buying hardware and building data centers, which are
expensive to build and manage. The cloud also provided the ability to scale with ease, as needed, paying for only the capacity we use instead of having to make a large up-front investment.”
Development of the new NASA Image and Video Library
was handled by the Web Services Office within NASA’s
Enterprise Service and Integration Division. Technology
selection, solution design, and implementation was
managed by InfoZen, the WESTPrime contract service
provider. As an Advanced Consulting Partner of the AWS
Partner Network (APN), InfoZen chose to build the solution
on Amazon Web Services (AWS). “Amazon was the largest
cloud services provider, had a strong government cloud
presence, and offered the most suitable cloud in terms of
elasticity,” recalls Sandeep Shilawat, Cloud Program
Manager at InfoZen.
NASA formally launched its Image and Video Library in March
2017. Key features include:


• A user interface that automatically scales for PCs, tablets,
and mobile phones across virtually every browser and
operating system.
• A search interface that lets people easily find what they’re looking for, including the ability
to choose from gallery view or list view and to narrow-down search results by media type and
/or by year.
• The ability to easily download any media found on the site—or share it on Pinterest,
Facebook, Twitter, or Google+.
• Access to the metadata associated with each asset, such as file size, file format, which
center created the asset, and when it was created. When available, users can also view
EXIF/camera data for still images such as exposure, shutter speed, and lens used.
• An application programming interface (API) for automated uploads of new
content—including integration with NASA’s existing authentication mechanism.
The NASA Image and Video Library is a cloud-native solution, with the front-end
web app separated from the backend API. It runs as immutable infrastructure in a
fully automated environment, with all infrastructure defined in code to support
continuous integration and continuous deployment (CI/CD).


In building the solution, InfoZen took advantage of the following Amazon Web Services:

Amazon Elastic Compute Cloud (Amazon EC2), which provides secure, resizable
compute capacity in the cloud. This enables NASA to scale up under load and scale down
during periods of inactivity to save money, and pay for only what it uses.
Elastic Load Balancing (ELB), which is used to distribute incoming traffic across
multiple Amazon EC2 instances, as required to achieve redundancy and fault-tolerance.
Amazon Simple Storage Service (Amazon S3), which supports object storage for
incoming (uploaded) media, metadata, and published assets.
Amazon Simple Queue Service (SQS), which is used to decouple incoming jobs from
pipeline processes.
Amazon Relational Database Service (Amazon RDS), which is used for automatic
synchronization and failover.
Amazon DynamoDB, a fast and flexible NoSQL database service, which is used to track
incoming jobs, published assets, and users.
Amazon Elastic Transcoder, which is used to transcode audio and video to various
resolutions.
Amazon CloudSearch, which is used to support searching by free text or fields.
Amazon Simple Notification Service (SNS), which is used to trigger the processing
pipeline when new content is uploaded.
AWS CloudFormation, which enables automated creation, updating, and destruction of
AWS resources. InfoZen also used the Troposphere library, which enables the creation of
objects via AWS CloudFormation using Python instead of hand-coded JSON—each object
representing one AWS resource such as an instance, an Elastic IP (EIP) address, or a security
group.
Amazon CloudWatch, which provides a monitoring service for AWS cloud resources and
the applications running on AWS.


Through its use of AWS, with support from InfoZen, NASA is making its vast wealth
of pictures, videos, and audio files—previously in some 60 “collections” across NASA’s
10 centers—easily discoverable in one centralized location, delivering these benefits:


Easy Access to the Wonders of Space. The Image and Video Library automatically
optimizes the user experience for each user’s particular device. It is also fully compliant with
Section 508 of the Rehabilitation Act, which requires federal agencies to make their technology
solutions accessible to people with disabilities. Captions can be turned on or off for
videos played on the site, and text-based caption files can be downloaded for any video.
Built-in Scalability. All components of the NASA Image and Video Library are built to
scale on demand, as needed to handle usage spikes. “On-demand scalability will be invaluable
for events such as the solar eclipse that’s happening later this summer—both as we upload
new media and as the public comes to view that content,” says Bryan Walls, Imagery Experts
Deputy Program Manager at NASA.
Good Use of Taxpayer Dollars. By building its Image and Video Library in the cloud,
NASA avoided the costs associated with deploying and maintaining server and storage
hardware in-house. Instead, the agency can simply pay for the AWS resources it uses at any
given time.
