
Thursday, 19 February 2015

Migrating Enterprise Apps to AWS? Standardize your software estate

More and more CIOs and enterprise IT managers are under pressure to move portions of their IT landscape onto public clouds, notably AWS. Typically, the first step in the journey is an assessment of the existing application landscape, segmenting and classifying the apps that are feasible for cloud migration. The actual migration may be a simple lift and shift, a minor refactor, or a complete re-architecture, depending on the application technology and the benefits of moving it to the cloud. These approaches are becoming common, with many cloud migration vendors and professionals adopting them. The migration itself, however, offers an opportunity for IT to streamline and standardize the application landscape. Driving standardization, baselining criticality, redefining development/QA usage and revisiting security guidelines are all worth considering prior to migrating the application landscape, as they ensure that the cloud move is robust, stays relevant and delivers the expected benefits. These aspects are over and above the business, technical and operational facets that IT professionals consider while assessing cloud migration feasibility.
One such aspect is standardization of the software estate. Large enterprises often have offices in many geographies, with IT resources provisioned and managed by local IT teams or local datacenter providers. This tends to cause a sprawl in the software estate, with different versions of operating systems, databases and middleware in use. The sprawl is normally dictated by the applications being run (typically supported by local vendors), the vendors selected (for infrastructure as well as support) and the willingness of application teams to refresh to the latest versions.
While migrating scores of such applications to the cloud, it is worthwhile to take stock and restrict the versions in use to a minimum, especially when moving to AWS. One approach is to define a recommended software stack version and a minimum software stack version. The idea is to push application teams to conform to the recommended stack, with an exception process that lets them fall back to the minimum stack.
For example, the recommended software stack may be Windows 2012 with SQL 2012, while the fallback may be Windows 2008 with SQL 2008. The advantage here is that AWS offers all these versions at the same cost, which encourages application teams to consider application upgrades and refreshes without worrying about base software upgrade costs. It also makes it easier for enterprise IT to ensure that upgrades and security patches are completed on time, since the current versions are still under vendor support.
The standardization exercise is best done after analyzing a significant part of, if not the entire, estate. This ensures that the standardized estate is small and manageable, without the many variations that make manageability difficult. Standardization should aim to publish a catalog of software to the various application teams, for use with both existing and newer applications. This is an important step for enterprise IT towards delivering IT as a Service.
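The recommended/minimum stack policy described above can be sketched as a simple conformance check run over the application inventory. This is an illustrative sketch only; the catalog entries and the inventory records below are hypothetical examples, not a prescribed catalog.

```python
# Hypothetical standard catalog: one recommended and one minimum stack.
RECOMMENDED = {"os": "Windows 2012", "db": "SQL 2012"}
MINIMUM = {"os": "Windows 2008", "db": "SQL 2008"}

def classify_stack(app):
    """Classify an application's stack against the standard catalog."""
    if app["os"] == RECOMMENDED["os"] and app["db"] == RECOMMENDED["db"]:
        return "recommended"
    if app["os"] in (MINIMUM["os"], RECOMMENDED["os"]) and \
       app["db"] in (MINIMUM["db"], RECOMMENDED["db"]):
        return "exception (minimum)"
    return "non-conformant: upgrade required before migration"

# Example inventory (made-up application names and versions).
inventory = [
    {"name": "payroll", "os": "Windows 2012", "db": "SQL 2012"},
    {"name": "crm",     "os": "Windows 2008", "db": "SQL 2008"},
    {"name": "legacy",  "os": "Windows 2003", "db": "SQL 2005"},
]
for app in inventory:
    print(app["name"], "->", classify_stack(app))
```

Run against the full estate, a report like this tells IT which teams conform, which need an exception, and which must upgrade before migration.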

Tuesday, 30 December 2014

Disaster? The cloud comes to the rescue


Leveraging AWS for quick and efficient DR sites

Disaster usually strikes without warning. Most enterprises try to insure against all forms and types of disaster by the various means and methods that have worked in their previous experience. In most cases the Business Continuity Plan works on one principle: segment, duplicate and secure.
The most common strategy for disaster recovery is segmentation of teams and duplication of infrastructure and data. Essentially, positioning teams and data centres in different locations is generally considered the best DR strategy. It is assumed that disaster will not strike in two or more places simultaneously, so safety is assured! What none of these methods promise, however, is foolproof recovery at reasonable cost.
The cost of redundant IT infrastructure and the additional team adds to the budget too. For most enterprises it is not only a capital investment; there is also an opex component of running and maintaining DR sites on an ongoing basis. In addition, there are instances where regulation requires DR sites to be located in specified regions, depending on the size of the company. These sites then need to be maintained and audited for compliance with BCP strategies. The sheer volume of planning and maintenance that goes into the upkeep of these multi-locational sites is an expensive logistical and operational nightmare. Enterprises tend to compromise on their DR strategy due to these financial and operational challenges.
The cloud provides a solution to all these issues, while ensuring multiple advantages over physical data centres, even across multiple locations. The biggest advantage is that when creating a DR site on the cloud, only the requisite infrastructure needs to be provisioned and paid for, on an hourly basis. Enterprises save additional costs compared to a physical data centre as well, because an on-premises data centre has fixed server capacity, and if it is not utilized, it is again a wasted cost.
In fact the public cloud is an ideal environment for disaster recovery planning, since it is pay per use, completely scalable and easier to manage than multiple locations.
AWS provides the flexibility and scalability to set up DR based on the business requirement: either a mission-critical, always-on setup with the complete environment provisioned and running, or a bare-minimum setup with critical data infrastructure replicating and options to scale business applications as required. This flexibility allows the business to be up and running within a few hours of a disaster, if not minutes. CSS Corp helps define the right DR strategy for customers based on their RPO and RTO requirements, and helps them implement and manage the right kind of DR, from a lightweight, cost-effective pilot light to a fully functional DR setup catering to mission-critical business. CSS Corp also provides a fully integrated, automated platform using the AWS APIs that facilitates the provisioning and configuration of infrastructure resources, ensures consistency of recovered data, and executes periodic drills to verify that the DR setup meets the RPO and RTO objectives.
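The spectrum described above, from pilot light to always-on, can be sketched as a decision helper that maps RPO/RTO targets to a DR pattern. The thresholds below are illustrative assumptions, not a CSS Corp or AWS recommendation; real thresholds come out of the business-impact analysis.

```python
def choose_dr_pattern(rpo_minutes, rto_minutes):
    """Map recovery point / recovery time objectives to a DR pattern.
    Threshold values are hypothetical, for illustration only."""
    if rto_minutes <= 5 and rpo_minutes <= 1:
        return "hot standby (always-on, fully provisioned environment)"
    if rto_minutes <= 60:
        return "warm standby (scaled-down copy, scale up on failover)"
    if rto_minutes <= 240:
        return "pilot light (core data replicated, apps launched on demand)"
    return "backup and restore (cheapest, slowest recovery)"

print(choose_dr_pattern(rpo_minutes=0.5, rto_minutes=5))   # mission critical
print(choose_dr_pattern(rpo_minutes=60, rto_minutes=240))  # cost-optimized
```

The tighter the RPO/RTO, the more of the environment must stay provisioned and running, and the more the DR site costs per hour.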
CSS Corp has designed, deployed and managed DR environments on AWS for several customers, ensuring always-on operations by meeting their RPO and RTO requirements, and saving them capital and operational expense through efficient and effective DR implementations, delivering higher savings and greater efficiency across their data recovery processes.

Launch a Campaign in 10 minutes

On Demand Marketing Campaigns on AWS
Marketing is among the most strategic activities for any organisation that has a product or a service to sell. As any successful marketer will endorse, an intelligent online presence is a prerequisite for a successful campaign. To ensure the success of a campaign, a systematic planning process needs to be applied. Typically, in order to launch a product successfully, the team needs to chart out a roadmap for the activity, starting with the IT infrastructure requirements (software and hardware) for a website. The servers, storage, hosting space, networking equipment, applications and robust databases that would drive the campaign usually top the list of requirements, as well as costs.
Budget discussions follow, so that the procurement process can be eased into the timeline; these usually run their course according to management policies. In most organisations the whole process takes between 4 and 8 weeks. This is the typical lead time for launching a good marketing campaign from the ground up.
But once it is launched, the life of a campaign is much shorter. During the planning period as well as after the launch, there is always an inherent wastage of resources, since the infrastructure need not operate at 100% all the time. This wastage, of course, translates into costs. In addition, once launched, a campaign will see business peaks and troughs, and both add cost: peaks, because extra servers are required and must be accurately forecast and provisioned for; troughs, because idle resources still cost money.
Cloud-based web infrastructure is just what is needed to fight these challenges of cost, time, scalability and security. It seamlessly connects users at the lowest possible cost, ensuring speed, ease and economies of scale. In one real-life engagement, with the help of the cloud, the time for launching the website was reduced sevenfold while the annual cost was cut by 60%. That is the impact of a move to the cloud for launching a marketing campaign.
The advantages that the cloud platform offers a marketing campaign are numerous. It allows for a high level of automation that drastically reduces costs, helping deploy new infrastructure in as little as 48 hours (against an earlier 4-week programme), and hence at a fraction of the cost. The pay-per-use model, with its on-demand capacity, is a big part of the advantage too. Along with cost savings at every step, the AWS cloud ensures stability of the infrastructure, with no user disruption even at peak loads. Additional computing capacity can be seamlessly provisioned on the cloud, with the promise of a compliant and secure environment.
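The pay-per-use saving described above can be made concrete with a back-of-the-envelope comparison between provisioning fixed infrastructure for the peak and paying only for server-hours actually used. All prices and durations below are made-up illustrative figures, not AWS rates.

```python
def fixed_cost(servers, hourly_rate, total_hours):
    """Dedicated infrastructure: every server is billed for the whole period."""
    return servers * hourly_rate * total_hours

def on_demand_cost(hourly_rate, usage_profile):
    """Pay only for server-hours actually used; usage_profile is a list of
    (hours, servers) pairs describing how many servers ran for how long."""
    return sum(servers * hours * hourly_rate for hours, servers in usage_profile)

rate = 0.10               # $/server-hour (hypothetical)
campaign_hours = 30 * 24  # a one-month campaign

# Fixed provisioning must carry the 10-server peak for the whole month.
fixed = fixed_cost(servers=10, hourly_rate=rate, total_hours=campaign_hours)
# On demand: 10 servers for a 2-day peak, 2 servers for the rest of the month.
actual = on_demand_cost(rate, [(48, 10), (campaign_hours - 48, 2)])
print(f"fixed: ${fixed:.2f}, on-demand: ${actual:.2f}, "
      f"saving: {100 * (1 - actual / fixed):.0f}%")
```

The shorter and spikier the campaign, the larger this gap becomes, which is why campaign workloads benefit disproportionately from the cloud.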
The support that CSS Corp provides on top of AWS translates into additional advantages over the basic marketing platform, which is essentially the marketing environment needed for hosting websites. Fast, robust and able to meet peak business demands, the marketing platform on AWS can be monitored continuously to ensure the highest threat-mitigation capability. With the cost savings it ensures, using the AWS cloud for marketing campaign activities can be the first step towards a successful campaign, one that has already started saving costs!

Vulnerability management for cloud computing


Vulnerability management is the process of identifying, classifying, remediating and mitigating vulnerabilities, especially in software and firmware. Vulnerabilities are found in every IT asset that is developed, and they have to be identified in the early stages of development and remediated. Vulnerability management is integral to computer security and network security. The Internet, a massive collection of interconnected computers, has many discovered and undiscovered vulnerabilities at multiple layers.
The cloud operates at the software layer and is consumed via the Internet, so anything that affects software or the Internet has a direct impact on the cloud.
In a public IaaS environment, businesses do not get control over the hypervisor because it is a multi-tenant environment. Without hypervisor control, security needs to be deployed as agent-based protection at the VM level, creating self-defending VMs that stay secure in the shared infrastructure and help maintain VM isolation. Although the agents put more of a burden on the host, the economies of scale in a public cloud compensate, and there are additional cost benefits from CAPEX savings and a pay-per-use approach.
Since cloud computing operates at the software layer, and that is where it is effectively consumed, it is all the more critical to identify dependency vulnerabilities and remediate them in a timely manner.
Though vulnerabilities can occur at multiple layers, let us discuss some key vulnerabilities from 2014 that had an impact on cloud computing services and consumers.
  • Heartbleed, at the entry point or point of consumption of a web service, is the responsibility of both the consumer and the service provider, at different layers.
  • Shellshock, at the operating system level, is the responsibility of the consumer.
  • The Xen vulnerability, at the hypervisor layer, is the responsibility of the service provider.
  • In October 2014, security researchers discovered a vulnerability in SSL 3.0 that allows attackers to decrypt encrypted connections to websites. This issue affecting SSLv3, known as POODLE (CVE-2014-3566), is the responsibility of the consumer.
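The layer/responsibility split for these 2014 vulnerabilities can be captured as a simple lookup table. The CVE/advisory identifiers attached to Heartbleed, Shellshock and Xen below are widely published ones added for reference; only POODLE's CVE appears in the text above.

```python
# Vulnerability -> (affected layer, party responsible for remediation).
VULNERABILITIES = {
    "Heartbleed (CVE-2014-0160)": ("TLS entry point",  "consumer and provider"),
    "Shellshock (CVE-2014-6271)": ("operating system", "consumer"),
    "Xen (XSA-108)":              ("hypervisor",       "provider"),
    "POODLE (CVE-2014-3566)":     ("SSLv3 protocol",   "consumer"),
}

def who_remediates(name):
    """Summarise where a vulnerability sits and who must fix it."""
    layer, owner = VULNERABILITIES[name]
    return f"{name}: {layer} layer, remediation owned by {owner}"

for vuln in VULNERABILITIES:
    print(who_remediates(vuln))
```

A table like this, maintained per deployment, makes it immediately clear which alerts the cloud consumer must act on and which can be left to the provider.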

Baseline Cloud Security – an Absolute Imperative

IT security, or computer security, is the top priority of any organization or individual consuming IT as a service. Digital information that is created, stored or processed has to be secured, especially in today's environment, where the threat landscape is ever-expanding and has no defined boundaries.
The full business advantage that a cloud deployment offers can be leveraged only with the right security technologies, tools and processes in place. Since the public cloud has multiple tenants on the same hardware, it is critical that an evolved security strategy be charted out and that all stakeholders have complete clarity on the levels of functionality, the access controls and the perimeter.
To ensure that your specific corner of a public cloud deployment is safe, security can be broadly demarcated into two categories: baseline security and improvised security.

Baseline security

Baseline security comprises the bare minimum security essentials that can be configured using cloud resources themselves. These are mandatory essentials that have to be configured irrespective of the scale of deployment, and they involve no additional cost. This blog provides basic guidelines for configuring baseline security the right way.

Improvised security

Improvised security, as the name indicates, is an improvement on the security configuration through the use of external third-party tools and services. You may incur additional cost based on the services you choose.
We discuss here the baseline security process for the public cloud.
Baseline security for the cloud is an imperative, to be developed along with the cloud footprint that hosts the IT infrastructure or applications. Specific security measures need to be embedded in the process of the cloud build, and in most cases the basic features of baseline security are offered by the cloud vendor.
AWS (a Leader in Gartner's 2014 Magic Quadrant for cloud infrastructure as a service) offers a range of services that can be consumed as utilities. Infrastructure on AWS is secured by soft firewalls called security groups and by virtual network segregation called Virtual Private Clouds (VPCs). Security groups and network ACLs form the core framework of AWS baseline security, as all network-level security configuration happens at these layers.
Identity and Access Management (IAM), multi-factor authentication (MFA), CloudTrail and Trusted Advisor are other security essentials that AWS offers to determine where a little extra security is required. As a general practice, every deployment should be segregated into tiers, with security policies applied to each tier based on its functionality.
However, this is only the basic security wall, and it needs to be customised according to the enterprise client’s requirement.
Baseline security is logically deployed at the soft network layer in the cloud. The security group is a virtual firewall that separates the cloud infrastructure from the user base (or the Internet) and controls access by permitting traffic only on specified ports and from specified IP ranges.
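The security-group behaviour just described can be modelled in a few lines: rules allow traffic by protocol, port range and source CIDR, and anything unmatched is denied. This is a minimal conceptual sketch, not AWS's actual implementation; the example rules are hypothetical.

```python
import ipaddress

def allowed(rules, protocol, port, source_ip):
    """Return True if any rule permits this (protocol, port, source)."""
    for rule in rules:
        if (rule["protocol"] == protocol
                and rule["from_port"] <= port <= rule["to_port"]
                and ipaddress.ip_address(source_ip)
                    in ipaddress.ip_network(rule["cidr"])):
            return True
    return False  # security groups deny everything not explicitly allowed

# Example rule set for a web tier: public HTTPS, SSH only from the
# internal 10.0.0.0/8 range (illustrative values).
web_sg = [
    {"protocol": "tcp", "from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},
    {"protocol": "tcp", "from_port": 22,  "to_port": 22,  "cidr": "10.0.0.0/8"},
]
print(allowed(web_sg, "tcp", 443, "203.0.113.7"))  # public HTTPS -> True
print(allowed(web_sg, "tcp", 22, "203.0.113.7"))   # SSH from internet -> False
```

The deny-by-default behaviour at the end of the loop is the key property: a tier is only as exposed as the rules you explicitly add to its group.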
Amazon Web Services, the leader in the public cloud sector, also offers a Virtual Private Cloud: the enterprise's own virtual data center on the public cloud, with very tight access and user controls. This can be customised with its own security meshes, trusted IP address ranges, subnets, route tables and network gateways. Here it is possible to have multiple layers of security designed to achieve complete control of access to the Amazon EC2 instances in each subnet.
Trusted Advisor (full features are available to customers signed up for AWS premium support) is a global dashboard of vulnerabilities and loopholes in the infrastructure deployed on the AWS cloud, ensuring constant monitoring and alerts on configuration gaps that could develop into threats or risks.
Apart from the infrastructure, virtualization and network layers, there is another layer in the cloud: the API layer, and it is important to secure it. Identity and Access Management (IAM) makes it possible to create the users and groups that need access to AWS resources, thereby eliminating unauthorized external access and enabling controlled internal access. AWS CloudTrail is another service that helps with API-level logging. IAM and CloudTrail work hand in hand to provide a virtual layer of security at the API layer.
In order to customise baseline security to their needs, enterprises need an efficient cloud integrator: a support organization with cloud expertise and security consciousness that can ensure maximum security on the cloud while preserving the optimal utility of the public cloud for the enterprise.

Staying clear of the fog… while up in the clouds

Cloud computing is by now an everyday term, one of the most commonly used in every technology forum.
The cloud has long since ceased to be an emerging technology, and there is no longer any convincing required for enterprise to understand its virtue. While there is clarity on its value, its versatility is still being discovered by most early adopters.
Cloud computing can simply be described as computing offered as a utility service delivered via the Internet.
By definition, “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction,” according to NIST.
Characteristics of Cloud Computing:
  • On-demand self-service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service
Essentially, cloud computing as a form of enterprise process is characterised by certain clearly distinctive features. These make it extremely scalable and agile for enterprise needs. On-demand self-service allows process flexibility, while broad network access, resource pooling and the rapid elasticity with which it can be scaled add value to its ROI. Of course, the biggest advantage of a cloud adoption comes from the cost savings that the measured service model provides.
Measured service is the characteristic that most distinguishes cloud-based infrastructure from physical IT infrastructure, and it is the layer that can deliver the biggest cost savings with its pay-as-you-go model.

Service Models:

Based on the layer at which the cloud service is consumed, cloud computing can be classified as Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).
Infrastructure as a Service is a provisioning model in which an organization outsources its IT requirements as a whole: the equipment used to support operations, including storage, hardware, servers and networking components. IaaS gives the enterprise the flexibility to use precisely the amount of cloud capacity it needs in a public cloud, and to pay only for that. The service provider is responsible for housing, running and maintaining the equipment, and the enterprise user pays per use.

Deployment Models:

Clouds can be further classified into four deployment models based on the design and implementation of the solution: private cloud, public cloud, hybrid cloud and community cloud. A cloud is called a "public cloud" when the services are rendered over a network that is open for public use. Public cloud services may be free or offered on a pay-per-usage model. Amazon Web Services' public cloud leads the pack here.
It is immediately clear that if an enterprise claims a corner of the public cloud, some extra security processes will need to be in place, since the cloud is a multi-tenant space. The responsibility for security therefore rests with both the vendor and the user enterprise. Due to the rapidly evolving nature of the cloud landscape, and of the threats that come with newer means of accessing data, security implementation is an iterative and evolving exercise. Devising the best possible strategy and implementing it at multiple layers therefore becomes a shared responsibility.
To obtain complete clarity on this division of responsibility, the SLAs governing the engagement between the vendor and the enterprise have to be clearly laid down. The vendor is responsible for all the hard layers of infrastructure and the physical devices beneath, as well as the hypervisor, or virtualization layer. The enterprise customer, on the other hand, is responsible for the security of the operating system and the applications that run on it. The soft layer houses the networking configuration, so network security is also the enterprise's responsibility.
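The division of responsibility just laid out can be written down as a small lookup table, which is a useful artifact to attach to the SLA itself. The layer names below simply mirror the text; any layer not explicitly assigned should be flagged for clarification rather than assumed.

```python
# Shared-responsibility split: layer -> party that secures it.
RESPONSIBILITY = {
    "physical datacenter":  "vendor",
    "hardware":             "vendor",
    "hypervisor":           "vendor",
    "operating system":     "enterprise",
    "applications":         "enterprise",
    "network (soft layer)": "enterprise",
}

def owner(layer):
    """Look up who secures a layer; unknown layers must be settled in the SLA."""
    return RESPONSIBILITY.get(layer, "undefined: clarify in the SLA")

print(owner("hypervisor"))         # vendor
print(owner("operating system"))   # enterprise
print(owner("api gateway config")) # not assigned above -> flag it
```

Making the "undefined" case explicit is the point: gaps in the matrix, not the assigned layers, are where security incidents on the cloud tend to originate.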
Thus divided, the security of a cloud infrastructure can be completely covered. Of course, some baseline security features need to be a part of the cloud implementation and are provided by the vendor, but more on that in our next blog.

How High Will The Cloud Take Us?

A couple of years ago, the IT community ran out of superlatives to use for benefits that cloud computing brought in. It was the proverbial apple that would fall on our top lines…and make everything all right again. The Cloud would help cut costs, streamline processes, and provide secure, infinite space for hosting apps, storage and everything else in between.
Then the hype settled, adoption began, slowly at first and now, IT teams have a steady relationship with the Cloud – it’s here to stay, love it or have doubts on its faithfulness.
Today, after the euphoria has cleared, some points come up that could prick the bubble, or hold up the tree. Things that will have to be accepted, like the security risks in this galloping adoption and the question of integration with the existing environment, are now at the forefront of debates. The community is sitting up and giving increasing attention to security applications for the cloud. This has created a blossoming market for cloud security applications, and at this point it is largely a matter of choosing the correct one for your enterprise, to ensure that maximum utility is derived from the adoption without undue risk.
However, looking beyond the hype, doubts still persist; it is like sharing an heir, especially the one who will inherit the title, with an outsider. The new kid on the block is amazing, but as a service platform, how will it integrate into a heterogeneous application environment? How will it deliver on the market's expectations of secure, consistently available, high-performing applications? For all its efficiency, how can an enterprise decide the correct mix: what part private, what part public, how much hybrid? What cocktail of IaaS, PaaS and SaaS will be the best optimisation model? The obvious solution is to chalk out a clear, balanced enterprise framework that takes into account all the advantages, assuages all the negatives and then decides which way to grow, and how much!
What should be the points to ponder before porting business critical applications and operations on a Cloud environment? How much risk can be taken, and where is the line to be drawn? There needs to be a clear framework for Cloud adoption in any enterprise or the loose ends can trip up the entire migration. Savings and all the benefits can only accrue when the entire adoption process takes care of every concern.
What is imperative for regulating all of these expectations and opinions is a set of cloud-based service metrics that lay down the SLAs, together with a clear governance structure that can ensure compliance.