NewswireToday
Boulder, CO, United States, 2010/02/24 - Amadeus Consulting, a custom software and application development company, discusses cloud computing: the ability to access large-scale processing and storage at little cost.
Businesses can already subscribe to specific applications in "the cloud" (such as e-mail), but there is growing momentum to provide raw processing and storage capability so that any custom software application can run remotely without typical constraints. The number and diversity of applications moving online is increasing.
To support this demand, the underlying infrastructure and business tools for hosting online applications are maturing; the two feed off each other, accelerating overall adoption. Cloud computing allows code and storage to exist on the Internet ("the cloud") as a service running on a collection of devices that, by design, appear as a single device. This abstracts software from hardware concerns. Cloud computing has existed for some time for research purposes, but general-purpose, business-grade clouds (services that include a service level agreement, or SLA) are fairly recent - notably with Amazon entering the market in 2006 with its Elastic Compute Cloud (EC2), initially offered as a beta. (1)
Most applications and custom software in use today revolve around storage capabilities more than processing - although processing is what most businesses really need to be dynamic. Examples include real-time off-site backups and disaster recovery, massive image storage, and audio/video streaming such as Amazon's Unbox.
Performance-hungry applications such as financial number-crunching and design rendering also use large computing facilities such as rendering farms. Eventually, most Web-based applications now found in data centers will run in the cloud. Cloud computing offers several benefits:
• Dynamic capacity - computing resources can be allocated up or down on the fly, even by the software or application itself
• Dynamic instant sizing - an instance can be built instantly with virtually any virtual hardware configuration
• Reliability - dependably managing and backing up thousands of servers online requires the provider to maintain the highest level of controls and standards
• Network portability - hardware abstraction removes or reduces network constraints such as hard-coded IP addresses
• Geographical redundancy - any company with the scale to offer a cloud computing service is, by necessity, geographically redundant, so data and services can survive the loss of a single site
• Great price - partly promotional pricing for a new service, but mostly massive economies of scale, making the price nearly impossible to replicate in-house. Further savings come from no longer having to build systems sized for maximum load.
• Convenience - enter a credit card number and a system is set up in a couple of minutes. This appeals to anyone who has ever worked with a data center to configure a massive disk system, or anything beyond a couple of servers in a simple configuration, and knows how time-consuming and complex it is.
• No concerns about correctly sizing hardware for maximum loads, which are frequently caused by unpredictable business cycles and macroeconomic forces
• Spending within a single instance can be allocated easily and correctly among options such as memory, disk, and processing power
• Off-site storage gives business customers the ability to increase redundancy by keeping data remotely located
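The dynamic-capacity benefit above - letting the software itself request more or fewer machines - can be sketched as a simple scaling rule. The function name, thresholds, and growth factors here are illustrative assumptions for the sketch, not any provider's actual API:

```python
# Illustrative autoscaling rule: pick an instance count from observed load.
# Thresholds and names are assumptions, not a real cloud provider's API.

def desired_instances(current: int, cpu_utilization: float,
                      low: float = 0.30, high: float = 0.70,
                      minimum: int = 1, maximum: int = 20) -> int:
    """Scale up when average CPU is high, down when it is low."""
    if cpu_utilization > high:
        target = current * 2           # grow aggressively under load
    elif cpu_utilization < low:
        target = max(current // 2, 1)  # shrink when mostly idle
    else:
        target = current               # stay put in the comfort band
    return min(max(target, minimum), maximum)

# A monitoring job could call this and ask the provider to adjust capacity:
print(desired_instances(4, 0.85))  # heavy load -> 8
print(desired_instances(4, 0.10))  # idle       -> 2
```

The `minimum` and `maximum` bounds matter: they keep a misbehaving feedback loop from shrinking the service to nothing or growing it without limit.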
Cloud computing also has drawbacks:
• It may be harder for some OS/programming stacks to fully convert to cloud computing. Amazon announced in late 2008 that Windows would be supported, even though commercial cloud systems supporting Linux have been available since 2002. That does not stop other OS/programming stacks from using services provided in the cloud; it only prevents those stacks from running in the cloud.
• There is less control over the hardware environment than in a traditional data center. The subscriber must rely entirely on the provider to physically secure the hardware and control access to it. Unlike in a traditional data center, the subscriber cannot augment the hardware with additional physical and logical layers of security.
• Controls must be put in place to mitigate bad code: unlike in a traditional hardware environment, bad code is not physically constrained to a single machine, and if the system is configured to expand on demand it could end up consuming vast amounts of resources.
• Although the idea is to isolate developers from hardware constraints and concerns, most developers do not grasp what is required to scale a system efficiently to a large processing footprint.
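The runaway-code concern above suggests an obvious control: a hard spending ceiling that on-demand expansion cannot cross. This is a minimal sketch of such a guard; the class name, rates, and budget figures are assumptions for illustration, not a real billing API:

```python
# Sketch of a spending guard for on-demand expansion: refuse to add capacity
# once a budget ceiling is reached. Names and prices are illustrative.

class BudgetGuard:
    def __init__(self, hourly_rate: float, budget: float):
        self.hourly_rate = hourly_rate  # cost per instance-hour
        self.budget = budget            # hard ceiling for the billing period
        self.spent = 0.0

    def approve(self, instances: int, hours: float = 1.0) -> bool:
        """Allow a scale-out request only if it stays within budget."""
        cost = instances * hours * self.hourly_rate
        if self.spent + cost > self.budget:
            return False                # runaway code hits the cap, not the card
        self.spent += cost
        return True

guard = BudgetGuard(hourly_rate=0.10, budget=5.00)
print(guard.approve(10))   # $1.00 of capacity -> True
print(guard.approve(100))  # $10.00 more would exceed $5.00 -> False
```

Putting the check in front of every expansion request means a loop that endlessly spawns instances fails safely instead of expensively.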
Why technologists care:
• Hardware capabilities and their cost directly influence programming paradigms
• Infrastructure support staff gain an alternative to the traditional data center
• No hassling with complex infrastructure to scale: easy to set up, pay as you go, high availability, and no long-term commitments
• Allows different distributed providers to do what they do best (division of labor). It is possible to have one system run the code and another remote system store the data.
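The division-of-labor point above - one system runs the code while another, remote system stores the data - hinges on the application talking to storage through a narrow interface. This sketch uses an in-memory stand-in for the remote store; all names here are hypothetical, not any provider's API:

```python
# Division-of-labor sketch: application logic talks to storage through a
# narrow interface, so the bytes can live anywhere -- a local disk today,
# a remote storage provider tomorrow. All names are illustrative.

from abc import ABC, abstractmethod

class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Stand-in for a remote storage service in this sketch."""
    def __init__(self):
        self._blobs = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def process_order(store: BlobStore, order_id: str, payload: bytes) -> bytes:
    """The compute runs here; where the bytes live is the store's concern."""
    store.put(order_id, payload)
    return store.get(order_id)

store = InMemoryStore()
print(process_order(store, "order-42", b"2 widgets"))  # b'2 widgets'
```

Swapping `InMemoryStore` for a class that calls a remote storage provider changes no application code, which is exactly the portability the bullet describes.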
Although only a few companies (Google, Microsoft, Amazon) have the vast resources and expertise required to construct the underlying infrastructure, other solutions provide subsets of the benefits:
• Virtual machines (VMs) are being adopted rapidly as the technology matures. Xen, Microsoft, and VMware products are great for running multiple environments on one machine but cannot span multiple machines. They do, however, allow a great deal of hardware abstraction, most notably demonstrated by the ease with which an IT administrator can move a system from machine to machine (in some cases even while users are attached to a running application).
• Supercomputers solve some of the world's most complex problems, such as modeling weather, but they require special coding and typically run only a few programs at a time. Supercomputers are used mostly by the military and research facilities.
• Volunteer peer-to-peer networks such as SETI@home demonstrate massive distributed computational power, but volunteer networks are hard to provision and control, so they are used mostly for research.
About Amadeus Consulting
Amadeus Consulting (AmadeusConsulting.com) is a custom software development company dedicated to creating intelligent technology solutions with successful business results. We are a Microsoft Gold Certified Partner, a winner of the Microsoft Office XP Challenge, and hold Microsoft Partner Competencies in Custom Software and Data Management Solutions. Amadeus Consulting specializes in custom software applications such as content management systems, e-commerce, surveys, social networking sites, data collection and management, browser plug-ins, and more.
Keywords: Virtual Machines (VM), Grid computing, utility computing, "The Cloud", Hardware as a service (HaaS), Amazon, Google, Nirvanix.