- Security of data
- Network cost of getting data to the cloud
- Managing capacity/performance
- Monitoring for availability and diagnosing issues
I believe that with standard metrics (CPU, internal messaging, and I/O) we can forecast the required capacity and associated performance by combining performance testing with capacity modeling in the lab. I have used a similar approach to define capacity for bare-metal environments, and I believe it provides reasonable estimates for private clouds as well. Let me know if you are interested in discussing this.
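As a rough illustration of that modeling step, here is a minimal sketch that fits lab measurements of CPU utilization against request throughput and extrapolates a host count. It assumes roughly linear CPU scaling over the tested range; the sample numbers, the 75% headroom target, and the metric choices are all hypothetical.

```python
# Minimal capacity-modeling sketch: fit lab samples of CPU utilization
# vs. request throughput, then estimate where a host saturates.
# All numbers are hypothetical.
import numpy as np

# Lab performance-test samples: (requests/sec, avg CPU utilization %)
throughput = np.array([100, 200, 400, 800, 1200])
cpu_pct = np.array([8.0, 15.5, 31.0, 62.5, 93.0])

# Assume roughly linear scaling in the tested range.
slope, intercept = np.polyfit(throughput, cpu_pct, 1)

# Capacity per host at a 75% CPU headroom target (hypothetical policy).
headroom_pct = 75.0
max_rps_per_host = (headroom_pct - intercept) / slope

# Hosts needed for a forecast peak of 10,000 requests/sec.
forecast_peak_rps = 10_000
hosts_needed = int(np.ceil(forecast_peak_rps / max_rps_per_host))
print(f"~{max_rps_per_host:.0f} rps/host at {headroom_pct}% CPU; "
      f"{hosts_needed} hosts for a {forecast_peak_rps} rps peak")
```

In practice you would validate the linearity assumption against additional lab runs, and repeat the fit for messaging and I/O metrics, before trusting the extrapolation.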
Monitoring for availability and diagnosing issues can be addressed by colocating a privately managed subset of the production environment. Alternatively, if you are using Gigaspaces, Dynatrace has a product offering that they indicate lets you view utilization of your Gigaspaces XAP implementation on Amazon's EC2. You would still need to manually correlate the end-user experience with the utilization shown in Dynatrace. See http://blog.dynatrace.com/2009/05/07/proof-of-concept-dynatrace-provides-cloud-service-monitoring-and-root-cause-analysis-for-gigaspaces/.
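For that manual correlation step, a simple timestamp join is usually enough. The sketch below is illustrative only; the data shapes, the one-second threshold, and the sample values are my assumptions, not anything exposed by a Dynatrace or Gigaspaces API.

```python
# Hypothetical sketch of manually correlating end-user response times
# with utilization samples by nearest timestamp. Neither data set
# comes from a real Dynatrace or Gigaspaces API.
from bisect import bisect_left

# (epoch_seconds, value) pairs, both sorted by time.
user_response_ms = [(1000, 120), (1060, 450), (1120, 1800), (1180, 140)]
cpu_utilization = [(995, 40.0), (1055, 55.0), (1115, 97.0), (1175, 42.0)]

util_times = [t for t, _ in cpu_utilization]

def nearest_utilization(ts):
    """Return the utilization sample closest in time to ts."""
    i = bisect_left(util_times, ts)
    candidates = cpu_utilization[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - ts))[1]

# Flag slow user requests and the utilization observed around them.
for ts, ms in user_response_ms:
    if ms > 1000:  # hypothetical SLA threshold of 1 second
        print(f"slow request at t={ts}: {ms} ms, CPU ~{nearest_utilization(ts)}%")
```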
The network cost of getting data to the cloud can be addressed in several ways and measured accurately; at a minimum, the transfer component can be estimated from measured data volumes and the provider's per-GB rates.
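A back-of-the-envelope version of that estimate follows; the volumes and rates are placeholders, not any provider's actual pricing.

```python
# Back-of-the-envelope network transfer cost estimate. All figures are
# placeholders; substitute measured volumes and your provider's rates.
monthly_gb_in = 500.0      # measured data sent to the cloud
monthly_gb_out = 2_000.0   # measured data returned to users
rate_in_per_gb = 0.10      # hypothetical $/GB inbound
rate_out_per_gb = 0.17     # hypothetical $/GB outbound

monthly_cost = (monthly_gb_in * rate_in_per_gb
                + monthly_gb_out * rate_out_per_gb)
print(f"Estimated monthly transfer cost: ${monthly_cost:,.2f}")
```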
I believe that security of data will be addressed through defined processes and auditing of cloud providers, in the same way that we address it today for other outsourced services, e.g., with a SAS 70 audit.