Dan,
Good line of thought. Couple of points:
a) Power efficiency – I think an 8-core machine (with 7.5 VM processes) would use less power than 8 small machines (say, powered by VIA ITX boards). But if one is using only one VM and one process, then the rest of the power is wasted (rough numbers after point f)
b) Yes, from an enterprise IT application infrastructure perspective, virtualization is a short-term solution, with a cloud infrastructure as the long-term goal
c) Virtualization, in some sense, is getting more granularity out of a hardware box (for better utilization), and that would eventually shift to the infrastructure providers
d) BTW, IMHO, elasticity was never the goal of virtualization, nor can we achieve elasticity by virtualization – they are orthogonal
e) The same goes for scale and ad-hocness
f) And as you point out, virtualization has serious drawbacks – like multi-core and the extra overhead.
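A rough back-of-the-envelope for point (a), with wattages that are purely illustrative assumptions rather than measured figures: say the 8-core server draws ~300 W and a VIA ITX box ~50 W.

  8 ITX boxes: 8 x 50 W = 400 W  vs  one 8-core server at ~300 W  -> consolidation wins at full load
  1 ITX box:   1 x 50 W =  50 W  vs  the same server at ~300 W    -> the small box wins with one workload

The comparison flips entirely on utilization, which is the caveat in point (a).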
Cheers & happy holidays
<k/>
So, do we expect Google to take 'Native Client' and make 'Native
Cloud' out of it once we're done beta testing it for them?
--chris
This is accomplished by allowing the Guest to run in Ring 0 of the processor.
Virtualization is a means to disconnect the running application from the underlying hardware. It's an enabling technology used by all the cloud computing providers.
We select servers based upon the ratio of CPU to RAM to power-and-space cost, and that changes over time. Virtualization also gives us the freedom to select hardware at today's prices, and not feel locked into one vendor.
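One way to read that selection metric is as a figure of merit; a minimal sketch in Java, where the formula and every number are my own guesses at what such a ratio could look like, not the poster's actual model:

public class ServerScore {
    // Hypothetical figure of merit: useful capacity per dollar, where
    // dollars include purchase price plus lifetime power and space.
    static double score(double cores, double ramGB,
                        double priceUSD, double powerUSD, double spaceUSD) {
        return (cores * ramGB) / (priceUSD + powerUSD + spaceUSD);
    }

    public static void main(String[] args) {
        // Made-up candidates; the winner shifts as prices shift over time,
        // which is why not being locked into one vendor matters.
        System.out.println("vendor A: " + score(8, 32, 4000, 900, 600));
        System.out.println("vendor B: " + score(16, 64, 9500, 1600, 600));
    }
}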
So I buy some of these arguments, but by no means all. If you believe folks like James Hamilton, datacenter buildout at scale means you can reevaluate almost all your hardware assumptions without adding significant cost to the buildout, and with the potential of order-of-magnitude-sized operational benefits. It's hard for me to believe that current enterprise-focused hardware (e.g. blade servers, big Sun or IBM boxes, etc.) is what you'd choose.
gb>> “Commodity” server basically means x86. We are leveraging the open nature of the Intel x86 instruction set across many systems and abstracting the differences through OS virtualization. Ultimately you want zero variation.
Virtualization (assumed to mean os virtualization) clearly can ease operational cost for a legacy application stack, but it is a pretty blunt instrument to apply to things like workload management - you're essentially maximizing the size of your movable state, introducing an incredibly coarse-grained locking infrastructure, and adding considerable management complexity (eg blowing out your spanning trees just to preserve the mac you had, because the os doesn't expect that to change in a single tick) in exchange for preserving your current architecture. That's more than just a 5% runtime penalty.
gb>> Workload management is all the VMM does. It manages the memory space and schedules itself across available cores (Xen is a little more configurable) as efficiently as possible, since most applications are not designed very well to be multi-processor/multi-threaded. That drives CPU utilization higher, allowing for a more efficient data center. The operational costs are not so linear – reducing 10K physical servers to 5K still requires management of 10K instances – but the upside is some of the energy savings, though that nowhere near compares to the cost of a data center, which most enterprises are growing out of....
gb>> Large-scale enterprises are already spanning geographies at Layer 2; with the adoption of Data Center Bridging and L2MP we will eliminate spanning tree and scale to larger L2 domains. Other technologies like TRILL are on the way to the data center as well. We don’t need to hang on to the MAC, but my point about the overhead was in runtime: as you point out, during a VM movement state does have to be migrated and instructions queued, for the benefit of running less than Nx2 for redundancy.
I'm wondering if at some point "the cloud" bifurcates into a space optimized for serving legacy stacks, and another optimized for more modern designs? If the latter, I'm wondering if the dominant abstraction is lower-level (think context switching, cache-line management, etc), or higher-level (think whatever griddy APIs you prefer).
Or maybe, if the vibe I'm getting here is on-target, advances in the lower- and higher-level approaches will have to be complementary, i.e. building an arbitrarily complex software stack is "free" as long as it can a) then be stamped out at massive scale, and b) address the levels of granularity in workload management and the ease-of-consumption issues which hardware advances alone cannot.
gb>> You are seeing the evolution now. The “cloud” is multi-dimensional; some view it as having six layers... I like to just think of three: Infrastructure, Platform, Service...
I certainly think that hardware in cloud data centers will change, for
example:
1. If reliability is being handled outside the hardware, then things
like redundant PSUs may not make sense; they suck power and do little
for the overall availability of the cloud.
2. If storage is consolidated, then why have space for 6-8 drives in
each server?
3. External I/O virtualization in the form of MR-IOV expansion boxes
means you don't need to put room for lots of PCI slots in the
application servers.
> Virtualization (assumed to mean os virtualization) clearly can ease
> operational cost for a legacy application stack, but it is a pretty
> blunt instrument to apply to things like workload management - you're
> essentially maximizing the size of your movable state, introducing an
> incredibly coarse-grained locking infrastructure, and adding
> considerable management complexity (eg blowing out your spanning trees
> just to preserve the mac you had, because the os doesn't expect that to
> change in a single tick) in exchange for preserving your current
> architecture. That's more than just a 5% runtime penalty.
Yeah, but it's dirt cheap to do and well understood, and sometimes that's
enough ;-)
>
> Krishna, as I'm totally ignorant on multi-core designs - is the primary
> power-saving benefit purely a packaging issue, eg sharing a power line
> amongst the cores vs adding a whole additional "card", or is it a more
> integrated thing where there are dynamic runtime benefits derived from
> shared componentry? ie if I have an 8-core cpu in a huge datacenter
> buildout, is it equivalent to expose that thing as if it were 8 separate
> "computers" vs one 8-cpu "computer"?
>
>
Multi-core is about two things:
1. The GHz wall means that we can't simply make the CPU faster every year
by bumping the clock rate. Using multiple cores in a single CPU lets you
get some benefit from naturally parallel/multi-threaded bits of code
2. Putting multiple cores in a single CPU means that you can pack an
awful lot of processing cores in a single box which is attractive for
virtualization.
That's one of the additional benefits of virtualization. Without
virtualization I've got one OS hogging all the processors and probably
not making very effective use of them regardless of whether they are
eight discrete chips or a single chip with 8 cores.
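To make the parallelism point concrete, here is a minimal Java sketch (my own illustration, not from the thread): it asks the JVM how many cores the OS exposes and splits a naturally parallel job across them – the kind of code that benefits whether those are eight discrete chips or one chip with 8 cores.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class CoreDemo {
    public static void main(String[] args) throws Exception {
        // Ask the JVM how many cores the OS exposes; on an 8-core
        // CPU (or an 8-way VM) this is typically 8.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Embarrassingly parallel job: sum 0..n-1, one chunk per core.
        long n = 1_000_000_000L;
        long chunk = n / cores;
        List<Future<Long>> parts = new ArrayList<>();
        for (int i = 0; i < cores; i++) {
            final long lo = i * chunk;
            final long hi = (i == cores - 1) ? n : lo + chunk;
            parts.add(pool.submit(() -> {
                long s = 0;
                for (long x = lo; x < hi; x++) s += x;
                return s;
            }));
        }
        long total = 0;
        for (Future<Long> f : parts) total += f.get();
        System.out.println(cores + " cores, sum = " + total);
        pool.shutdown();
    }
}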
--
Nik Simpson
>
> If the goals are to have smarter software and maximize utilization (or
> minimize power consumption for equivalent compute capacity), then why
> introduce the constant runtime overhead of virtualization instead of,
> eg using smaller more power-efficient compute-unit designs and making
> the hardware controllable by software?
What you call 'smaller units controllable by software' is, as you know,
precisely the purpose of an OS kernel, but commodity boxes do not
provide that many such 'smaller units'. That is the problem Azul
(http://www.azulsystems.com) solves by packing in excess of 760 compute
units (and an equal number of GB) in one box and having its specialized
software (OS) control allocation of those units on demand and by policy.
The only limitation there is that it only knows how to execute Java.
You can have one JVM take all the cores in the box (when you have a
highly parallelized application) or several VMs dynamically share those
cores and memory subject to policy constraints. So, in your
terminology, an Azul domain with one or more Azul boxes becomes a
'griddized' cloud with a special purpose. Obviously, this won't serve
well for single-threaded applications.
BTW, this arrangement also gives the opportunity to have a huge heap for
one VM with practically no GC pause!
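In plain Java terms, the 'all the cores, or a policy-constrained share' idea might look like the sketch below; the policy cap is a made-up parameter for illustration, not an Azul API:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PolicyPool {
    // Hypothetical policy knob: the most cores this VM may claim.
    // Azul enforces such policy in its own OS; here we just cap a pool.
    static ExecutorService poolFor(int policyMaxCores) {
        int cores = Runtime.getRuntime().availableProcessors();
        int granted = Math.min(cores, policyMaxCores);
        return Executors.newFixedThreadPool(granted);
    }

    public static void main(String[] args) {
        // One highly parallelized app takes every core in the box...
        ExecutorService greedy = poolFor(Integer.MAX_VALUE);
        // ...or several VMs share, each held to its policy slice.
        ExecutorService shared = poolFor(4);
        greedy.shutdown();
        shared.shutdown();
    }
}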
rgds,
S.R.
Now I have to chime in ;o) (Dan, see what you got me into!)
I did not say elasticity cannot be achieved by virtualization (notice the double negative ;o)); I also didn’t say that for elasticity, virtualization is needed! My point was that they are orthogonal – elasticity can be achieved with or without virtualization. In other words, virtualization is neither a necessity nor a prerequisite for elasticity.
Cisco has a product called VFrame which does provisioning, and in the old days (maybe even now) it turns real hardware on and off to get additional capacity. (This is just an example – I am neither advocating for nor against VFrame in this context.) You can have a perfectly good cloud that way.
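As a toy sketch of elasticity without a hypervisor – powerOn/powerOff below are hypothetical stand-ins for whatever out-of-band mechanism (IPMI, wake-on-LAN, a product like VFrame) actually flips the machines:

import java.util.ArrayDeque;
import java.util.Deque;

public class BareMetalElasticity {
    // Hypothetical stand-ins for real out-of-band power control.
    static void powerOn(String host)  { System.out.println("power on  " + host); }
    static void powerOff(String host) { System.out.println("power off " + host); }

    public static void main(String[] args) {
        Deque<String> idle = new ArrayDeque<>();
        idle.push("node-02"); idle.push("node-03"); idle.push("node-04");
        Deque<String> active = new ArrayDeque<>();
        active.push("node-01");

        double load = 0.92; // fraction of active capacity in use

        // Elasticity loop: add or shed real machines on load; no VMs involved.
        if (load > 0.80 && !idle.isEmpty()) {
            String h = idle.pop(); powerOn(h); active.push(h);
        } else if (load < 0.20 && active.size() > 1) {
            String h = active.pop(); powerOff(h); idle.push(h);
        }
    }
}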
Having said that, the hypervisors give us a couple of things: first, the ability to move VMs around (I am not sure they also give us VM expandability, as that requires a bit more than just a VM change – applications need to reallocate buffers, move boundaries, etc.); second, more granularity and efficient use of hardware. I am not aware of any technology to pickle a running machine and resurrect it somewhere else as easily as the VM motion technologies.
Going back to Dan’s original question: for an application – say a Hadoop cluster – would virtualization give any advantage? The same goes for Utpal’s examples of grids and clusters.
Let me stop before I get into more trouble ;o)
Cheers
<k/>
BTW, the logic which says that because Amazon is based on Xen, virtualization is required for elasticity is not logical. Amazon also sells books, and that doesn’t mean that for elasticity one should sell books ;o) But the second-order effects of having an infrastructure which sells lots of books do help (engineers who understand scalability, idle resources when books are not in high demand, a retail mindset, and so forth)