SIGGRAPH tends to be an interesting mix of announcements and demonstrations. Major vendors such as NVIDIA prefer to make their announcements at their own trade shows – or at the very least at a more press-focused show – but amidst the real-world demonstrations you’ll find a few new things that aren’t quite being announced but are being previewed for the very first time. In this case NVIDIA is using SIGGRAPH to present two new technologies to its workstation user base: Project Maximus and Quadro Virtual Graphics Technology (AKA Monterey Technology). While we aren’t attending SIGGRAPH this year, we did have an opportunity to talk with NVIDIA and discuss these new technologies.

The first technology NVIDIA is previewing for the SIGGRAPH audience is called Project Maximus. Fundamentally, Maximus is about using Quadro and Tesla cards together in a workstation in order to play to their respective strengths of graphics and compute. Rendering software such as 3ds Max can exploit both aspects of a GPU, with modeling taking place in the graphics domain while animation and final rendering (i.e. ray-tracing) take place in the compute domain. In certain configurations all of this can be done on the GPU, and deciding when and where each of those tasks should run is what Maximus is all about.


Image Courtesy Engadget

Ray-tracing rendering is typically an “offline” process – it can take hours to render a scene, making real-time manipulation of a scene difficult. Ray-tracing renderers that run on the GPU, such as NVIDIA’s OptiX, can complete this work much faster; while they still can’t completely render a scene in real time, they’re fast enough to give a decent idea of what the final scene will look like, since only a small number of rays are needed to produce a rough-quality rendering in real time. We’ve seen this specific scenario in action during the launch of the GTX 480 with NVIDIA’s Design Garage demo, which allows for real-time manipulation of a car using this model + ray-trace mechanism.
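To get a feel for why a small ray budget is enough for an interactive preview, here's a toy Monte Carlo sketch in Python. Everything in it is invented for illustration – the "scene" is a single pixel whose true radiance is 0.5, and this is in no way OptiX code – but it captures the basic trade-off: averaging more rays reduces noise at a linearly higher cost.

```python
import random

def render_pixel(num_rays: int, seed: int = 42) -> float:
    """Average num_rays noisy per-ray estimates of a pixel whose true
    radiance is 0.5 (a made-up scene). More rays -> less noise, at a
    linearly higher cost -- the trade-off behind rough real-time
    previews versus clean offline renders."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_rays):
        total += 0.5 + rng.uniform(-0.2, 0.2)  # one fake "traced ray"
    return total / num_rays

# 16 rays: a cheap, noisy preview. 4096 rays: ~256x the work, near-converged.
print(render_pixel(16), render_pixel(4096))
```

A progressive renderer exploits exactly this: show the 16-ray estimate immediately while the scene is being manipulated, then keep accumulating rays toward the converged result once the camera stops moving.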

The issue with Design Garage, as with 3ds Max, is that ray-tracing is pokey and bogs down the GPU. The GPU needs to spend most of its time ray-tracing, and as a result the interface runs at only a few frames per second. Even with the relatively fast context switching of the Fermi generation you can’t escape a simple fact: two resource-intensive tasks run better in parallel on two processors than they do context switching on one. It’s the same reason that virtually all CPUs are multicore these days. As a result, if you split up the work so that one GPU handles all the graphical/modeling tasks while a second GPU handles the ray-tracing/compute tasks, you can manipulate an object in real time at a fluid interface framerate while ray-tracing the result at full speed.
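The scheduling argument can be made concrete with a little arithmetic. The numbers below are invented, and the model is deliberately idealized (real behavior depends on driver scheduling and context-switch overhead), but it shows how a UI that would run at 200fps on a dedicated GPU collapses once ray-tracing claims most of a shared GPU's time:

```python
def ui_fps(ui_ms_per_frame: float, compute_share: float) -> float:
    """Idealized time-slicing model: if a compute task occupies
    compute_share of one GPU's time, the UI only gets the remainder,
    so each UI frame stretches proportionally (context-switch cost
    is ignored, which flatters the single-GPU case)."""
    effective_ms = ui_ms_per_frame / (1.0 - compute_share)
    return 1000.0 / effective_ms

# A 5 ms UI frame runs at 200 fps on a dedicated GPU...
print(ui_fps(5.0, 0.0))
# ...but if ray-tracing claims 90% of a shared GPU, the UI drops to ~20 fps.
print(ui_fps(5.0, 0.9))
```

With two GPUs, both terms stay at their dedicated-hardware values: the modeling GPU keeps its full frame budget and the compute GPU ray-traces at 100% occupancy.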

So where does Maximus fit into this? By making the setup more economical. The obvious implementation of a multi-GPU workstation is to double up on Quadro cards. High-end Quadro cards are just as compute capable as Tesla cards – the Tesla C2070 is clocked exactly the same as a Quadro 6000 – but a Quadro 6000 is over $1000 more expensive than a Tesla C2070 on the open market. Since the ray-tracing task is entirely a compute task there’s no need for the second card to be a more expensive Quadro card when it could be a cheaper Tesla card, and that’s Maximus in a nutshell: using a Tesla card as a dedicated compute GPU to assist a Quadro card. It’s not necessarily groundbreaking, but for NVIDIA’s customers it would be a cheaper way to do real-time modeling and ray-tracing together.

The preview aspect of this technology is that while the fundamentals are already in place, workstation setups are a complex mixture of applications, drivers, and hardware, all of which are typically certified together in a single configuration. 3ds Max and other applications need to add support for using a Tesla card in this manner, and then NVIDIA needs to certify a driver set that will work with Tesla and Quadro, after which a system vendor will certify an entire system. As a result Maximus is still a work in progress, and won’t be “finished” until the software is ready and whole systems are certified.

Moving on, the other technology NVIDIA is previewing at SIGGRAPH is Quadro Virtual Graphics Technology, also known as Monterey Technology. Though I disapprove of the vast abuse of the term “cloud” in this industry, NVIDIA’s tagline is the most meaningful single-sentence description you’ll get: it’s Quadro from the cloud. Monterey is essentially the means to move Quadro from the desktop to the server room by having a client interact with a Quadro card over a network and doing all rendering remotely.


Image Courtesy Engadget

To be more specific, NVIDIA will be adding hooks to their driver to abstract the location of the GPU, so that a program can talk to a GPU in a remote location while behaving like it’s local. While NVIDIA did not go into how clients will pass commands to the server, the return path will be an H.264 stream (encoded by the GPU of course), which will then be decoded by the client’s GPU and displayed. If this seems familiar, it should – in practice it’s rather close to Microsoft’s RemoteFX technology. NVIDIA hasn’t explicitly stated what makes Quadro Virtual Graphics Technology different from RemoteFX, but the most obvious conclusion is that while RemoteFX had limited 3D rendering support (DX9-only), virtual graphics will at a minimum support OpenGL, given the prevalence of OpenGL in the types of workstation applications paired with a Quadro card.
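No public API details exist yet, so purely as an illustrative sketch, the round trip might look something like the following. Every name here is invented, and zlib stands in for the GPU's H.264 encoder – real Monterey would return a hardware-encoded H.264 stream, decoded by the client's GPU.

```python
import zlib

def server_render(command: str) -> bytes:
    """Stand-in for the server side: 'render' the requested frame on the
    remote Quadro, then encode the result for transport. zlib is just a
    placeholder codec; the real design specifies GPU-encoded H.264."""
    frame = f"frame-for:{command}".encode() * 100  # fake raw framebuffer
    return zlib.compress(frame)

def client_display(encoded: bytes) -> str:
    """Stand-in for the client side: decode the incoming stream (hardware
    H.264 decode on a low-end Quadro or Tegra in the real design) and
    hand the pixels to the display."""
    frame = zlib.decompress(encoded)
    return frame.decode()[:22]  # peek at the start of the 'framebuffer'

# The application behaves as if the GPU were local: issue a command, get
# pixels back, never knowing the rendering happened in a server room.
payload = server_render("rotate_model(theta=30)")
print(client_display(payload))
```

The key property the driver hooks must preserve is that the application above never changes: abstraction happens below the API, so existing OpenGL workstation software runs unmodified.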

NVIDIA’s aspirations with the technology are fairly lofty, as it’s an ecosystem product that ties together multiple products. Quadros would be server-side, while clients can be lower-powered Quadros (e.g. laptops) or even mobile Tegra-based products – both of which can handle decoding the H.264 stream. The ultimate result would be that users could access the rendering power of Quadro cards remotely, from computers and mobile devices alike (ed: it’s the mainframe era all over again). Presumably NVIDIA has a use case in mind on the mobile side, as we’ve yet to see workstation-type software on a tablet or phone. The more immediate benefit would be the centralization of Quadro cards, allowing businesses to operate power-hungry Quadro cards in the controlled environment of a server room instead of menacing desktop users, and to establish a common pool of Quadro cards for a group of users rather than buying a Quadro card for each individual user.

As with Project Maximus, Quadro Virtual Graphics Technology is still in the development stage. NVIDIA hasn’t announced a specific timeline for when they expect to have it ready for customer use, but it sounds like we may see a shipping version before the year is out.


  • DesktopMan - Friday, August 12, 2011 - link

    If Anandtech is attending be sure to pester them about Synergy. Haven't heard much since the announcement in April. Synergy would be very useful for laptops with external desktop GPUs attached.
  • yzkbug - Friday, August 12, 2011 - link

    Ditto!!!
  • sully213 - Friday, August 12, 2011 - link

    From the 1st paragraph....

    "While we aren’t attending SIGGRAPH this year,"

    Reading fail!
  • Ryan Smith - Friday, August 12, 2011 - link

    Keep in mind that Synergy doesn't officially exist. Everything we "know" about it is secondhand rumors. NVIDIA will have nothing to say until they have something to announce.
  • icrf - Friday, August 12, 2011 - link

    How does Monterey deal with the extreme paucity of good bandwidth and latency that helps make GPUs so quick?
  • Ryan Smith - Friday, August 12, 2011 - link

    That's one of the details we don't have at this time. NVIDIA will always have their "secret sauce", but with a preview they're going to be particularly restrained.
  • icrf - Friday, August 12, 2011 - link

    Any idea how RemoteFX handles it?

    I am also making an assumption: the applications are installed on the clients and the rendering is just done on the server with the output returned to the clients. If that's not right and at least some critical parts of the applications are installed on the server, then this is just glorified remote desktop which is much less impressive.
  • mapesdhs - Sunday, August 21, 2011 - link


    Ryan Smith writes:
    > That's one of the details we don't have at this time. NVIDIA will
    > always have their "secret sauce", but with a preview they're going to
    > be particularly restrained.

    I tested a system of this kind 10 years ago (SGI's VizServer) while I
    was head admin at a VR research centre (NICVE @ Salford Uni, UK),
    though it also allowed one's local system to exploit the remote
    system's better CPU power as well. Back then the big machine was a
    16-CPU 5-pipe Onyx2 - long since upgraded/replaced of course.

    We found it was most useful when the task in question was such that one
    wouldn't normally expect real-time update rates even on a high-end
    desktop, eg. very large complex models for CAD, medical, GIS, etc. The
    real benefit was being able to have access to high-end compute and gfx
    power of a remote system even when using a device as simple as a PDA or
    cheap laptop. Thus, for example, a researcher in the field using a low-
    end device could run tasks that exploit the power of a supercomputer
    many miles away.

    If NVIDIA expects to offer users real-time interactivity, I think
    customers may be disappointed, but I would enjoy being surprised if
    they succeed.

    Ian.
  • etamin - Friday, August 12, 2011 - link

    Any guesses as to which companies may host these Quadro cloud servers? Amazon? Rackspace? Equinix? I would think it's unlikely that cloud storage companies would jump on Monterey.
  • Conficio - Friday, August 12, 2011 - link

    While they name it "Cloud" it does not necessarily mean public cloud. I'd rather think private cloud, like a server room in the same building.
