Abstract
Pure geometric techniques have emerged as viable real-time alternatives to those traditionally used for rendering crowds. However, although such techniques are capable of drawing many thousands of individually animated characters, their potential for injecting intra-crowd diversity remains to be fully explored. For urban crowds, a prominent source of diversity is clothing, and this work presents a technique to render a crowd of clothed, virtual humans whilst minimising redundant vertex processing, overdraw and memory consumption. By adopting a piecewise representation, each character can be constructed dynamically from a set of sub-meshes, according to an assigned outfit and pre-computed visibility metadata, and rendered using skinned instancing. Using this technique, a geometric crowd of 1,000 independently clothed, animated and textured characters can be rendered at 40 fps.
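
As an illustration of the piecewise, instanced approach described in the abstract, the C++ sketch below shows how per-character outfits might be grouped into per-sub-mesh instance batches so that each clothing sub-mesh is submitted once with skinned instancing. The names used here (`SubMeshId`, `CharacterInstance`, `buildBatches`, and so on) are hypothetical and not taken from the paper; this is a minimal CPU-side sketch of the batching idea, omitting GPU submission, the skinning animation data and the pre-computed visibility culling.

```cpp
// Minimal sketch (hypothetical names): group characters into per-sub-mesh
// instance batches so each clothing sub-mesh is drawn once via instancing.
#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <vector>

using SubMeshId = std::uint32_t;        // e.g. torso, jacket, trousers, head

struct CharacterInstance {
    float         position[3];          // world placement of the character
    std::uint32_t animationFrame;       // index into shared skinned-animation data
    std::uint32_t textureLayer;         // per-character clothing texture layer
};

struct Character {
    CharacterInstance      instance;
    std::vector<SubMeshId> outfit;      // sub-meshes selected for this character
};

// One instanced draw per sub-mesh: every character wearing that sub-mesh
// contributes one entry to its instance buffer.
std::unordered_map<SubMeshId, std::vector<CharacterInstance>>
buildBatches(const std::vector<Character>& crowd) {
    std::unordered_map<SubMeshId, std::vector<CharacterInstance>> batches;
    for (const Character& c : crowd) {
        for (SubMeshId part : c.outfit) {
            batches[part].push_back(c.instance);
        }
    }
    return batches;
}

int main() {
    std::vector<Character> crowd = {
        {{{0.f, 0.f, 0.f}, 12u, 3u}, {0u, 2u, 5u}},  // outfit A
        {{{4.f, 0.f, 1.f}, 40u, 7u}, {0u, 3u, 5u}},  // outfit B shares sub-meshes 0 and 5
    };
    for (const auto& [subMesh, instances] : buildBatches(crowd)) {
        // In a real renderer this would become one instanced draw call
        // (e.g. glDrawElementsInstanced), with `instances` uploaded as
        // per-instance attributes consumed by a skinning vertex shader.
        std::printf("sub-mesh %u: %zu instances\n", subMesh, instances.size());
    }
    return 0;
}
```

In this sketch, grouping by sub-mesh rather than by character keeps the number of draw calls bounded by the size of the wardrobe rather than the size of the crowd, which is one reason an instanced approach of this kind can scale to crowds of the size quoted above.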
Original language | English |
---|---|
Publication status | Published - 2009 |
Event | 17th International Conference on Computer Graphics, Visualization and Computer Vision, Pilsen, Czech Republic |
Duration | 2 Feb 2009 → 5 Feb 2009 |
Conference
Conference | 17th International Conference on Computer Graphics, Visualization and Computer Vision |
---|---|
Abbreviated title | WSCG 2009 |
Country/Territory | Czech Republic |
City | Pilsen |
Period | 2/02/09 → 5/02/09 |