Character sketches, drawings and doodles, traditional or digital.
It may be a traditional practice, but I find drawing a timesaver. As a tool for exploring differing looks, the sketch process usually pays for itself many times over. Drawings can also be used to get people excited, so they serve a PR function as well. Not bad for something that doesn’t move. I started drawing professionally as an industrial designer and later practised commercial ad illustration.
Generation of a visual design for an item, character, or area that does not yet exist. This includes, but is not limited to, film, animation and video game production. Concept art may be required only as preliminary artwork, or it may be needed throughout the process until a project reaches fruition. A concept artist must also be able to work to strict deadlines in the capacity of a graphic designer.
Concept art ranges from photorealistic rendering to traditional painting techniques. This is facilitated by special software that lets an artist fill in even small details pixel by pixel, or use natural paint settings to imitate real paint. When commissioning work, a company will often require a large amount of preliminary work. Artists on a project typically produce a high volume of work in the early stages to provide a broad range of interpretations, most of it in the form of sketches, speed paints and 3D overpaints. Later pieces of concept art, such as matte paintings, are produced as realistically as required.
Poly and box modeling, and sculpting in Pixols or voxels.
Polygonal modeling – Points in 3D space, called vertices, are connected by line segments to form a polygonal mesh. The vast majority of 3D models today are built as textured polygonal models, because they are flexible and because computers can render them so quickly. However, polygons are planar and can only approximate curved surfaces using many polygons.
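As a minimal sketch of the vertex-and-polygon idea (the names and data layout here are my own illustration, not any particular package's format), a cube can be stored as a list of 3D points plus faces that index into them:

```python
# A polygonal mesh as plain data: vertices are points in 3D space,
# faces are tuples of vertex indices (quads here).
cube_vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # bottom square
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # top square
]
cube_faces = [
    (0, 1, 2, 3), (4, 5, 6, 7),  # bottom, top
    (0, 1, 5, 4), (1, 2, 6, 5),  # sides
    (2, 3, 7, 6), (3, 0, 4, 7),
]

def edge_set(faces):
    """Collect the unique line segments (edges) implied by the faces."""
    edges = set()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            edges.add((min(a, b), max(a, b)))
    return edges

# The classic cube counts fall out of the data: 8 vertices, 6 faces, 12 edges.
print(len(cube_vertices), len(cube_faces), len(edge_set(cube_faces)))  # 8 6 12
```

Note that every face here is planar by construction; a curved surface would need many such faces to approximate it, which is exactly the trade-off the paragraph above describes.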
I don’t practise curve modeling any more, but the idea is this: surfaces are defined by curves, which are influenced by weighted control points. The curve follows (but does not necessarily interpolate) the points; increasing the weight of a point pulls the curve closer to it. Curve types include non-uniform rational B-splines (NURBS), splines, patches and geometric primitives.
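The weight-pulling behaviour is easy to see in a rational quadratic Bézier curve, the simplest relative of NURBS. This is an illustrative sketch with made-up data, not production code:

```python
def rational_bezier(t, points, weights):
    """Evaluate a rational quadratic Bezier curve at parameter t in [0, 1].
    Raising a control point's weight pulls the curve toward that point."""
    basis = [(1 - t) ** 2, 2 * (1 - t) * t, t ** 2]  # Bernstein basis
    denom = sum(w * b for w, b in zip(weights, basis))
    x = sum(w * b * p[0] for w, b, p in zip(weights, basis, points)) / denom
    y = sum(w * b * p[1] for w, b, p in zip(weights, basis, points)) / denom
    return (x, y)

pts = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
mid_low  = rational_bezier(0.5, pts, [1.0, 1.0, 1.0])  # uniform weights
mid_high = rational_bezier(0.5, pts, [1.0, 4.0, 1.0])  # heavier middle point
# With the heavier weight, the curve's midpoint sits closer to (1, 2),
# even though it still does not pass through that control point.
```

With uniform weights the midpoint evaluates to (1.0, 1.0); raising the middle weight to 4 lifts it to (1.0, 1.6), closer to the control point at (1, 2).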
Digital sculpting – a still fairly new method of modeling, 3D sculpting has become very popular in the few short years it has been around. There are currently three approaches: displacement, which is the most widely used at the moment, volumetric, and dynamic tessellation. Displacement uses a dense model (often generated by subdivision surfaces of a polygon control mesh) and stores the adjusted vertex positions in a 32-bit image map. Volumetric sculpting, based loosely on voxels, has similar capabilities to displacement but does not suffer from polygon stretching when there are not enough polygons in a region to achieve a deformation. Dynamic tessellation is similar to the voxel approach but divides the surface using triangulation to maintain a smooth surface and allow finer details. These methods allow for very artistic exploration, as a new topology can be created over the model once its form and details have been sculpted. The original high-resolution detail is then usually transferred onto the new mesh as displacement data, or as normal map data if it is destined for a game engine.
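The displacement idea reduces to pushing each vertex along its surface normal by a height value sampled from the stored image map. A toy sketch (the function name, the dict-as-image and the data are all my own simplifications; real tools sample a filtered 32-bit texture):

```python
def displace(vertices, normals, height_map, uvs, strength=1.0):
    """Push each vertex along its normal by the height stored for it.
    height_map stands in for a displacement image: (u, v) -> scalar."""
    out = []
    for (x, y, z), (nx, ny, nz), uv in zip(vertices, normals, uvs):
        h = height_map.get(uv, 0.0) * strength
        out.append((x + nx * h, y + ny * h, z + nz * h))
    return out

# Two vertices of a flat patch facing +Z; only the second has a stored height.
flat = [(0, 0, 0), (1, 0, 0)]
bumpy = displace(flat, [(0, 0, 1)] * 2, {(1, 0): 0.5}, [(0, 0), (1, 0)])
print(bumpy)  # the second vertex rises 0.5 along +Z
```

This also hints at why displacement needs a dense mesh: only vertices that exist can be moved, which is the stretching problem the volumetric approach avoids.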
Ordering UV information to allow low-distortion texturing in a variety of formats. Texture extraction and baking. Production of normal, bump and displacement maps, as well as diffuse, occlusion and specular maps.
A texture map is applied (mapped) to the surface of a shape or polygon. This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2D case is also known as a UV coordinate) either via explicit assignment or by procedural definition. Image sampling locations are then interpolated across the face of a polygon to produce a visual result that seems to have more richness than could otherwise be achieved with a limited number of polygons. Multitexturing is the use of more than one texture at a time on a polygon. For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface, such as tree bark or rough concrete, that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in recent video games as graphics hardware has become powerful enough to accommodate it in real-time.
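The two steps described above — interpolating per-vertex UVs across a face, then sampling the image at the result — can be sketched in a few lines. The names and the 2×2 "checkerboard" are my own illustration; a renderer would use filtered lookups rather than nearest-texel:

```python
def sample_texture(texture, u, v):
    """Nearest-texel lookup; texture is a row-major grid of colour values."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)  # clamp so u = 1.0 stays in range
    y = min(int(v * h), h - 1)
    return texture[y][x]

def shade_point(bary, uvs, texture):
    """Interpolate the triangle's per-vertex UVs with barycentric
    weights, then sample the texture at the interpolated coordinate."""
    u = sum(b * uv[0] for b, uv in zip(bary, uvs))
    v = sum(b * uv[1] for b, uv in zip(bary, uvs))
    return sample_texture(texture, u, v)

checker = [[0, 1], [1, 0]]                        # a 2x2 "image"
tri_uvs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]    # one UV per vertex
print(shade_point((1.0, 0.0, 0.0), tri_uvs, checker))  # at vertex 0 -> 0
```

Points between the vertices get in-between UVs automatically, which is exactly how a triangle with three coordinates can show a whole patterned image.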
In 3D computer graphics, normal mapping, or “Dot3 bump mapping”, is a technique used for faking the lighting of bumps and dents. It is used to add details without using more polygons. A normal map is usually an RGB image that corresponds to the X, Y, and Z coordinates of a surface normal from a more detailed version of the object. A common use of this technique is to greatly enhance the appearance and details of a low polygon model by generating a normal map from a high polygon model.
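Since the RGB channels encode normal components, a shader's first job is to map each 8-bit channel from [0, 255] back into [-1, 1] and feed the result into the lighting equation. A minimal sketch (function names are mine; real engines do this per-pixel on the GPU, usually in tangent space):

```python
def decode_normal(r, g, b):
    """Map an 8-bit RGB texel back to a surface normal: each channel
    in [0, 255] corresponds to a component in [-1, 1]."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

def lambert(normal, light_dir):
    """Diffuse intensity from the decoded normal (clamped dot product)."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(d, 0.0)

# The familiar flat-blue texel (128, 128, 255) decodes to roughly (0, 0, 1),
# i.e. "no bump": lit head-on, it reflects at full intensity.
flat = decode_normal(128, 128, 255)
print(lambert(flat, (0.0, 0.0, 1.0)))
```

This is why untouched normal maps look uniformly blue: "no perturbation" encodes as (128, 128, 255).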
Skeletal animation is a technique in computer animation in which a character is represented in two parts: a surface representation used to draw the character (called skin or mesh) and a hierarchical set of interconnected bones (called the skeleton or rig) used to animate (pose and keyframe) the mesh. While this technique is often used to animate humans or more generally for organic modeling, it only serves to make the animation process more intuitive and the same technique can be used to control the deformation of any object — a spoon, a building, or a galaxy.
This technique is used in virtually all animation systems, where simplified user interfaces allow animators to control often complex algorithms and huge amounts of geometry, most notably through inverse kinematics and other “goal-oriented” techniques.
Constructing this series of ‘bones’ is referred to as rigging. Each bone has a three-dimensional transformation (which includes its position, scale and orientation) and an optional parent bone; the bones therefore form a hierarchy. The full transform of a child node is the product of its parent’s transform and its own, so moving a thigh-bone will move the lower leg too. As the character is animated, the bones change their transformation over time under the influence of some animation controller. A rig is generally composed of both forward-kinematics and inverse-kinematics parts that may interact with each other. Skeletal animation refers to the forward-kinematics part of the rig, where a complete set of bone configurations identifies a unique pose.
Each bone in the skeleton is associated with some portion of the character’s visual representation; skinning is the process of creating this association. In the most common case of a polygonal mesh character, a bone is associated with a group of vertices; for example, in a model of a human being, the ‘thigh’ bone would be associated with the vertices making up the polygons in the model’s thigh. Portions of the character’s skin can normally be associated with multiple bones, each with a scaling factor called a vertex weight, or blend weight. The movement of skin near the joint of two bones can therefore be influenced by both. In most state-of-the-art graphics engines, the skinning process is done on the GPU by a shader program.
For a polygonal mesh, each vertex can have a blend weight for each bone. To calculate the final position of the vertex, each bone transformation is applied to the vertex position, scaled by its corresponding weight. This algorithm is called matrix palette skinning, because the set of bone transformations (stored as transform matrices) form a palette for the skin vertex to choose from.
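That per-vertex blend — transform by each influencing bone, then average by weight — is worth seeing in miniature. In this sketch the "palette" holds plain translations instead of full matrices, and the knee example is invented:

```python
def skin_vertex(position, influences):
    """Linear blend skinning: apply each bone's transform to the rest
    position, then sum the results scaled by blend weights (which
    should sum to 1). Bone transforms are translations here; a real
    matrix palette stores full 4x4 transforms per bone."""
    x = y = z = 0.0
    for (dx, dy, dz), weight in influences:
        x += (position[0] + dx) * weight
        y += (position[1] + dy) * weight
        z += (position[2] + dz) * weight
    return (x, y, z)

# A knee vertex influenced half by the thigh bone (moved +0.1 in z)
# and half by the shin bone (moved +0.3 in z): it lands in between.
knee = skin_vertex((0.0, 0.5, 0.0), [((0.0, 0.0, 0.1), 0.5),
                                     ((0.0, 0.0, 0.3), 0.5)])
print(knee)
```

Because the vertex ends up between the two bone results, the skin bends smoothly at the joint instead of tearing — which is why weights matter most near elbows, knees and shoulders.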
Character animation is a specialized area of the animation process concerning the animation of one or more characters featured in an animated work. It is usually one aspect of a larger production and is often made to enhance voice acting. The primary role of a character animator is to be the “actor” behind the performance, especially during shots with no dialog. Character animation is artistically unique among animation in that it involves the creation of apparent thought and emotion in addition to physical action.
Though typical examples of character animation are found in animated feature films, the role of character animation within the gaming industry is rapidly increasing. Game developers are using more complicated characters that allow the gamer to more fully connect with the gaming experience. Prince of Persia, God of War, Team Fortress 2 and Resident Evil all contain examples of character animation in games.
Motion capture, motion tracking, or mocap are terms used to describe the process of recording movement and translating that movement on to a digital model. It is used in military, entertainment, sports, and medical applications, and for validation of computer vision and robotics. In filmmaking it refers to recording actions of human actors, and using that information to animate digital character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture.
In motion capture sessions, movements of one or more actors are sampled many times per second. With most techniques, motion capture records only the movements of the actor, not his or her visual appearance (although recent developments from Weta use images for 2D motion capture projected into 3D). This animation data is mapped to a 3D model so that the model performs the same actions as the actor. It is comparable to the older technique of rotoscoping, as in the 1978 animated film The Lord of the Rings, where an actor’s motion was filmed and the footage then used as a guide for the frame-by-frame motion of a hand-drawn animated character.
Camera movements can also be motion captured so that a virtual camera in the scene will pan, tilt, or dolly around the stage driven by a camera operator while the actor is performing, and the motion capture system can capture the camera and props as well as the actor’s performance. This allows the computer-generated characters, images and sets to have the same perspective as the video images from the camera. A computer processes the data and displays the movements of the actor, providing the desired camera positions in terms of objects in the set. Retroactively obtaining camera movement data from the captured footage is known as match moving or camera tracking.
Game engines are becoming available without initial cost, aiding the process of development; only at the point of publishing a game is it necessary to purchase a license based on a distribution model. Most game engine suites provide facilities that ease development, such as graphics, sound, physics and AI functions. These game engines are sometimes called “middleware” because, as with the business sense of the term, they provide a flexible and reusable software platform with all the core functionality needed, right out of the box, to develop a game application while reducing costs, complexities, and time-to-market—all critical factors in the highly competitive video game industry. Gamebryo and RenderWare are widely used middleware of this kind, as are the engines linked in the sidebar here.
Like other middleware solutions, game engines usually provide platform abstraction, allowing the same game to be run on various platforms including game consoles and personal computers with few, if any, changes made to the game source code. Often, game engines are designed with a component-based architecture that allows specific systems in the engine to be replaced or extended with more specialized (and often more expensive) game middleware components such as Havok for physics, Miles Sound System for sound, or Bink for video. Some game engines, such as RenderWare, are even designed as a series of loosely connected middleware components that can be selectively combined to create a custom engine, instead of the more common approach of extending or customizing a flexible integrated solution. However extensibility is achieved, it remains a high priority in game engines due to the wide variety of uses to which they are put. Despite the specificity of the name, game engines are often used for other kinds of interactive applications with real-time graphical needs, such as marketing demos, architectural visualizations, training simulations, and modeling environments.