Last modified: January 23, 2026

This article is written in: 🇺🇸

Filters and Algorithms

VTK’s filters and algorithms turn a static dataset into a dynamic pipeline: you generate something, clean it up, extract meaning, and reshape it into a form that’s easier to analyze or visualize. Think of it like a workshop line: raw material comes in, tools operate on it, and what comes out is clearer, lighter, or more informative.

In real VTK projects you rarely render raw input directly. You almost always process it first: remove noise, compute derived quantities, simplify meshes, extract surfaces, reorganize time steps, or convert data into a representation that downstream steps can handle. Filters are the verbs of VTK. They’re how you do things to data.

One of the major components of VTK is its extensive range of filters and algorithms, designed to process, manipulate, and generate data objects. The important part is not memorizing names, but recognizing what kind of change a filter is making and what assumptions it relies on.

Purpose and functionality

Filters and algorithms are primarily used for processing, manipulating, and generating data objects: removing noise, computing derived quantities, simplifying meshes, extracting surfaces or regions, reorganizing time steps, and converting data into representations that downstream steps can handle.

A practical habit is to ask: is this step changing geometry, topology, attributes, or time?

That single question helps you predict what will break later, or why something that “should be the same shape” suddenly behaves differently.
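A quick way to answer that question at any pipeline stage is to print a few facts about its output. Below is a minimal sketch; the elevation filter is just a stand-in for whatever stage you happen to be inspecting (it is not part of the example above):

import vtk

source = vtk.vtkSphereSource()
stage = vtk.vtkElevationFilter()      # stand-in for the stage you are checking
stage.SetInputConnection(source.GetOutputPort())
stage.Update()

data = stage.GetOutput()
print(data.GetClassName())                                 # did the dataset type change?
print(data.GetNumberOfPoints(), data.GetNumberOfCells())   # geometry / topology
print([data.GetPointData().GetArrayName(i)                 # attributes on points
       for i in range(data.GetPointData().GetNumberOfArrays())])
print([data.GetCellData().GetArrayName(i)                  # attributes on cells
       for i in range(data.GetCellData().GetNumberOfArrays())])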

Interaction with data connectivity

A significant part of many operations involves altering or depending on the dataset’s connectivity.

Connectivity refers to relationships inside the data structure: which points belong to which cells, how cells touch, and what “togetherness” means for the dataset. It’s the difference between “points in space” and “a surface,” “a volume,” or “a meaningful region.”

Two datasets can have the same coordinates and still behave differently if connectivity differs. That’s why filters can feel so powerful: they’re not just moving numbers around, they’re preserving or rewriting the relationships that make the data interpretable.

A practical way to phrase it: geometry is where the points sit in space; connectivity is how those points are stitched together into cells. A filter either preserves that stitching while it moves points, or it rewrites the stitching into a new structure.

Understanding connectivity

Connectivity is one of those ideas that seems obvious right until it bites you. If you’ve ever smoothed a mesh and wondered why sharp features disappeared, triangulated polygons and got unexpected results, or extracted a region and got too many pieces, you’ve already seen connectivity in action.

In vtkPolyData, this is especially important because cells (polygons/triangles/lines) don’t store coordinates directly. They store point ids into a shared Points array. That means there’s a crucial difference between:

  1. two cells whose points merely have identical coordinates (each cell keeps its own private copies), and
  2. two cells that reference the same point ids in the shared Points array.

Only the second one is “connected” in the topological sense.

Examples:

  1. Individual data points without any connectivity:
*    *    *    *
  2. Points connected in a simple linear fashion:
*---*---*---*
  3. Points connected to form a polygon:
    *---*
   /     \
  *       *
    \     /
     *---*

And here’s the same idea visually (points + edges + faces as “glue”):

[Figure: connectivity]
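To make the point-id idea concrete, here is a minimal sketch that builds two triangles sharing an edge by referencing the same ids in one Points array. The coordinates and names are purely illustrative:

import vtk

points = vtk.vtkPoints()
points.InsertNextPoint(0.0, 0.0, 0.0)  # id 0
points.InsertNextPoint(1.0, 0.0, 0.0)  # id 1
points.InsertNextPoint(1.0, 1.0, 0.0)  # id 2
points.InsertNextPoint(0.0, 1.0, 0.0)  # id 3

triangles = vtk.vtkCellArray()
for ids in [(0, 1, 2), (0, 2, 3)]:      # both triangles reuse ids 0 and 2
    tri = vtk.vtkTriangle()
    for i, pid in enumerate(ids):
        tri.GetPointIds().SetId(i, pid)
    triangles.InsertNextCell(tri)

poly = vtk.vtkPolyData()
poly.SetPoints(points)
poly.SetPolys(triangles)

# 4 points, 2 cells: the shared edge (ids 0 and 2) is what "welds" the triangles.
print(poly.GetNumberOfPoints(), poly.GetNumberOfCells())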

A common real-world pitfall: “looks connected” vs “is connected”

Many meshes coming from certain file formats or pipelines duplicate vertices per face (each polygon has its own private set of points). Visually it can still look like a single surface because the duplicated points have identical coordinates. But topologically it’s a set of separate faces.

This shows up as soon as you apply filters that rely on shared points: smoothing, normal generation, decimation, and region extraction all treat such a mesh as a pile of loose tiles rather than one surface. The filters aren’t being random; they’re revealing the true connectivity.
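One way to check whether a surface is welded or merely looks welded is to count connected regions. A small sketch; vtkPolyDataConnectivityFilter is one reasonable choice for this check, and the sphere here is just a welded demo input:

import vtk

sphere = vtk.vtkSphereSource()   # a welded demo surface

connectivity = vtk.vtkPolyDataConnectivityFilter()
connectivity.SetInputConnection(sphere.GetOutputPort())
connectivity.SetExtractionModeToAllRegions()
connectivity.Update()

# A welded surface reports a single region; a mesh whose faces each carry
# their own duplicated points reports roughly one region per polygon.
print(connectivity.GetNumberOfExtractedRegions())   # 1 for the sphere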

Data flow

VTK’s pipeline is intentionally predictable: sources produce data, filters transform it, and outputs feed into the next step. That predictability is what makes large visualization workflows manageable. You can swap filters in and out without rewriting everything because the interfaces and execution model are consistent.

It also makes debugging more systematic. When something looks wrong, the best approach is usually to inspect the pipeline stage-by-stage and find where geometry, topology, or attributes changed in a way you didn’t expect.

The flow typically looks like this:

Input(s)          Filter         Output(s)
+-------------+   +-----------+   +-------------+
| vtkDataSet  |-->| vtkFilter |-->| vtkDataSet  |
+-------------+   +-----------+   +-------------+

A small, practical note: when you’re not inside an active render loop, Update() is what forces a pipeline stage to execute. When you’re “just testing,” it’s very easy to forget Update() and accidentally inspect stale data.
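A small sketch of that stale-data trap outside a render loop:

import vtk

sphere = vtk.vtkSphereSource()
print(sphere.GetOutput().GetNumberOfPoints())  # 0 -- nothing has executed yet

sphere.Update()                                # force this stage to execute
print(sphere.GetOutput().GetNumberOfPoints())  # now a real point count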

vtkAlgorithm

vtkAlgorithm is the standard interface that makes the pipeline work. If something is a vtkAlgorithm, it plays nicely in VTK: it has input ports, output ports, and can be connected into pipelines without special casing.

A practical way to think about it: a source is an algorithm with output ports but no inputs, a filter has both, and a mapper or writer sits at the end consuming input. If a class speaks the vtkAlgorithm interface, you can wire it into a pipeline like any other stage.

If you’re building VTK programs in C++ or Python, you’ll constantly see the same wiring pattern.
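A minimal sketch of that pattern; the specific source and filter are just placeholders for whatever you actually use:

import vtk

source = vtk.vtkSphereSource()                     # produces data
shrink = vtk.vtkShrinkFilter()                     # transforms data
shrink.SetInputConnection(source.GetOutputPort())  # wire output port to input port
shrink.Update()                                    # execute this branch of the pipeline

result = shrink.GetOutput()                        # the transformed dataset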

Subclasses and roles

The base class and its conventions are what let you create long, readable pipelines instead of one-off transformations.

Sources

Sources are where your story begins: they generate data (procedural geometry) or read it from files. The reason sources matter isn’t just that they provide input; they define the initial structure, including connectivity, and that structure influences everything downstream.

Examples of sources include vtkSphereSource and vtkConeSource for procedural geometry, and readers such as vtkSTLReader and vtkXMLPolyDataReader for file input.

For instance, vtkSphereSource produces vtkPolyData whose faces share points along edges. That means it’s already a welded surface. Many filters behave “cleanly” on it because the topology is consistent.
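Readers follow the same source pattern; here is a minimal sketch, where model.stl is a hypothetical path you would replace with a real file:

import vtk

reader = vtk.vtkSTLReader()
reader.SetFileName("model.stl")  # hypothetical path -- point this at a real STL file
reader.Update()

mesh = reader.GetOutput()
print(mesh.GetNumberOfPoints(), mesh.GetNumberOfCells())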

Geometric filters

Geometric filters are the “shape editors.” They change point coordinates (move, rotate, smooth, shrink, warp) while often preserving the existing connectivity graph.

This is the category you reach for when you want the same object but with different geometry: less noise, a different scale, a smoother surface, or a transformed pose.

Examples include vtkShrinkFilter, vtkSmoothPolyDataFilter, and vtkDecimatePro.

Even when connectivity is preserved, geometry changes still affect derived quantities such as normals, curvature, and measurements. That’s why geometric filters aren’t merely cosmetic.
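A small sketch of a geometry-only change: smoothing moves points around while the cells keep referencing the same point ids. The iteration count here is an arbitrary choice:

import vtk

sphere = vtk.vtkSphereSource()

smoother = vtk.vtkSmoothPolyDataFilter()
smoother.SetInputConnection(sphere.GetOutputPort())
smoother.SetNumberOfIterations(30)   # arbitrary: more iterations = smoother, flatter result
smoother.Update()

# Same counts before and after: only the coordinates moved.
print(sphere.GetOutput().GetNumberOfPoints(), smoother.GetOutput().GetNumberOfPoints())
print(sphere.GetOutput().GetNumberOfCells(), smoother.GetOutput().GetNumberOfCells())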

Topological filters

Topological filters are the “rewire the structure” tools. Instead of moving points, they change how points are connected, or generate entirely new cells from existing data.

This matters because topology changes are a different kind of decision: you’re producing a derived representation. That’s often exactly what you want (triangles for rendering, contours for analysis), but it means downstream steps will see a new connectivity structure.

Examples include vtkTriangleFilter, vtkDelaunay2D, and vtkContourFilter.

A telltale sign that a topological filter did its job is that counts change: the number of points, the number of cells, the cell types, or the number of connected components.
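A small sketch of the counts changing. The cube source is used here only because it conveniently emits quads; vtkTriangleFilter rewrites them as triangles without touching the point coordinates:

import vtk

cube = vtk.vtkCubeSource()           # 6 quadrilateral faces

tri = vtk.vtkTriangleFilter()
tri.SetInputConnection(cube.GetOutputPort())
tri.Update()

print(cube.GetOutput().GetNumberOfCells())  # 6 quads
print(tri.GetOutput().GetNumberOfCells())   # 12 triangles -- the topology was rewritten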

Scalars and attribute filters in VTK

Attribute filters make your data “smarter.” They add meaning by computing new arrays: gradients, curvature, magnitudes, and other derived quantities. This is where visualization turns into analysis: you’re not just looking at shape, you’re looking at computed properties of the shape.

Examples include vtkGradientFilter, vtkVectorNorm, and vtkCurvatures.

One detail that comes up constantly: attributes can live on points or on cells.

If a filter outputs cell data but your mapper expects point data (or vice versa), the visualization can look wrong even though the numbers are fine.
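A sketch of the point-data vs. cell-data distinction. Neither vtkElevationFilter nor vtkPointDataToCellData appears elsewhere in this article; they are just convenient choices, and the low/high points are arbitrary:

import vtk

sphere = vtk.vtkSphereSource()

# Adds a point-data scalar ("Elevation") based on position along a direction.
elevation = vtk.vtkElevationFilter()
elevation.SetInputConnection(sphere.GetOutputPort())
elevation.SetLowPoint(0.0, 0.0, -1.0)   # arbitrary endpoints for the elevation axis
elevation.SetHighPoint(0.0, 0.0, 1.0)

# Converts point scalars into cell scalars by averaging over each cell's points.
to_cells = vtk.vtkPointDataToCellData()
to_cells.SetInputConnection(elevation.GetOutputPort())
to_cells.Update()

output = to_cells.GetOutput()
# The scalar named "Elevation" now also lives on cells rather than only on points.
print(output.GetCellData().HasArray("Elevation"))   # 1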

Temporal filters in VTK

Temporal filters exist because time series data has its own problems: timesteps may have changing values, changing attributes, and sometimes changing geometry. Temporal filters help interpolate, normalize, or compute statistics across time without reinventing that logic manually.

Examples include vtkTemporalInterpolator, vtkTemporalShiftScale, and vtkTemporalStatistics.

Why do some meshes detach while others stay connected?

Shrink filters are a perfect example of why connectivity matters.

vtkShrinkPolyData / vtkShrinkFilter shrink each cell toward its own centroid. Because every cell moves on its own, shared points are duplicated in the process and the cells end up disconnected from one another: even a welded mesh (adjacent faces share point ids) comes out with small, uniform gaps between neighboring cells, although with a modest shrink factor it still reads as a single surface.

If each polygon already had its own private copy of every vertex, the mesh was never topologically connected in the first place, and any filter that relies on shared points (smoothing, normals, decimation, region extraction) will treat it as loose tiles. A per-cell shrink simply makes that pre-existing separation obvious, which is usually the explanation when a custom mesh behaves very differently from a demo built on a welded source.

That leads to three practical approaches depending on intent:

  1. If the mesh should be a single surface, merge coincident points first. A cleaning step (commonly vtkCleanPolyData) can weld duplicated points within a tolerance, restoring connectivity (a sketch follows below).

  2. If you want to shrink the entire object uniformly, don’t use per-cell shrink. Compute a centroid and scale the entire mesh about that centroid (e.g., vtkTransform + vtkTransformPolyDataFilter). This preserves connectivity because you move shared points once, not cell-by-cell.

  3. If you truly want an “exploded” look, per-cell shrink is doing exactly that. In that case detaching is not a bug; it’s the visual effect of independent cells.

A good mental shortcut: per-cell shrink always opens gaps between cells; if other filters also treat your surface as loose tiles, the mesh probably wasn’t welded in the first place.
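For the first option, here is a minimal sketch of the welding step. The tiny two-triangle mesh is constructed by hand purely to show the before/after point counts, and the tolerance is something you would tune to your data:

import vtk

# Two adjacent triangles with duplicated points: 6 points for what should be 4.
points = vtk.vtkPoints()
for x, y in [(0, 0), (1, 0), (1, 1),      # triangle A's private points
             (0, 0), (1, 1), (0, 1)]:     # triangle B duplicates (0,0) and (1,1)
    points.InsertNextPoint(float(x), float(y), 0.0)

cells = vtk.vtkCellArray()
for ids in [(0, 1, 2), (3, 4, 5)]:
    tri = vtk.vtkTriangle()
    for i, pid in enumerate(ids):
        tri.GetPointIds().SetId(i, pid)
    cells.InsertNextCell(tri)

unwelded = vtk.vtkPolyData()
unwelded.SetPoints(points)
unwelded.SetPolys(cells)

cleaner = vtk.vtkCleanPolyData()
cleaner.SetInputData(unwelded)
cleaner.SetTolerance(0.0)   # merge only exactly coincident points; raise for "close enough"
cleaner.Update()

print(unwelded.GetNumberOfPoints())             # 6
print(cleaner.GetOutput().GetNumberOfPoints())  # 4 -- duplicates were welded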

Example: Creating a Sphere Source and Applying a Shrink Filter

Examples like this are useful because they show the pipeline idea in miniature: a source generates data with well-defined connectivity, and a filter transforms it into a derived representation. Once you internalize that pattern, most VTK workflows become variations on the same theme.

In this example, we create a sphere and apply a shrink filter:

import vtk

# Create a sphere source
sphere_source = vtk.vtkSphereSource()
sphere_source.SetRadius(1.0)

# The sphere source generates points that are connected to form triangles,
# creating a spherical surface.

# Create a shrink filter
shrink_filter = vtk.vtkShrinkFilter()
shrink_filter.SetInputConnection(sphere_source.GetOutputPort())
shrink_filter.SetShrinkFactor(0.8)

# The shrink filter pulls each triangle toward its own centroid. Shared points are
# duplicated in the process, so the output cells no longer share points and small
# gaps open between neighboring triangles.

# Update the filter to generate the output
shrink_filter.Update()

We start by creating a vtkSphereSource object to generate a sphere with a radius of 1.0 units, which produces points connected to form a spherical surface. Then we apply a vtkShrinkFilter, connected to the sphere source’s output, and set a shrink factor of 0.8, so each triangle is scaled to 80% of its original size about its own centroid. Finally, we call Update() to force execution and obtain the shrunken output.

Below is a visual representation of the shrunken sphere:

[Figure: sphere_shrink]
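One way to see (rather than just read) what the shrink did is to compare counts. A small sketch that rebuilds the same pipeline and prints them:

import vtk

sphere_source = vtk.vtkSphereSource()
sphere_source.SetRadius(1.0)

shrink_filter = vtk.vtkShrinkFilter()
shrink_filter.SetInputConnection(sphere_source.GetOutputPort())
shrink_filter.SetShrinkFactor(0.8)
shrink_filter.Update()

before = sphere_source.GetOutput()
after = shrink_filter.GetOutput()

# The cell count is unchanged, but the point count grows: every triangle now owns
# its own three points, which is why gaps open between neighboring triangles.
print(before.GetNumberOfPoints(), before.GetNumberOfCells())
print(after.GetNumberOfPoints(), after.GetNumberOfCells())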

A practical “connected shrink” alternative (C++)

If your goal is “make the whole mesh smaller toward its center” without breaking it into detached faces, a global transform is usually the cleanest solution. Unlike vtkShrinkPolyData / vtkShrinkFilter (which act per-cell), a transform acts on points. If your mesh is connected (adjacent cells share point ids), moving points with a single global transform keeps the surface connected automatically.

The idea is simple: compute a center (the centroid or the bounding-box center), build a vtkTransform that translates that center to the origin, scales, and translates back, then run the mesh through vtkTransformPolyDataFilter.

That’s it. You’re scaling the entire object as one piece.
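If you work from Python, the same idea is only a few lines. A minimal sketch that scales a sphere to 80% about its centroid; the 0.8 factor is arbitrary, and the classes mirror the C++ versions below:

import vtk

sphere = vtk.vtkSphereSource()
sphere.Update()

# 1) Find the centroid of the points.
com = vtk.vtkCenterOfMass()
com.SetInputData(sphere.GetOutput())
com.Update()
cx, cy, cz = com.GetCenter()

# 2) Scale about that centroid. vtkTransform concatenates in PreMultiply mode by
#    default, so points are effectively moved to the origin, scaled, and moved back.
transform = vtk.vtkTransform()
transform.Translate(cx, cy, cz)
transform.Scale(0.8, 0.8, 0.8)
transform.Translate(-cx, -cy, -cz)

# 3) Apply the transform; connectivity is untouched because only coordinates change.
tfilter = vtk.vtkTransformPolyDataFilter()
tfilter.SetInputConnection(sphere.GetOutputPort())
tfilter.SetTransform(transform)
tfilter.Update()

scaled = tfilter.GetOutput()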

Option A: Scale about the centroid (good “physical” center)

The centroid here is computed from the mesh’s points (optionally weighted). This often feels like “shrink toward the mass center” and is a good default for irregular shapes.

#include <vtkSmartPointer.h>
#include <vtkPolyData.h>
#include <vtkCenterOfMass.h>
#include <vtkTransform.h>
#include <vtkTransformPolyDataFilter.h>

// Scales a vtkPolyData uniformly about its centroid.
// scale < 1.0 shrinks, scale > 1.0 grows.
vtkSmartPointer<vtkPolyData> ScalePolyDataAboutCentroid(vtkPolyData* input, double scale)
{
    if (!input)
        return nullptr;

    // 1) Compute centroid (center of mass) from points.
    auto com = vtkSmartPointer<vtkCenterOfMass>::New();
    com->SetInputData(input);
    com->SetUseScalarsAsWeights(false); // set true if you want scalar-weighted centroid
    com->Update();

    double c[3];
    com->GetCenter(c);

    // 2) Build a transform that scales about the centroid: p' = c + scale * (p - c)
    auto transform = vtkSmartPointer<vtkTransform>::New();
    transform->PostMultiply();                 // operations apply to points in the order they are added
    transform->Translate(-c[0], -c[1], -c[2]); // move the centroid to the origin
    transform->Scale(scale, scale, scale);     // scale about the origin
    transform->Translate(c[0], c[1], c[2]);    // move back

    // 3) Apply transform to the polydata
    auto tfilter = vtkSmartPointer<vtkTransformPolyDataFilter>::New();
    tfilter->SetInputData(input);
    tfilter->SetTransform(transform);
    tfilter->Update();

    return tfilter->GetOutput();
}

What this does

The function computes the centroid of the input’s points, builds a transform that moves that centroid to the origin, scales uniformly, and moves it back, then pushes the mesh through vtkTransformPolyDataFilter. Only point coordinates change; the cells keep referencing the same point ids.

When centroid is a good choice

The centroid follows where the points actually are, so it behaves like a “mass center” and is a sensible default for irregular or asymmetric shapes when the goal is “shrink the object in place.”

Option B: Scale about the bounding-box center (fast, predictable)

Sometimes you want the center of the mesh’s bounds (midpoint of min/max in x/y/z). This is simpler and doesn’t require computing a centroid, and it matches what many people expect as a “visual center” for symmetric objects.

#include <vtkSmartPointer.h>
#include <vtkPolyData.h>
#include <vtkPoints.h>
#include <vtkTransform.h>
#include <vtkTransformPolyDataFilter.h>

vtkSmartPointer<vtkPolyData> ScalePolyDataAboutBoundsCenter(vtkPolyData* input, double scale)
{
    if (!input)
        return nullptr;

    double bounds[6];
    input->GetBounds(bounds);

    const double cx = 0.5 * (bounds[0] + bounds[1]);
    const double cy = 0.5 * (bounds[2] + bounds[3]);
    const double cz = 0.5 * (bounds[4] + bounds[5]);

    auto transform = vtkSmartPointer<vtkTransform>::New();
    transform->PostMultiply();               // operations apply to points in the order they are added
    transform->Translate(-cx, -cy, -cz);     // move the bounds center to the origin
    transform->Scale(scale, scale, scale);   // scale about the origin
    transform->Translate(cx, cy, cz);        // move back

    auto tfilter = vtkSmartPointer<vtkTransformPolyDataFilter>::New();
    tfilter->SetInputData(input);
    tfilter->SetTransform(transform);
    tfilter->Update();

    return tfilter->GetOutput();
}

When bounds center is a good choice

The bounds center is cheap to compute, independent of how densely the points happen to be distributed, and matches the “visual center” most people expect for roughly symmetric objects.

Why does this preserve connectivity (while per-cell shrink does not)?

A vtkPolyData surface stays connected when adjacent polygons share the same point ids. A global transform modifies point coordinates in the Points array, but it does not change which points each cell references. So adjacency remains intact.

By contrast, cell-based shrink filters pull each cell toward its own center and give every cell its own copy of the points it uses. Cells stop sharing points, so gaps open between them regardless of how well the input was welded; if the input wasn’t welded to begin with, there were never shared points to lose.

This transform approach is a good “first choice” when the mental model is “shrink the object” rather than “shrink each face.”

Summary

This is a quick reference, but the more important takeaway is what each category means:

| Category | Class Name | Description |
| --- | --- | --- |
| Sources | vtkSphereSource | Generates spherical polydata. |
| | vtkConeSource | Creates conical polydata. |
| | vtkSTLReader | Reads STL files. |
| | vtkXMLPolyDataReader | Reads VTK’s XML polydata files. |
| Geometric Filters | vtkShrinkFilter | Shrinks cells toward their centroids (cells become disconnected). |
| | vtkSmoothPolyDataFilter | Smooths polydata by adjusting point positions. |
| | vtkDecimatePro | Reduces triangle count (often affects topology). |
| Topological Filters | vtkTriangleFilter | Converts polygons to triangles. |
| | vtkDelaunay2D | Constructs a 2D Delaunay triangulation. |
| | vtkContourFilter | Generates contours/isosurfaces from scalar fields. |
| Scalars & Attribute Filters | vtkGradientFilter | Computes gradient of a scalar field (adds vector attribute). |
| | vtkVectorNorm | Computes magnitude of vector data (adds scalar attribute). |
| | vtkCurvatures | Computes curvature measures (adds scalar attributes). |
| Temporal Filters | vtkTemporalInterpolator | Interpolates data between time steps. |
| | vtkTemporalShiftScale | Shifts and scales time values. |
| | vtkTemporalStatistics | Computes statistics over time (adds attributes). |
| Other | vtkAlgorithm | Base pipeline interface for algorithms (ports + execution model). |