rigid-body-motion¶
Python utilities for estimating and transforming rigid body motion.
Hosted on GitHub: https://github.com/phausamann/rigid-body-motion
Overview¶
This package provides a high-level interface for transforming arrays
describing motion of rigid bodies between different coordinate systems and
reference frames. The core of the reference frame handling is a fast
re-implementation of ROS’s tf2 library using numpy and numpy-quaternion.
The package also provides first-class support for xarray data types.
Installation¶
Latest version¶
rigid-body-motion can be installed via pip:
$ pip install rigid-body-motion
or via conda:
$ conda install -c phausamann -c conda-forge rigid-body-motion
Optional dependencies¶
Optional dependencies can be installed via pip or conda (just replace pip install with conda install).
Transformations can be sped up significantly by installing the numba library:
$ pip install numba
rigid-body-motion supports xarray data types:
$ pip install xarray
Loading the example datasets requires the pooch and netCDF4 libraries:
$ pip install pooch netcdf4
Plotting functions require matplotlib:
$ pip install matplotlib
ROS integration¶
The package also integrates with ROS, which can be installed quite
conveniently via conda
thanks to the amazing work by the people from
RoboStack. This even works on Windows and macOS! The necessary dependencies
are:
$ conda install -c conda-forge -c robostack ros-noetic-rospy ros-noetic-tf ros-noetic-tf2-ros ros-noetic-tf2-geometry-msgs ros-noetic-geometry-msgs ros-noetic-visualization-msgs python-orocos-kdl boost=1.74
Note that these packages are only available for Python 3.6 and 3.8.
In order to use RViz for visualization you need to install that as well:
$ conda install -c conda-forge -c robostack ros-noetic-rviz
However, since ROS can communicate across conda environments, we recommend installing RViz in a dedicated environment:
$ conda create -n ros -c conda-forge -c robostack ros-noetic-rviz
After installing, you can spin up a roscore:
$ conda activate ros
$ roscore
and, from a second terminal, launch RViz:
$ conda activate ros
$ rviz
Troubleshooting¶
In order to make sure all ROS dependencies have been set up correctly, run:
$ python -m rigid_body_motion.ros.check_install
If this is not successful, you need to fix some of your dependencies. The following outlines solutions for common issues:
Problem: boost version
ImportError: libboost_thread.so.1.74.0: cannot open shared object file: No such file or directory
Solution: install correct boost version
$ conda install -c conda-forge boost=1.74
Replace 1.74 with whatever version the traceback tells you is missing.
Example environments¶
We recommend using rigid_body_motion in a dedicated conda environment.
For the examples in the user guide, we provide an environment file that you can download and set up with:
$ conda env create -f example-env.yml
There’s also an example environment that includes the necessary ROS packages (here) that you can set up with:
$ conda env create -f example-env-ros.yml
If you download the examples as Jupyter notebooks, it is sufficient to have the Jupyter notebook server installed in your base environment alongside nb_conda:
$ conda install -n base nb_conda
Now you can open any of the example notebooks, go to Kernel > Change kernel and select Python [conda env:rbm-examples] (or Python [conda env:rbm-examples-ros]).
Pre-release version¶
The latest pre-release can be installed from the GitHub master branch:
$ pip install git+https://github.com/phausamann/rigid-body-motion.git
You can download this guide as a Jupyter notebook.
Reference frames¶
rigid_body_motion provides a flexible high-performance framework for working offline with motion data. The core of this framework is a mechanism for constructing trees of both static and dynamic reference frames that supports automatic lookup and application of transformations across the tree.
Note
The following examples require the matplotlib library.
[1]:
import numpy as np
import rigid_body_motion as rbm
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (6, 6)
Static frames¶
We will begin by defining a world reference frame using the ReferenceFrame class:
[2]:
rf_world = rbm.ReferenceFrame("world")
Now we can add a second reference frame as a child of the world frame. This frame is translated by 5 meters in the x direction and rotated 90° around the z axis. Note that rotations are represented as unit quaternions by default.
[3]:
rf_observer = rbm.ReferenceFrame(
    "observer",
    parent=rf_world,
    translation=(5, 0, 0),
    rotation=(np.sqrt(2) / 2, 0, 0, np.sqrt(2) / 2),
)
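As an aside (not part of the original example), this quaternion is simply a 90° rotation around z written in scalar-first (w, x, y, z) form, with \(\cos(\theta/2)\) as the scalar part and \(\sin(\theta/2)\) times the rotation axis as the vector part:
import numpy as np

# a rotation by theta around the unit axis (ax, ay, az) corresponds to the
# quaternion (cos(theta/2), sin(theta/2)*ax, sin(theta/2)*ay, sin(theta/2)*az)
theta = np.pi / 2  # 90 degrees around the z axis
(np.cos(theta / 2), 0, 0, np.sin(theta / 2))  # (0.7071..., 0, 0, 0.7071...)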
We can show the reference frame tree with the render_tree function:
[4]:
rbm.render_tree(rf_world)
world
└── observer
It is also possible to show a 3d plot of static reference frames with plot.reference_frame():
[5]:
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame(rf_world, ax=ax)
rbm.plot.reference_frame(rf_observer, rf_world, ax=ax)
fig.tight_layout()

To facilitate referring to previously defined frames, the library has a registry where frames can be stored by name with ReferenceFrame.register():
[6]:
rf_world.register()
rf_observer.register()
rbm.registry
[6]:
{'world': <ReferenceFrame 'world'>, 'observer': <ReferenceFrame 'observer'>}
Transformations¶
Now that we’ve set up a basic tree, we can use it to transform motion between reference frames. We use the lookup_transform() method to obtain the transformation from the world to the observer frame:
[7]:
t, r = rbm.lookup_transform(outof="world", into="observer")
This transformation consists of a translation \(t\):
[8]:
t
[8]:
array([0., 5., 0.])
and a rotation \(r\):
[9]:
r
[9]:
array([ 0.70710678, 0. , 0. , -0.70710678])
Position¶
rigid_body_motion uses the convention that a transformation is a rotation followed by a translation. Here, applying the transformation to a point \(p_W\) expressed with respect to (wrt) the world frame \(W\) yields the point wrt the observer frame \(O\):
\[p_O = \operatorname{rot}(r, p_W) + t\]
The \(\operatorname{rot}()\) function denotes the rotation of a vector by a quaternion.
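For reference (our addition, a minimal sketch rather than the library’s implementation), rotating a vector by a unit quaternion amounts to the conjugation \(q \cdot (0, v) \cdot q^{-1}\), which we can spell out with the numpy-quaternion package that rigid_body_motion builds on:
import numpy as np
import quaternion  # the numpy-quaternion package

def rot(q, v):
    """Rotate vector v by unit quaternion q via conjugation q * (0, v) * q^-1."""
    q = np.quaternion(*q)
    v = np.quaternion(0.0, *v)
    return quaternion.as_float_array(q * v * q.conjugate())[1:]

# a 90° rotation around z maps the x unit vector onto the y unit vector
rot((np.sqrt(2) / 2, 0, 0, np.sqrt(2) / 2), (1, 0, 0))  # ~array([0., 1., 0.])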
Let’s assume we have a rigid body located at 2 meters in the x direction from the origin of the world reference frame:
[10]:
p_body_world = np.array((2, 0, 0))
We can add the body position to the plot with plot.points():
[11]:
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame(rf_world, ax=ax)
rbm.plot.reference_frame(rf_observer, rf_world, ax=ax)
rbm.plot.points(p_body_world, ax=ax, fmt="yo")
fig.tight_layout()

We can use the above formula to transform the position of the body into the observer frame. The rotate_vectors() method implements the rotation of a vector by a quaternion:
[12]:
p_body_observer = rbm.rotate_vectors(r, p_body_world) + t
p_body_observer
[12]:
array([0., 3., 0.])
As expected, the resulting position of the body is 3 meters from the observer in the y direction. For convenience, the transform_points() method performs all of the above steps:
Lookup of the frames by name in the registry (if applicable)
Computing the transformation from the source to the target frame
Applying the transformation to the point(s)
[13]:
p_body_observer = rbm.transform_points(p_body_world, outof="world", into="observer")
p_body_observer
[13]:
array([0., 3., 0.])
Orientation¶
Orientations expressed as quaternions are transformed by quaternion multiplication:
\[o_O = r \cdot o_W\]
This multiplication is implemented in the qmul() function, to which you can pass an arbitrary number of quaternions to multiply. Assuming the body is oriented in the same direction as the world frame, transforming the orientation into the observer frame results in a rotation around the yaw axis:
[14]:
o_body_world = np.array((1, 0, 0, 0))
rbm.qmul(r, o_body_world)
[14]:
array([ 0.70710678, 0. , 0. , -0.70710678])
We can add the orientation to the plot with plot.quaternions():
[15]:
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame(rf_world, ax=ax)
rbm.plot.reference_frame(rf_observer, rf_world, ax=ax)
rbm.plot.points(p_body_world, ax=ax, fmt="yo")
rbm.plot.quaternions(o_body_world, base=p_body_world, ax=ax)
fig.tight_layout()

Again, for convenience, the transform_quaternions() function can be used in the same way as transform_points():
[16]:
rbm.transform_quaternions(o_body_world, outof="world", into="observer")
[16]:
array([ 0.70710678, 0. , 0. , -0.70710678])
Vectors¶
Let’s assume the body moves in the x direction with a velocity of 1 m/s:
[17]:
v_body_world = np.array((1, 0, 0))
We can add the velocity to the plot with plot.vectors():
[18]:
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame(rf_world, ax=ax)
rbm.plot.reference_frame(rf_observer, rf_world, ax=ax)
rbm.plot.points(p_body_world, ax=ax, fmt="yo")
rbm.plot.vectors(v_body_world, base=p_body_world, ax=ax, color="y")
fig.tight_layout()

From the point of view of the observer, the body moves with the same speed, but in the negative y direction. Therefore, we need to apply a coordinate transformation to represent the velocity vector in the observer frame:
[19]:
rbm.rotate_vectors(r, v_body_world)
[19]:
array([ 0., -1., 0.])
Like before, the transform_vectors() function can be used in the same way as transform_points():
[20]:
rbm.transform_vectors(v_body_world, outof="world", into="observer")
[20]:
array([ 0., -1., 0.])
Moving frames¶
Now, let’s assume that the body moves from the origin of the world frame to the origin of the observer frame in 5 steps:
[21]:
p_body_world = np.zeros((5, 3))
p_body_world[:, 0] = np.linspace(0, 5, 5)
[22]:
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame(rf_world, ax=ax)
rbm.plot.reference_frame(rf_observer, rf_world, ax=ax)
rbm.plot.points(p_body_world, ax=ax, fmt="yo-")
fig.tight_layout()

We will now attach a reference frame to the moving body to explain the handling of moving reference frames. For this, we need to associate the positions of the body with corresponding timestamps:
[23]:
ts_body = np.arange(5)
Let’s construct the moving body frame and add it to the registry. We will use the register_frame() convenience method:
[24]:
rbm.register_frame("body", translation=p_body_world, timestamps=ts_body, parent="world")
rbm.render_tree("world")
world
├── observer
└── body
If we transform a static point from the world into the body frame its position will change over time, which is why transform_points() will return an array of points even though we pass only a single point:
[25]:
rbm.transform_points((2, 0, 0), outof="world", into="body")
[25]:
array([[ 2.  ,  0.  ,  0.  ],
       [ 0.75,  0.  ,  0.  ],
       [-0.5 ,  0.  ,  0.  ],
       [-1.75,  0.  ,  0.  ],
       [-3.  ,  0.  ,  0.  ]])
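Since the body frame is not rotated in this example, the result is simply the position of the point minus the body position at each timestamp, which we can verify by hand (our addition):
# the body frame is translation-only here, so transforming into it just
# subtracts the body position at each timestamp
np.array((2, 0, 0)) - p_body_world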
One of the central features of the reference frame mechanism is its ability to consolidate arrays of timestamped motion even when the timestamps don’t match. To illustrate this, let’s create a second body moving in the y direction in world coordinates whose timestamps are offset by 0.5 seconds compared to the first body:
[26]:
p_body2_world = np.zeros((5, 3))
p_body2_world[:, 1] = np.linspace(0, 2, 5)
ts_body2 = ts_body - 0.5
[27]:
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame(rf_world, ax=ax)
rbm.plot.reference_frame(rf_observer, rf_world, ax=ax)
rbm.plot.points(p_body_world, ax=ax, fmt="yo-")
rbm.plot.points(p_body2_world, ax=ax, fmt="co-")
fig.tight_layout()

Transforming the position of the second body into the frame of the first body still works, despite the timestamp mismatch:
[28]:
p_body2_body, ts_body2_body = rbm.transform_points(
    p_body2_world,
    outof="world",
    into="body",
    timestamps=ts_body2,
    return_timestamps=True,
)
This is because, behind the scenes, transform_points() matches the timestamps of the array to transform with those of the transformation across the tree by:
1. computing the range of timestamps for which the transformation is defined,
2. intersecting that range with the range of timestamps to be transformed, and
3. interpolating the resulting transformation across the tree to match the timestamps of the array.
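The following toy sketch (our addition, not the library’s actual implementation) illustrates the idea with plain numpy, resampling a hypothetical transform’s x translation onto the intersected timestamps:
# hypothetical transform: x translation t_tf defined at timestamps ts_tf,
# and an array to transform with offset timestamps ts_arr
ts_tf = np.arange(5)
t_tf = np.linspace(0, 5, 5)
ts_arr = np.arange(5) - 0.5

# steps 1 and 2: keep only array timestamps inside the transform's range
valid = (ts_arr >= ts_tf[0]) & (ts_arr <= ts_tf[-1])
ts_out = ts_arr[valid]

# step 3: interpolate the transform onto the remaining timestamps
t_interp = np.interp(ts_out, ts_tf, t_tf)
ts_out, t_interp  # the first sample (-0.5) falls outside the range and is dropped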
Note that we specified return_timestamps=True to obtain the timestamps of the transformed array, as they are different from the original timestamps. Let’s plot the position of both bodies wrt the world frame as well as the position of the second body wrt the first body to see how the timestamp matching works:
[29]:
fig, axarr = plt.subplots(3, 1, sharex=True, sharey=True)
axarr[0].plot(ts_body, p_body_world, "*-")
axarr[0].set_ylabel("Position (m)")
axarr[0].set_title("First body wrt world frame")
axarr[0].grid("on")
axarr[1].plot(ts_body2, p_body2_world, "*-")
axarr[1].set_ylabel("Position (m)")
axarr[1].set_title("Second body wrt world frame")
axarr[1].grid("on")
axarr[2].plot(ts_body2_body, p_body2_body, "*-")
axarr[2].set_xlabel("Time (s)")
axarr[2].set_ylabel("Position (m)")
axarr[2].set_title("Second body wrt first body frame")
axarr[2].grid("on")
axarr[2].legend(["x", "y", "z"], loc="upper left")
fig.tight_layout()

As you can see, the resulting timestamps are the same as those of the second body; however, the first sample has been dropped because the transformation is not defined there.
You can download this guide as a Jupyter notebook.
Linear and angular velocity¶
We have seen how to look up and transform positions and orientations across reference frames in the previous section. Working with velocities adds some complexity that will be explained in this section.
Note
The following examples require the matplotlib library.
[1]:
import numpy as np
import rigid_body_motion as rbm
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (6, 6)
Setting up¶
Like in the previous section, we first set up the world frame:
[2]:
rbm.register_frame("world")
For simplicity, all reference frames will share the same timestamps:
[3]:
n_timestamps = 5
ts = np.linspace(0, 5, n_timestamps)
We define a body moving 5 meters in the x direction:
[4]:
p_body_world = np.zeros((n_timestamps, 3))
p_body_world[:, 0] = np.linspace(0, 5, n_timestamps)
In addition, this body will rotate 45° (\(\pi/4\)) around the z axis. We use helper functions from the quaternion package for this:
[5]:
from quaternion import as_float_array, from_euler_angles
o_body_world = as_float_array(
    from_euler_angles(np.linspace(0, np.pi / 4, n_timestamps), 0, 0)
)
Now we can attach a reference frame to this body:
[6]:
rbm.register_frame(
    "body",
    translation=p_body_world,
    rotation=o_body_world,
    timestamps=ts,
    parent="world",
)
We now define a second moving body whose motion we describe wrt the frame of the first body. It is located at 1 meter in the y direction and moves 1 meter in the negative x direction.
[7]:
p_body2_body = np.zeros((n_timestamps, 3))
p_body2_body[:, 0] = -np.linspace(0, 1, n_timestamps)
p_body2_body[:, 1] = 1
This body also rotates, but this time around the y axis:
[8]:
o_body2_body = as_float_array(
    from_euler_angles(0, np.linspace(0, np.pi / 4, n_timestamps), 0)
)
Now we can register a frame attached to the second body as a child frame of the first body frame:
[9]:
rbm.register_frame(
    "body2",
    translation=p_body2_body,
    rotation=o_body2_body,
    timestamps=ts,
    parent="body",
)
Let’s plot the position and orientation of both bodies wrt the world frame. We use the lookup_pose() method to obtain the pose (position and orientation) of the second body in the world frame:
[10]:
p_body2_world, o_body2_world = rbm.lookup_pose("body2", "world")
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(121, projection="3d")
rbm.plot.points(p_body_world, ax=ax, fmt="yo-")
rbm.plot.quaternions(o_body_world, base=p_body_world, ax=ax)
ax = fig.add_subplot(122, projection="3d", sharex=ax, sharey=ax, sharez=ax)
rbm.plot.points(p_body2_world, ax=ax, fmt="co-")
rbm.plot.quaternions(o_body2_world, base=p_body2_world, ax=ax)
fig.tight_layout()

Linear velocity¶
The linear velocity of a frame wrt another frame can be calculated with the lookup_linear_velocity() method:
[11]:
v_body2_body = rbm.lookup_linear_velocity("body2", "body")
v_body2_body[0]
[11]:
array([-0.2, 0. , 0. ])
As expected, the velocity of the second body wrt the first body \(v_{B2/B}\) is 0.2 m/s in the negative x direction. Next, we will see how we can obtain the velocity of the second body wrt the world frame \(W\).
Transforming the reference frame¶
When transforming linear velocities across frames, we need to use the so-called “three term velocity formula”. In this case, we have the velocity of the moving frame \(B2\) wrt the reference frame \(B\). We can transform the current reference frame \(B\) to the new frame \(W\) as follows:
\[v_{B2/W} = v_{B2/B} + v_{B/W} + \omega_{B/W} \times t_{B2/B}\]
In addition to the velocities between the reference frames \(B\) and \(W\), this formula also requires the translation \(t_{B2/B}\) between the moving frame \(B2\) and the original reference frame \(B\). This is why we also need to specify the moving_frame argument when using transform_linear_velocity():
[12]:
v_body2_world = rbm.transform_linear_velocity(
    v_body2_body, outof="body", into="world", moving_frame="body2", timestamps=ts,
)
Alternatively, we can also use lookup_linear_velocity() to look up the position of \(B2\) wrt \(W\) and differentiate:
[13]:
v_body2_world_lookup = rbm.lookup_linear_velocity("body2", "world")
The following short helper function can be used to compare the two methods:
[14]:
def compare_velocities(transform, lookup, timestamps=None, mode="linear"):
    """Compare velocities from transform and lookup."""
    fig, axarr = plt.subplots(2, 1, sharex=True, sharey=True)
    ylabel = f"{mode.capitalize()} velocity ({'rad/s' if mode == 'angular' else 'm/s'})"
    axarr[0].plot(timestamps, transform)
    axarr[0].set_ylabel(ylabel)
    axarr[0].set_title("Transform")
    axarr[1].plot(timestamps, lookup)
    axarr[1].set_xlabel("Time (s)")
    axarr[1].set_ylabel(ylabel)
    axarr[1].set_title("Lookup")
    axarr[1].legend(["x", "y", "z"])
    fig.tight_layout()
[15]:
compare_velocities(v_body2_world, v_body2_world_lookup, ts)

We see a slight discrepancy due to numerical inconsistencies in the derivative calculation. However, these errors are reduced at higher sampling rates.
Transforming the moving frame¶
In a different scenario, we might be given the velocity of the first body wrt the world frame and want to transform the moving frame from \(B\) to \(B2\) to compute the velocity of the second body wrt \(W\). The same three-term formula applies, although this time the input is \(v_{B/W}\).
When using transform_linear_velocity() we need to be careful that the velocity is represented in the coordinates of the frame we want to transform. Therefore, \(v_{B/W}\) has to be represented in \(B\):
[16]:
v_body_world = rbm.lookup_linear_velocity("body", "world", represent_in="body")
Now we can use transform_linear_velocity() with what="moving_frame" to transform from \(B\) to \(B2\). Since the method needs to look up \(\omega_{B/W}\), this time we need to provide the reference_frame of the velocity:
[17]:
v_body2_world = rbm.transform_linear_velocity(
    v_body_world,
    outof="body",
    into="body2",
    reference_frame="world",
    what="moving_frame",
    timestamps=ts,
)
Let’s compare this result against the lookup method again. Note that the transformation also changes the representation frame to the new frame, which is why the resulting velocity is different compared to the first case:
[18]:
v_body2_world_lookup = rbm.lookup_linear_velocity("body2", "world", represent_in="body2")
[19]:
compare_velocities(v_body2_world, v_body2_world_lookup, ts)

Angular velocity¶
Angular velocities can be looked up with lookup_angular_velocity():
[20]:
w_body2_body = rbm.lookup_angular_velocity("body2", "body")
w_body2_body[0]
[20]:
array([0. , 0.15708158, 0. ])
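This matches the rotation we constructed, \(\pi/4\) radians around the y axis over 5 seconds (our addition as a sanity check):
np.pi / 4 / 5  # ~0.1571 rad/s, matching the looked-up value up to numerical error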
Transforming the reference frame¶
Transforming the reference frame of an angular velocity is similar to the case of linear velocity, although the formula is a lot simpler:
\[\omega_{B2/W} = \omega_{B2/B} + \omega_{B/W}\]
Because of this, transform_angular_velocity() also has a simpler interface:
[21]:
w_body2_world = rbm.transform_angular_velocity(
    w_body2_body, outof="body", into="world", timestamps=ts
)
Comparing the transform to the lookup shows no differences in the result:
[22]:
w_body2_world_lookup = rbm.lookup_angular_velocity("body2", "world")
[23]:
compare_velocities(w_body2_world, w_body2_world_lookup, ts, mode="angular")

Transforming the moving frame¶
As before, we can also transform the moving frame:
[24]:
w_body_world = rbm.lookup_angular_velocity("body", "world", represent_in="body")
[25]:
w_body2_world = rbm.transform_angular_velocity(
    w_body_world, outof="body", into="body2", what="moving_frame", timestamps=ts,
)
[26]:
w_body2_world_lookup = rbm.lookup_angular_velocity("body2", "world", represent_in="body2")
[27]:
compare_velocities(w_body2_world, w_body2_world_lookup, ts, mode="angular")

You can download this guide as a Jupyter notebook.
Estimating transforms from data¶
It is often necessary to estimate transformations between rigid bodies that are not explicitly known. This happens for example when the motion of the same rigid body is measured by different tracking systems that represent their data in different world frames.
Note
The following examples require the matplotlib library.
[1]:
import matplotlib.pyplot as plt
import numpy as np
import rigid_body_motion as rbm
plt.rcParams["figure.figsize"] = (10, 5)
[2]:
rbm.register_frame("world")
Shortest arc rotation¶
Let’s assume we have two vectors \(v_1\) and \(v_2\):
[3]:
v1 = (1, 0, 0)
v2 = (np.sqrt(2) / 2, np.sqrt(2) / 2, 0)
[4]:
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame("world", ax=ax)
rbm.plot.vectors(v1, ax=ax, color="y")
rbm.plot.vectors(v2, ax=ax, color="c")
fig.tight_layout()

The quaternion \(r\) that rotates \(v_1\) in the same direction as \(v_2\), i.e., that satisfies:
\[\frac{\operatorname{rot}(r, v_1)}{\lVert v_1 \rVert} = \frac{v_2}{\lVert v_2 \rVert}\]
can be computed with the shortest_arc_rotation() method:
[5]:
rbm.shortest_arc_rotation(v1, v2)
[5]:
array([0.92387953, 0. , 0. , 0.38268343])
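As a quick sanity check (our addition, with r_arc as a hypothetical name), rotating \(v_1\) by the returned quaternion reproduces the direction of \(v_2\):
r_arc = rbm.shortest_arc_rotation(v1, v2)
rbm.rotate_vectors(r_arc, v1)  # ~array([0.7071, 0.7071, 0.]), i.e. the direction of v2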
The method also works with arrays of vectors. Let’s first construct an array of progressive rotations around the yaw axis with the from_euler_angles() method:
[6]:
r = rbm.from_euler_angles(yaw=np.linspace(0, np.pi / 8, 10))
Now we can rotate \(v_2\) with \(r\). Because we rotate a single vector with multiple quaternions, we have to specify one_to_one=False:
[7]:
v2_arr = rbm.rotate_vectors(r, v2, one_to_one=False)
[8]:
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame("world", ax=ax)
rbm.plot.vectors(v1, ax=ax, color="y")
rbm.plot.vectors(v2_arr, ax=ax, color="c", alpha=0.3)
fig.tight_layout()

shortest_arc_rotation() now returns an array of quaternions:
[9]:
rbm.shortest_arc_rotation(v1, v2_arr)
[9]:
array([[0.92387953, 0.        , 0.        , 0.38268343],
       [0.91531148, 0.        , 0.        , 0.40274669],
       [0.90630779, 0.        , 0.        , 0.42261826],
       [0.89687274, 0.        , 0.        , 0.44228869],
       [0.88701083, 0.        , 0.        , 0.46174861],
       [0.87672676, 0.        , 0.        , 0.48098877],
       [0.8660254 , 0.        , 0.        , 0.5       ],
       [0.85491187, 0.        , 0.        , 0.51877326],
       [0.84339145, 0.        , 0.        , 0.53729961],
       [0.83146961, 0.        , 0.        , 0.55557023]])
Best fit rotation¶
In a different scenario, we might have two vectors that are offset by a fixed rotation and are rotating in space:
[10]:
v1_arr = rbm.rotate_vectors(r, v1, one_to_one=False)
[11]:
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame("world", ax=ax)
rbm.plot.vectors(v1_arr, ax=ax, color="y", alpha=0.3)
rbm.plot.vectors(v2_arr, ax=ax, color="c", alpha=0.3)
fig.tight_layout()

The rotation between the vectors can be found with a least-squares minimization:
\[\hat{r} = \arg\min_r \sum_i \lVert \operatorname{rot}(r, v_{1,i}) - v_{2,i} \rVert^2\]
This is implemented in the best_fit_rotation() method:
[12]:
rbm.best_fit_rotation(v1_arr, v2_arr)
[12]:
array([ 0.92387953, -0. , -0. , 0.38268343])
Best fit transform¶
In yet another case, we might have two arrays of points (e.g. point clouds) with a fixed transform (rotation and translation) between them:
[13]:
p1_arr = 0.1 * np.random.randn(100, 3)
[14]:
t = np.array((1, 1, 0))
r = rbm.from_euler_angles(yaw=np.pi / 4)
p2_arr = rbm.rotate_vectors(r, p1_arr, one_to_one=False) + t
[15]:
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame("world", ax=ax)
rbm.plot.points(p1_arr, ax=ax, fmt="yo")
rbm.plot.points(p2_arr, ax=ax, fmt="co")
fig.tight_layout()

To estimate this transform, we can minimize:
\[\hat{t}, \hat{r} = \arg\min_{t,r} \sum_i \lVert \operatorname{rot}(r, p_{1,i}) + t - p_{2,i} \rVert^2\]
This algorithm (also called point set registration) is implemented in the best_fit_transform() method:
[16]:
rbm.best_fit_transform(p1_arr, p2_arr)
[16]:
(array([1.00000000e+00, 1.00000000e+00, 1.73472348e-18]),
 array([ 9.23879533e-01, -0.00000000e+00, -2.35922393e-16,  3.82683432e-01]))
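To verify the estimate (our addition; t_est and r_est are hypothetical names), we can apply the recovered transform to the first point cloud and compare against the second:
t_est, r_est = rbm.best_fit_transform(p1_arr, p2_arr)
p2_est = rbm.rotate_vectors(r_est, p1_arr, one_to_one=False) + t_est
np.allclose(p2_est, p2_arr)  # True, up to numerical precision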
Iterative closest point¶
The above algorithm only works for known correspondences between points \(p_1\) and \(p_2\) (i.e., each point in p1_arr corresponds to the point with the same index in p2_arr). This is not always the case - in fact, something like a point cloud from different laser scans of the same object might yield sets of completely different points. An approximate transform can still be found with the iterative closest point (ICP) algorithm. We can simulate the case of unknown correspondences by randomly permuting the second array:
[17]:
rbm.iterative_closest_point(p1_arr, np.random.permutation(p2_arr))
[17]:
(array([1.00000000e+00, 1.00000000e+00, 1.28369537e-16]),
array([9.23879533e-01, 7.60327823e-17, 5.55111512e-17, 3.82683432e-01]))
Note that there is a discrepancy in the estimated transform compared to the best fit transform. ICP usually yields better results with a larger number of points that have more spatial structure.
You can download this guide as a Jupyter notebook.
Working with xarray¶
rigid_body_motion provides first-class support for xarray data types. xarray has several features that make working with motion data convenient:
xarray is designed to combine physical data with metadata such as timestamps.
xarray’s Dataset class can be used as a container for timestamped transformations.
Arbitrary metadata can be attached to arrays to keep track of e.g. reference frames.
We recommend familiarizing yourself with xarray before working through this tutorial. Their documentation is an excellent resource for that.
Note
The following examples require the matplotlib, xarray and netcdf4 libraries.
[1]:
import rigid_body_motion as rbm
import xarray as xr
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (10, 5)
Loading example data¶
rigid_body_motion includes a recording of head and eye tracking data (using the Intel RealSense T265 as the head tracker and the Pupil Core eye tracker). This data can be loaded with xr.open_dataset:
[2]:
head = xr.open_dataset(rbm.example_data["head"])
head
Downloading data from 'https://github.com/phausamann/rbm-data/raw/main/head.nc' to file '/home/docs/.cache/pooch/58826f066ba31bf878bd1aeb5305d7d4-head.nc'.
[2]:
<xarray.Dataset>
Dimensions:           (cartesian_axis: 3, quaternion_axis: 4, time: 66629)
Coordinates:
  * time              (time) datetime64[ns] 2020-02-02T00:27:14.300365210 ...
  * cartesian_axis    (cartesian_axis) object 'x' 'y' 'z'
  * quaternion_axis   (quaternion_axis) object 'w' 'x' 'y' 'z'
Data variables:
    position          (time, cartesian_axis) float64 ...
    linear_velocity   (time, cartesian_axis) float64 ...
    angular_velocity  (time, cartesian_axis) float64 ...
    confidence        (time) float64 3.0 3.0 3.0 3.0 3.0 ... 3.0 3.0 3.0 3.0
    orientation       (time, quaternion_axis) float64 ...
The dataset includes position and orientation as well as angular and linear velocity of the tracker. Additionally, it includes the physical dimensions time, cartesian_axis (for position and velocities) and quaternion_axis (for orientation). Let’s have a look at the position data:
[3]:
head.position
[3]:
<xarray.DataArray 'position' (time: 66629, cartesian_axis: 3)>
[199887 values with dtype=float64]
Coordinates:
  * time            (time) datetime64[ns] 2020-02-02T00:27:14.300365210 ...
  * cartesian_axis  (cartesian_axis) object 'x' 'y' 'z'
Attributes:
    long_name:  Position
    units:      m
As you can see, this is a two-dimensional array (called DataArray in xarray) with timestamps and explicit names for the physical axes in cartesian coordinates.
xarray also provides a straightforward way of plotting:
[4]:
head.linear_velocity.plot.line(x="time")
[4]:
[<matplotlib.lines.Line2D at 0x7fe145a972d0>,
<matplotlib.lines.Line2D at 0x7fe145a7fc10>,
<matplotlib.lines.Line2D at 0x7fe143a0df90>]

The example recording is from a test subject wearing the combined head/eye tracker while walking twice around a building. The head tracking data is represented in a world-fixed reference frame whose origin is at the head tracker’s location at the start of the recording.
In the next step, we will leverage rigid_body_motion’s powerful reference frame mechanism to transform the linear velocity from world to tracker coordinates.
Reference frame interface¶
As in the previous tutorial, we begin by registering the world frame as root of the reference frame tree:
[5]:
rbm.register_frame("world")
Timestamped reference frames can be easily constructed from Dataset instances with the ReferenceFrame.from_dataset() method. We need to specify the variables representing translation and rotation of the reference frame as well as the name of the coordinate containing timestamps and the parent frame:
[6]:
rf_head = rbm.ReferenceFrame.from_dataset(
    head, "position", "orientation", "time", parent="world", name="head"
)
Let’s register this reference frame so that we can use it easily for transformations:
[7]:
rf_head.register()
Now we can use transform_linear_velocity() to transform the linear velocity to be represented in tracker coordinates:
[8]:
v_head = rbm.transform_linear_velocity(
    head.linear_velocity, outof="world", into="head", what="representation_frame"
)
v_head.plot.line(x="time")
[8]:
[<matplotlib.lines.Line2D at 0x7fe1439c01d0>,
<matplotlib.lines.Line2D at 0x7fe1420f18d0>,
<matplotlib.lines.Line2D at 0x7fe1420f1390>]

We now see a mean linear velocity of ~1.4 m/s in the negative z direction. This is due to the coordinate system defined by the RealSense T265 where the positive z direction is defined towards the back of the device.
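As a quick check of this figure (our addition), we can average the transformed velocity over time:
# mean velocity per axis in tracker coordinates; expect roughly -1.4 m/s along z
v_head.mean("time")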
You can download this guide as a Jupyter notebook.
ROS integration¶
rigid_body_motion supports certain ROS functionality, provided the Python environment has been set up with the required ROS packages. This guide assumes that you are at least somewhat familiar with ROS concepts such as nodes, publishers/subscribers and messages. If not, the ROS tutorials are a good place to start.
You also need to set up a couple of dependencies which can be done very conveniently if you are using an Anaconda Python distribution. See the ROS dependencies installation guide for further information.
Note
The following examples require the pooch, xarray, netcdf4 and ipywidgets libraries.
[1]:
import numpy as np
import pandas as pd
import rigid_body_motion as rbm
Loading data from rosbag files¶
Data can be loaded from rosbag files into numpy arrays. So far, geometry_msgs/TransformStamped and nav_msgs/Odometry messages are supported. This is done through the RosbagReader class:
[2]:
reader = rbm.ros.RosbagReader(rbm.example_data["rosbag"])
The reader can be used as a context manager to facilitate opening and closing of the rosbag. The get_topics_and_types method returns a dict with topic names and corresponding message types:
[3]:
with reader:
    info = reader.get_topics_and_types()
info
[3]:
{'/pupil/left_eye/transform': 'geometry_msgs/TransformStamped',
'/pupil/right_eye/transform': 'geometry_msgs/TransformStamped',
'/t265/transform': 'geometry_msgs/TransformStamped'}
Note
If you get a ModuleNotFoundError: No module named 'rosbag' at this point, there is an issue with the ROS dependencies. See the ROS dependencies troubleshooting guide to fix this.
The data included in the example rosbag is from a head/eye tracking study and contains head-in-world pose estimated by the Intel RealSense T265 as well as eye-in-head pose for both eyes estimated by the Pupil Core eye tracker.
The load_messages method returns a dict with the data from a specified topic:
[4]:
with reader:
    head = reader.load_messages("/t265/transform")
head
[4]:
{'timestamps': array([1.58060323e+09, 1.58060323e+09, 1.58060323e+09, ...,
1.58060357e+09, 1.58060357e+09, 1.58060357e+09]),
'position': array([[15.9316, 0.8211, 10.5429],
[15.9354, 0.8208, 10.5382],
[15.9393, 0.8204, 10.5335],
...,
[29.8883, 2.8952, 7.6317],
[29.8888, 2.8943, 7.6249],
[29.8892, 2.8935, 7.6182]]),
'orientation': array([[-0.9687, 0.0917, 0.2306, 0.0039],
[-0.969 , 0.0915, 0.2295, 0.005 ],
[-0.9693, 0.0912, 0.2285, 0.0061],
...,
[-0.9915, 0.0915, 0.0929, -0.0022],
[-0.9914, 0.0922, 0.0927, -0.0017],
[-0.9913, 0.0932, 0.0925, -0.001 ]])}
Now we can construct a reference frame tree with this data:
[5]:
rbm.register_frame("world")
The T265 uses the VR coordinate convention (x right, y up, z towards the back of the device) which differs from the default ROS convention (x forward, y left, z up):
[6]:
R_T265_ROS = np.array([[0.0, 0.0, -1.0], [-1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
rbm.ReferenceFrame.from_rotation_matrix(
    R_T265_ROS, parent="world", name="t265/world"
).register()
The recorded data describes the transformation from the T265 world frame to the tracker frame:
[7]:
rbm.register_frame(
    "t265/tracker",
    parent="t265/world",
    translation=head["position"],
    rotation=head["orientation"],
    timestamps=pd.to_datetime(head["timestamps"], unit="s"),
)
For visualization, we define an additional "head" frame that represents the tracker pose with the ROS coordinate convention:
[8]:
rbm.ReferenceFrame.from_rotation_matrix(
    R_T265_ROS, parent="t265/tracker", name="head", inverse=True,
).register()
[9]:
rbm.render_tree("world")
world
└── t265/world
└── t265/tracker
└── head
Visualization with RViz¶
This section will show you how to broadcast reference frame transforms on the /tf topic as well as publish other messages that are useful for visualization in RViz. If you are not familiar with RViz and/or tf, check out the RViz user guide and the tf package documentation. You can download an .rviz file where all topics created in the following are already set up here.
We start by creating a node for this notebook with the init_node() method. This method will also automatically start a roscore when start_master=True and another ROS master isn’t already running:
[10]:
master = rbm.ros.init_node("rbm_vis", start_master=True)
started roslaunch server http://DESKTOP:41375/
ros_comm version 1.15.9
SUMMARY
========
PARAMETERS
* /rosdistro: noetic
* /rosversion: 1.15.9
NODES
auto-starting new master
process[master]: started with pid [15323]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to master
process[rosout-1]: started with pid [15333]
started core service [/rosout]
Transforms between reference frames can easily be broadcast on the /tf topic with the ReferenceFrameTransformBroadcaster class:
[11]:
tf_world_head = rbm.ros.ReferenceFrameTransformBroadcaster("head", base="world")
When calling publish, the transform between the world and head frames will be broadcast on the /tf topic. Since the head frame is a moving frame, this will broadcast the first valid transform between the two frames by default:
[12]:
tf_world_head.publish()
RViz should now show both frames:
The broadcaster stores all valid transforms between the two frames in the translation, rotation and timestamps attributes:
[13]:
tf_world_head.translation.shape
[13]:
(66629, 3)
You can broadcast the transform between two frames at different points in time by specifying an index into these arrays as an argument to publish:
[14]:
tf_world_head.publish(1000)
Note
When “going back in time”, i.e., broadcasting transforms with timestamps older than the latest broadcast timestamp, RViz will not update the tf display and you may get a TF_OLD_DATA warning in the console. When this happens, click on the “Reset” button in the lower left corner in RViz.
The entire transformation between moving frames can be visualized with the ReferenceFrameMarkerPublisher, which publishes the translation of all valid timestamps as a visualization_msgs/Marker message on the /<frame_name>/path topic:
[15]:
marker_publisher = rbm.ros.ReferenceFrameMarkerPublisher("head", base="world")
marker_publisher.publish()
[16]:
import rospy
rospy.get_published_topics("/head")
[16]:
[['/head/path', 'visualization_msgs/Marker']]
RViz should now show the trajectory of the head frame as a white line:
Next, we will load data from the Pupil Core eye tracker to demonstrate how multiple moving frames can be visualized. With the load_dataset method, the data is imported as an xarray.Dataset, and cache=True enables local caching in the netCDF4 format for faster loading.
[17]:
with reader:
    left_eye = reader.load_dataset("/pupil/left_eye/transform", cache=True)
    left_eye = left_eye.dropna("time")
The transform between the T265 and the Pupil Core scene camera was determined in the study with a camera calibration routine and is provided here as hard-coded parameters:
[18]:
t_t265_pupil = (24.5e-3, -29e-3, 0.0)
r_t265_pupil = (-0.00125, -1.0, 6.3463e-05, 3.977e-06)
rbm.register_frame(
    "pupil/scene_cam",
    parent="t265/tracker",
    translation=t_t265_pupil,
    rotation=r_t265_pupil,
)
The reference frame of the left eye is constructed from the Dataset:
[19]:
rbm.ReferenceFrame.from_dataset(
    left_eye,
    "position",
    "orientation",
    "time",
    parent="pupil/scene_cam",
    name="pupil/left_eye",
).register()
The eye tracker data uses yet another coordinate convention (x right, y down, z forward) which we need to take into account when visualizing the eye frame:
[20]:
R_PUPIL_ROS = np.array([[0.0, -1.0, 0.0], [0.0, 0.0, -1.0], [1.0, 0.0, 0.0]])
rbm.ReferenceFrame.from_rotation_matrix(
    R_PUPIL_ROS, parent="pupil/left_eye", name="left_eye"
).register(update=True)
[21]:
rbm.render_tree("world")
world
└── t265/world
└── t265/tracker
├── head
└── pupil/scene_cam
└── pupil/left_eye
└── left_eye
Now we create another broadcaster for the transform between head and eye frame. With publish_pose=True, the broadcaster also publishes a geometry_msgs/PoseStamped message on the /<frame_name>/pose topic. This message can be visualized in RViz as an arrow, which avoids cluttering for frames that are close together. In order to publish messages in sync with the world/head transform, we specify subscribe="head". This way, the broadcaster checks for new messages on the /tf topic where the child_frame_id is "head" and publishes its own transform with the closest timestamp:
[22]:
tf_head_left_eye = rbm.ros.ReferenceFrameTransformBroadcaster(
    "left_eye", base="head", publish_pose=True, subscribe="head"
)
Calling the spin method dispatches the broadcaster to a separate thread where it will keep checking for new world/head transforms:
[23]:
tf_head_left_eye.spin()
Finally, the play_publisher method provides a notebook widget to play back data from a broadcaster. With step=2 it broadcasts every second transform, and with speed=0.5 the data is played back at half the recorded speed:
[24]:
rbm.ros.play_publisher(tf_world_head, step=2, speed=0.5)
After pressing the play button, you should see the head frame moving along the white path, with the eye-in-head pose drawn as a yellow arrow:
If you have started the ROS master from Python, you should shut it down at the end:
[25]:
master.shutdown()
[rosout-1] killing on exit
[master] killing on exit
API Reference¶
Top-level functions¶
Module: rigid_body_motion
Transformations¶
Transform an array of vectors between reference frames.
Transform an array of points between reference frames.
Transform an array of quaternions between reference frames.
Transform motion between coordinate systems.
Transform an array of angular velocities between frames.
Transform an array of linear velocities between frames.
Reference Frames¶
A three-dimensional reference frame.
Register a new reference frame in the registry.
Remove a reference frame from the registry.
Clear the reference frame registry.
Render a reference frame tree.
Coordinate Systems¶
Transform cartesian to polar coordinates in two dimensions.
Transform polar to cartesian coordinates in two dimensions.
Transform cartesian to spherical coordinates in three dimensions.
Transform spherical to cartesian coordinates in three dimensions.
Lookups¶
Look up transformation from one frame to another.
Look up pose of one frame wrt a reference.
Estimate linear and angular velocity of a frame wrt a reference.
Estimate linear velocity of a frame wrt a reference.
Estimate angular velocity of a frame wrt a reference.
Estimators¶
Estimate linear velocity from a time series of translation.
Estimate angular velocity from a time series of rotations.
Estimate the shortest-arc rotation between two arrays of vectors.
Least-squares best-fit rotation between two arrays of vectors.
Least-squares best-fit transform between two arrays of vectors.
Iterative closest point algorithm matching two arrays of vectors.
Utility functions¶
Construct quaternions from Euler angles.
Quaternion inverse.
Quaternion multiplication.
Quaternion mean.
Quaternion interpolation.
Rotate an array of vectors by an array of quaternions.
Plotting¶
Module: rigid_body_motion.plot
Plot a 3D coordinate system representing a static reference frame.
Plot an array of 3D points.
Plot an array of quaternions.
Plot an array of 3D vectors.
ROS interface¶
Module: rigid_body_motion.ros
Reader for motion topics from rosbag files.
Writer for motion topics to rosbag files.
TF broadcaster for the transform of a reference frame wrt another.
Publisher for the translation of a reference frame wrt another.
Wrapper class for tf2_ros.Buffer.
Register a client node with the master.
Interactive widget for playing back messages from a publisher.
xarray Accessors¶
Quaternion interpolation.
Quaternion inverse.
Class member details¶
reference_frames¶
ReferenceFrame¶
Construct a reference frame from a Dataset.
Construct a reference frame from a translation DataArray.
Construct a reference frame from a rotation DataArray.
Construct a static reference frame from a rotation matrix.
Transform array of points from this frame to another.
Transform array of quaternions from this frame to another.
Transform array of vectors from this frame to another.
Transform array of angular velocities from this frame to another.
Transform array of linear velocities from this frame to another.
Look up the transformation from this frame to another.
Estimate angular velocity of this frame wrt a reference.
Estimate linear velocity of this frame wrt a reference.
Estimate linear and angular velocity of this frame wrt a reference.
Register this frame in the registry.
Remove this frame from the registry.
Development roadmap¶
The vision for this project is to provide a universal library for analysis of recorded motion. Some of the categories where a lot of features are still to be implemented are detailed below.
IO functions¶
Import routines for a variety of data formats from common motion capture and IMU systems.
Estimators¶
Common transform estimators such as iterative closest point (ICP).
Pandas support¶
The same metadata-aware mechanism for pandas as for xarray.
ROS integration¶
Leverage tools provided by ROS to supplement functionality of the library. One example would be 3D plotting with RViz. ROS has recently made its way to conda forge, and packages such as jupyter-ros facilitate integration with modern Python workflows.
Units¶
Support for unit conversions and unit-aware transforms. Possibly leveraging packages such as pint or unit support in xarray.
Contributing¶
Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.
You can contribute in many ways:
Types of Contributions¶
Report Bugs¶
Report bugs at https://github.com/phausamann/rigid-body-motion/issues.
If you are reporting a bug, please include:
Your operating system name and version.
Any details about your local setup that might be helpful in troubleshooting.
Detailed steps to reproduce the bug.
Fix Bugs¶
Look through the GitHub issues for bugs. Anything tagged with “bug” and “help wanted” is open to whoever wants to implement it.
Implement Features¶
Look through the GitHub issues for features. Anything tagged with “enhancement” and “help wanted” is open to whoever wants to implement it.
Write Documentation¶
rigid-body-motion could always use more documentation, whether as part of the official rigid-body-motion docs, in docstrings, or even on the web in blog posts, articles, and such.
Submit Feedback¶
The best way to send feedback is to file an issue at https://github.com/phausamann/rigid-body-motion/issues.
If you are proposing a feature:
Explain in detail how it would work.
Keep the scope as narrow as possible, to make it easier to implement.
Remember that this is a volunteer-driven project, and that contributions are welcome :)
Get Started!¶
Ready to contribute? Here’s how to set up rigid-body-motion for local development.
Fork the rigid-body-motion repo on GitHub.
Clone your fork locally:
$ git clone git@github.com:your_name_here/rigid-body-motion.git
The recommended way of setting up the project is in a conda environment:
$ conda env create
Create a branch for local development:
$ git checkout -b name-of-your-bugfix-or-feature
Now you can make your changes locally.
When you’re done making changes, check that your changes pass the tests:
$ conda activate rbm
$ pytest
The project uses black for code formatting, isort for import sorting and flake8 for linting. You can set up these checks as pre-commit hooks:
$ conda activate rbm
$ conda install pre-commit
$ pre-commit install
Note that the project currently uses black version 19.10b0.
Commit your changes and push your branch to GitHub:
$ git add .
$ git commit -m "Your detailed description of your changes."
$ git push origin name-of-your-bugfix-or-feature
Submit a pull request through the GitHub website.
Pull Request Guidelines¶
Before you submit a pull request, check that it meets these guidelines:
The pull request should include tests.
If the pull request adds functionality, the docs should be updated. Put your new functionality into a function with a docstring, and add the feature to the list in HISTORY.rst.
The pull request should work for Python 3.6, 3.7, 3.8 and 3.9. See the checks at the bottom of the pull request page to resolve any issues.
Deploying¶
A reminder for the maintainers on how to deploy. Make sure all your changes are committed (including an entry in HISTORY.rst). Then run:
$ bumpversion patch # possible: major / minor / patch
$ git push
Credits¶
Development Lead¶
Peter Hausamann <peter@hausamann.de>
Contributors¶
None yet. Why not be the first?
History¶
0.9.1 (January 13th, 2022)¶
Bug fixes & improvements¶
Fixed package installation through pip (version 0.9.0 is no longer available).
0.9.0 (December 29th, 2021)¶
Breaking changes¶
Dropped support for Python 3.6.
Bug fixes & improvements¶
Fixed issue with matplotlib versions >= 3.5.
0.8.0 (May 27th, 2021)¶
New features¶
New ros.init_node method to initialize a ROS node and optionally start a ROS master.
Bug fixes & improvements¶
All ROS dependencies are now lazily imported.
0.7.0 (May 19th, 2021)¶
New features¶
New from_euler_angles utility method.
Bug fixes & improvements¶
Importing ROS interface classes will not fail silently anymore and instead show the traceback of the import error.
0.6.0 (May 17th, 2021)¶
Breaking changes¶
Example data is now fetched via the pooch library and is no longer a part of the package itself.
New features¶
New io module for import/export methods.
New ros.RosbagWriter class for writing rosbag files.
0.5.0 (March 16th, 2021)¶
Breaking changes¶
Top-level reference frame transform and lookup methods now all accept a return_timestamps argument that is False by default. Previously, methods would return timestamps only if the result of the transformation was timestamped. This does not affect the xarray interface.
lookup_transform now returns the correct transformation from the base frame to the target frame (instead of the other way around).
ReferenceFrame.get_transformation is deprecated and replaced by ReferenceFrame.lookup_transform.
New features¶
New plot module with plotting methods for static reference frames and arrays of points, quaternions and vectors.
New lookup_pose method that calculates the pose of a frame wrt another.
Bug fixes & improvements¶
Fixed “reference_frame” attribute incorrectly set by transform_vectors.
0.4.1 (February 18th, 2021)¶
Bug fixes & improvements¶
Fixed transform_coordinates failing when the spatial dimension is the first axis of the array.
Fixed transform_linear_velocity and transform_angular_velocity failing when the reference frame or moving frame is transformed across only static transforms.
Added allow_static parameter to lookup_twist, lookup_angular_velocity and lookup_linear_velocity to return zero velocity and no timestamps across only static transforms.
0.4.0 (February 11th, 2021)¶
New features¶
New lookup_linear_velocity and lookup_angular_velocity top-level methods.
New render_tree top-level method for printing out a graphical representation of a reference frame tree.
lookup_twist now accepts a mode parameter to specify the mode for angular velocity calculation.
Bug fixes & improvements¶
Fixed a bug where estimated angular velocity was all NaN when orientation contained NaNs.
0.3.0 (December 8th, 2020)¶
New features¶
Reference frames with timestamps now accept the discrete parameter, allowing for transformations to be fixed from their timestamp into the future.
rbm accessor for DataArrays implementing qinterp and qinv methods.
New best_fit_rotation and qinterp top-level methods.
Bug fixes & improvements¶
Refactor of internal timestamp matching mechanism defining a clear priority for target timestamps. This can result in slight changes of timestamps and arrays returned by transformations but will generally produce more accurate results.
Added mode and outlier_thresh arguments to estimate_angular_velocity.
Fixed issues with iterative_closest_point.
0.2.0 (October 22nd, 2020)¶
New features¶
New estimate_linear_velocity and estimate_angular_velocity top-level methods.
New qmul top-level method for multiplying quaternions.
0.1.1 (September 17th, 2020)¶
Bug fixes¶
Fix transformations failing for DataArrays with non-numeric coords.
0.1.0 (September 17th, 2020)¶
First release