Vortex Sensors

 

Vortex provides several sensors for measuring distances between geometries, detecting overlaps and more.

In order to add a sensor to your content in the editor:

  1. Select Sensors in the Toolbox.

  2. Double-click the desired sensor to add it to the 3D View.

  3. Edit the name of the sensor in its Properties panel.

The following sensor extensions are available:

Distance Sensors

Geometry Distance Sensor

A Geometry Distance sensor measures the closest distance between specified collision geometries in your scene or mechanism. The sensor compares each geometry specified in the Geometries 1 group to each geometry in the Geometries 2 group, and returns the distance between the two closest geometries.

In the Properties panel, configure the following fields:

  • Active: Select this box to compute the distance between the closest pair of collision geometries, one from the Geometries 1 group and one from the Geometries 2 group. Deactivating this input stops any distance measurements and resets the distance information outputs.

  • Geometries 1 and Geometries 2:

    • Size: Adjust this field to specify how many collision geometries you want to include in each grouping of geometries. Use the Browse button at the end of each row to bring up a dialog box where you can select a geometry from the Explorer panel.

Once you have specified all the collision geometries in both groups, press the Play button in the editor to run the simulation, and consult the following outputs:

  • Distance: Displays the minimum distance (in meters) between the closest pair of geometries, one from the Geometries 1 group and one from the Geometries 2 group.

  • Point 1: The position of the closest point on the geometry from Geometries 1 in that closest pair.

  • Point 2: The position of the closest point on the geometry from Geometries 2 in that closest pair.

  • Geometry Index 1: Indicates the row in Geometries 1 that contains the geometry belonging to the closest pair.

  • Geometry Index 2: Indicates the row in Geometries 2 that contains the geometry belonging to the closest pair.

  • Geometry Name 1: Displays the name of the collision geometry referred to by Geometry Index 1.

  • Geometry Name 2: Displays the name of the collision geometry referred to by Geometry Index 2.

Optionally, you can display a visual aid in the 3D View that draws a line connecting the two collision geometries at their closest points (see image below). To do this, right-click the eye icon next to the Geometry Distance sensor in the Explorer panel, and enable Accessory.
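You can also read these outputs from a Python script extension. The following is a minimal sketch, not taken verbatim from the product API reference: it assumes the script has an Extension Pointer parameter named DistanceSensor pointing to the Geometry Distance sensor, a Double output named MonitoredDistance, and that scalar outputs can be read through a .value field, following the getParameter(...).value pattern used elsewhere on this page.

from Vortex import *

def post_step(extension):
    # DistanceSensor is a hypothetical Extension Pointer parameter referencing the Geometry Distance sensor.
    sensor = extension.parameters.DistanceSensor.value

    # Output names match the sensor's Properties panel; reading them through .value is an assumption.
    distance = sensor.getOutput("Distance").value
    name1 = sensor.getOutput("Geometry Name 1").value
    name2 = sensor.getOutput("Geometry Name 2").value

    # Re-expose the measured distance on the script so it can be connected to other extensions.
    extension.outputs.MonitoredDistance.value = distance

    if distance < 0.5:  # hypothetical clearance threshold in meters
        print("Closest pair {} / {}: {:.3f} m".format(name1, name2, distance))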

Part Distance Sensor

A Part Distance sensor measures the closest distance between specified Parts in your scene or mechanism.

In the Properties panel, configure the following fields:

  • Active: Select this box to compute the distance between Part 1 and Part 2. Deactivating this input stops any distance measurements and resets the distance information outputs.

  • Part 1: Use the Browse button at the end of this field to bring up a dialog box where you can select the first part from the Explorer panel.

  • Part 2: Use the Browse button at the end of this field to bring up a dialog box where you can select the second part from the Explorer panel.

Once you have specified the two parts, press the Play button to run the simulation, and consult the following outputs:

  • Distance: Displays the minimum distance (in meters) between the closest pair of collision geometries, one from Part 1 and one from Part 2. If a part does not contain any collision geometries, the part's origin is used instead.

  • Point 1: The position of the point on the collision geometry from Part 1 closest to Part 2.

  • Point 2: The position of the point on the collision geometry from Part 2 closest to Part 1.

  • Geometry Name 1: Displays the name of the collision geometry from Part 1 closest to Part 2.

  • Geometry Name 2: Displays the name of the collision geometry from Part 2 closest to Part 1.

Optionally, you can display a visual aid in the 3D View that draws a line connecting the two parts at their closest points (see image below). To do this, right-click the eye icon next to the Part Distance sensor in the Explorer panel, and enable Accessory.
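These outputs can be read from a script in the same way as the Geometry Distance sensor's. The short sketch below reuses the same assumptions (a hypothetical Extension Pointer parameter, here named PartDistanceSensor, a Boolean output named TooClose, and scalar outputs read through .value) to raise a flag whenever the two parts come within a hypothetical 0.2 m clearance.

from Vortex import *

def post_step(extension):
    # PartDistanceSensor is a hypothetical Extension Pointer parameter referencing the Part Distance sensor.
    sensor = extension.parameters.PartDistanceSensor.value
    distance = sensor.getOutput("Distance").value  # assumed scalar access through .value

    # TooClose is a hypothetical Boolean output added to the script.
    extension.outputs.TooClose.value = distance < 0.2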

Limitations

Not all combinations of collision geometries are supported by the Part Distance and Geometry Distance sensors. Refer to the following table for the complete list:

| Distance calculated? | Sphere | Box | Cylinder | Capsule | Triangle Mesh | Convex Mesh |
|----------------------|--------|-----|----------|---------|---------------|-------------|
| Sphere               | Yes    | Yes | Yes      | Yes     | No            | Yes         |
| Box                  | Yes    | Yes | No       | Yes     | No            | Yes         |
| Cylinder             | Yes    | No  | No       | No      | No            | Yes         |
| Capsule              | Yes    | Yes | No       | Yes     | No            | Yes         |
| Triangle Mesh        | No     | No  | No       | No      | No            | No          |
| Convex Mesh          | Yes    | Yes | Yes      | Yes     | No            | Yes         |
| Plane                | Yes    | Yes | Yes      | Yes     | No            | No          |

Kinematics Sensor

A Kinematics sensor computes the position, orientation, velocity, and acceleration of an Attachment Point. The desired Attachment Point needs to be assigned to the sensor. The sensor's outputs are updated during simulation based on the Attachment Point's position and orientation. The sensor can be enabled or disabled during simulation. For more details on Attachment Points, refer to Attachments.

Outputs:

  • Position: Position in meters of the Attachment Point in world coordinates.

  • Orientation: Orientation of the Attachment Point represented as roll, pitch, and yaw rotations in degrees. In the Properties panel, the values are shown in this order: (roll, pitch, yaw). The x-axis of the Attachment Point defines the forward direction.

    • Roll is the rotation about the Attachment Point's x-axis.

    • Pitch is the rotation about the Attachment Point's y-axis.

    • Yaw is the rotation about the Attachment Point's z-axis.

  • Linear Velocity: Linear velocity vector in m/s at the position of the Attachment Point, in world coordinates.

  • Relative Linear Velocity: Linear velocity vector in m/s at the position of the Attachment Point, in local coordinates of the Attachment Point.

  • Angular Velocity: Angular velocity vector in degrees/s at the position of the Attachment Point, in world coordinates.

  • Relative Angular Velocity: Angular velocity vector in degrees/s at the position of the Attachment Point, in local coordinates of the Attachment Point.

  • Linear Acceleration: Linear acceleration vector in m/s² at the position of the Attachment Point, in world coordinates.

  • Relative Linear Acceleration: Linear acceleration vector in m/s² at the position of the Attachment Point, in local coordinates of the Attachment Point.

  • Angular Acceleration: Angular acceleration vector in degrees/s² at the position of the Attachment Point, in world coordinates.

  • Relative Angular Acceleration: Angular acceleration vector in degrees/s² at the position of the Attachment Point, in local coordinates of the Attachment Point.
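If you need the Orientation output as a rotation matrix in your own tooling, it can be rebuilt from the roll, pitch, and yaw angles described above. The sketch below is self-contained Python and assumes the common composition order of yaw about z, then pitch about y, then roll about x; this order is not stated explicitly on this page, so verify it against your model before relying on it.

import math
import numpy as np

def rpy_degrees_to_matrix(roll_deg, pitch_deg, yaw_deg):
    # Build a rotation matrix from the Kinematics sensor's Orientation output (degrees).
    # The composition order yaw * pitch * roll is an assumption.
    r, p, y = (math.radians(a) for a in (roll_deg, pitch_deg, yaw_deg))
    rx = np.array([[1, 0, 0],
                   [0, math.cos(r), -math.sin(r)],
                   [0, math.sin(r),  math.cos(r)]])   # roll about x
    ry = np.array([[ math.cos(p), 0, math.sin(p)],
                   [0, 1, 0],
                   [-math.sin(p), 0, math.cos(p)]])   # pitch about y
    rz = np.array([[math.cos(y), -math.sin(y), 0],
                   [math.sin(y),  math.cos(y), 0],
                   [0, 0, 1]])                        # yaw about z
    return rz @ ry @ rx

# Example: an Attachment Point reporting (roll, pitch, yaw) = (0, 0, 90) points its
# forward x-axis along the world y-axis.
print(rpy_degrees_to_matrix(0.0, 0.0, 90.0) @ np.array([1.0, 0.0, 0.0]))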

Overlap Sensors

Two related sensors detect overlaps: the Intersection Sensor, which detects overlaps between pairs of geometries, and the Raycast Sensor, which detects overlaps between geometries and a specified ray. These sensors can be used in conjunction with Sensor Triggers in order to detect such overlaps; in this context, a "sensor" triggers a "sensor trigger". The following sections describe these concepts in more detail.

Intersection Sensor

An Intersection sensor detects when a designated sensor trigger object collides (overlaps) with a collision geometry used as a sensor. A single sensor can contain multiple objects.

In the Properties panel, configure the following fields:

  • Active: Check this box to toggle whether the sensor is in effect.

  • Intersecting with Everything: Select this option to have the sensor report intersections with everything. Leave this field deselected to have it report intersections only with objects labeled by a sensor trigger.

  • Labels: Adjust the Size field to specify the number of labels given to this sensor. In the available slots, add custom text. Matching these labels to the same labels in a sensor trigger creates an interface between the two where you can control which sensor/trigger combinations result in a positive "Has Intersected" result in the sensor's output when contact is made.
    Leaving a label blank acts as a wild card, matching this sensor with all labeled triggers, regardless of the triggers' labels.

  • Sensor Extensions: Adjust the Size field to specify the number of objects whose interactions you want to associate to this sensor. After adding the appropriate number of extension fields, press the ellipsis button at the end of the row, then browse to and select the object you want to link to this sensor.

For example, if you wanted to detect when a car is in contact with the ground, you could add an Intersection sensor and add each wheel as a sensor extension. In the corresponding sensor trigger, you would add the terrain as a trigger extension. Finally, you would add matching labels to both the sensor and the trigger. When at least one of the four tires is in contact with the ground, the "Has Intersected" output would become true, meaning contact is made.

Raycast Sensor

A Raycast sensor detects when a designated trigger object comes into contact with an imaginary line segment of a user-defined length. Unlike the Intersection sensor, the Raycast sensor is placed at a user-defined point in space. If you want to tie the sensor to an object, you can use the Connection Editor to link its transform to those of another object.

  • Max Distance: Specifies the maximum distance ahead of the sensor beyond which an intersection is no longer detected.

  • Active: Check this box to toggle whether the sensor is in effect.

  • Casting Ray on Everything: Select this option to have the ray report intersections with everything. Leave this field deselected to have it report intersections only with objects labeled by a sensor trigger.

  • Labels: Adjust the Size field to specify the number of labels given to this sensor. In the available slots, add custom labels. Matching these labels with the same labels you added to a sensor trigger creates an interface between the two where you can control which sensor/trigger combinations result in a positive "Has Intersected" result in the sensor's output.

     

For example, if you wanted to add a proximity sensor to a car to detect when it comes too close to a pedestrian, you could add a Raycast sensor and, using the Connection Editor, link it to the car, pointing it ahead. Next, you would set the maximum distance past which proximity to a pedestrian is no longer a concern. In the corresponding sensor trigger, you would add the pedestrians in your scene as trigger extensions. Finally, you would add matching labels to both the sensor and the trigger. When a pedestrian comes within the range of the maximum distance, the "Has Intersected" output would become true.

Sensor Trigger

A sensor trigger detects when a designated sensor (Raycast Sensor or Intersection Sensor) intersects with it.

In the Properties panel, configure the following fields:

  • Active: Check this box to toggle whether the trigger is in effect.

  • Labels: Adjust the Size field to specify the number of labels given to this trigger. In the available slots, add custom labels. Matching these labels with the same labels you added in a sensor creates an interface between the two where you can control which sensor/trigger combinations result in a positive "Has Intersected" result in the sensor's output.

  • Trigger Extensions: Adjust the Size field to specify the number of objects that you want to set off the sensor when they collide with it. After adding the appropriate number of extension fields, press the ellipsis button at the end of the row, then browse to and select the object you want to link to this trigger.

     

Advanced: Creating Sensors and Triggers in Python Scripts

Raycast Sensors, Intersection Sensors, and Sensor Triggers can be created directly in Python. In this case, additional information can be obtained from the sensors that is otherwise not available in the high-level extensions. Among other things, this includes detailed contact information such as contact positions and normals for every detected intersection.

The following example code demonstrates how to create an Intersection Sensor paired with a Sensor Trigger in a Python 3 script.

The script requires the following parameters and outputs, which need to be added to the script manually in the editor.

Parameters

  • TriggerExt (Extension Pointer): Represents the extension that will be used as the sensor trigger. Can be any object that contains geometry, such as a Part, a Collision Geometry, or even a Cable or Vehicle.

  • SensorExt (Extension Pointer): Represents the extension that will be used as the intersection sensor. Can be any object that contains geometry, such as a Part, a Collision Geometry, or even a Cable or Vehicle.

Outputs

  • Colliding (Boolean): Specifies whether a collision between TriggerExt and SensorExt (see above) was detected.

  • Contacts (Integer): The number of contacts found by the intersection sensor in every step.

  • Penetration (Double): Sum of the penetration of all contacts found by the intersection sensor in every step.

Python 3 example

from Vortex import *

def on_simulation_start(extension):
    # Create the trigger and the sensor, and pair them with the matching label 'Blue'.
    extension.trigger = SensorTrigger()
    extension.trigger.setTriggerExtension(extension.parameters.TriggerExt.value)
    extension.trigger.addLabel('Blue')

    extension.sensor = IntersectionSensor()
    extension.sensor.setSensorExtension(extension.parameters.SensorExt.value)
    extension.sensor.setCollectingIntersections(True)
    extension.sensor.addLabel('Blue')

def on_simulation_stop(extension):
    extension.sensor = None
    extension.trigger = None

def post_step(extension):
    # Accumulate the number of contacts and the total penetration over all detected intersections.
    total_pen = 0
    total_con = 0
    for i in extension.sensor.getIntersections():
        total_con += len(i.contacts)
        for c in i.contacts:
            total_pen += c.penetration

    extension.outputs.Colliding.value = extension.sensor.hasIntersected()
    extension.outputs.Contacts.value = total_con
    extension.outputs.Penetration.value = total_pen

Electro-optical Sensors

Vortex Studio comes with a set of electro-optical sensors which are described in detail below. Here is an overview of the output produced by the currently supported sensors.

The currently supported sensors are the Lidar Sensor, the Color (RGB) Camera, and the Depth Camera.

Lidar Sensor

The Lidar Sensor allows capturing a point cloud of the virtual environment in real-time. It can be placed in your scene or mechanism and attached to moving machines.

The Lidar Sensor provides a multitude of parameters which, among other things, let you configure its sampling resolution and frequency. You can access the sensor's interface in the Properties panel.

 

Parameters

  • Number of channels: The number of vertical stacks of lasers of the Lidar, distributed uniformly across the vertical field of view (FOV). The valid value range for this parameter is 1 to 128.

  • Range: Range of the lasers of the Lidar in meters. The valid value range for this parameter is 1 to 10000 meters.

  • Sampling Frequency: The number of rotations (complete scans) per second of the Lidar. The valid value range for this parameter is 1 to 200. Note that setting the sampling frequency to a value greater than the simulation update frequency may result in loss of Lidar data: only the data of the latest scan is made available through the extension output, meaning that if multiple scans are done in one simulation step, the data of all but the final scan is discarded.

  • Horizontal FOV Start: The horizontal angle from the forward axis (x-axis) along the horizontal plane (xy-plane, formed by the forward and side axes) at which sampling starts. The valid value range for this parameter is -180 to 180 degrees.

  • Horizontal FOV Length: The horizontal angular length of the laser scan. The end angle where sampling stops is equal to Horizontal FOV Start + this value. The valid value range is 0 to 360 degrees.

  • Vertical FOV Upper Bound: The maximum vertical angle from the horizontal plane (xy-plane, formed by the forward and side axes). The valid value range for this parameter is -180 to 180 degrees, and the value must always be greater than or equal to Vertical FOV Lower Bound.

  • Vertical FOV Lower Bound: The minimum vertical angle from the horizontal plane (xy-plane, formed by the forward and side axes). The valid value range for this parameter is -180 to 180 degrees, and the value must always be less than or equal to Vertical FOV Upper Bound.

  • Horizontal Step Resolution: The number of ray casts across the horizontal field of view (FOV); the number is common to each channel. The valid value range of this parameter is 1 to n, where n is defined relative to the Number of channels parameter: the maximum number of points that can be generated per scan (horizontally and vertically combined) is 262144 (128 x 2048), so n = 262144 / Number of channels, rounded down to the nearest integer.

  • Output as Distance Field: Sets the output as a distance field if enabled, or a point cloud otherwise. When enabled, the Distance Field output is filled; otherwise, the Point Cloud output is filled.
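As a quick check of the Horizontal Step Resolution limit, the maximum value n can be computed directly from the Number of channels, using the 262144-point (128 x 2048) per-scan budget stated above:

def max_horizontal_step_resolution(number_of_channels):
    # 262144 = 128 x 2048 is the maximum number of points per scan.
    return 262144 // number_of_channels

print(max_horizontal_step_resolution(128))  # 2048
print(max_horizontal_step_resolution(16))   # 16384
print(max_horizontal_step_resolution(37))   # 7084 (rounded down)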

Inputs

  • Parent Transform: Parent transformation matrix of the sensor. Connect the Parent Transform to the World Transform output of some other mobile object (e.g., a Part) in order to make the Lidar Sensor follow this parent object.

  • Local Transform: Local transformation matrix of the sensor. Use it to place the sensor relative to its parent object. The sensor follows the typical Vortex convention: X-forward, Y-left, and Z-up.

  • Point Cloud Visualization: Activates or deactivates the visualization of the generated point cloud during the simulation.

Outputs

  • Point Cloud: The sampled data, provided as an output array of 3D points. It is accessible, among other means, via a Python script extension. Output storage convention: the 3D point at index 0 of the output array is always mapped to the laser located at the bottom-left of the Lidar laser grid, while the last 3D point in the array is associated with the laser located at the top-right of the grid; the remaining data is stored in column-major order.

  • Distance Field: The sampled data, provided as an output array of scalar values. The position of a laser in the Lidar laser grid maps directly to an index in the output array, and the scalar value at that index is the distance traveled by the associated laser from the Lidar to the hit location. A 3D point can be reconstructed from the index (which yields the laser's ray cast direction) and the scalar value (the distance traveled along that direction). Output storage convention: the scalar value at index 0 of the output array is always mapped to the laser located at the bottom-left of the Lidar laser grid, while the last scalar value in the array is associated with the laser located at the top-right of the grid; the remaining data is stored in column-major order.
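To make the column-major storage convention concrete, the sketch below maps a (channel, horizontal step) pair to an array index and, for the Distance Field output, rebuilds the corresponding 3D point in the sensor frame (X-forward, Y-left, Z-up). It assumes that channels and steps are spread uniformly across the configured FOV, with channel 0 at the Vertical FOV Lower Bound and step 0 at the Horizontal FOV Start; the exact endpoint handling is an assumption, so treat this as illustrative rather than authoritative.

import math

def lidar_index(channel, step, num_channels):
    # Column-major storage: index 0 is the bottom-left laser (channel 0, step 0),
    # and the last index is the top-right laser (last channel, last step).
    return step * num_channels + channel

def point_from_distance(distance, channel, step, num_channels, h_steps,
                        h_start_deg, h_length_deg, v_lower_deg, v_upper_deg):
    # Assumed uniform angular spacing across the configured FOV.
    azimuth = math.radians(h_start_deg + h_length_deg * step / max(h_steps - 1, 1))
    elevation = math.radians(v_lower_deg + (v_upper_deg - v_lower_deg) * channel / max(num_channels - 1, 1))
    x = distance * math.cos(elevation) * math.cos(azimuth)
    y = distance * math.cos(elevation) * math.sin(azimuth)
    z = distance * math.sin(elevation)
    return (x, y, z)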

 

Limitations

Please note the following limitations in the current state of the Lidar Sensor extension.

  • Known issue: If laser beams higher than 35 degrees or lower than -35 degrees hit objects, the returned position can be erroneous. For now, the Lidar Vertical FOV bounds should be set in the [-35, +35] degree range. To cover a wider vertical FOV range, multiple Lidar sensors can be stacked together with various orientations.

  • No modeling of reflection intensity and generally any surface properties other than the geometry itself

  • No modeling of noise or inaccuracies in the acquired data

  • No multi-echo support, as in Hokuyo YVT-35LX

  • No modeling of sine wave scanning patterns, as in Hokuyo YVT-35LX

Simulating specific Lidar Devices

Here is a list of example Lidar devices and the parameters to choose in the Lidar extension for their simulation.

Note that currently, the Vortex Studio Lidar simulation does not model any advanced effects such as noise, reflection intensity, or any other effects caused by material properties of the detected surface. The table below therefore only provides the settings needed to match the sampling resolution and frequency of the respective Lidar devices.

  • Velodyne VLP 16: Number of channels: 16; Range: 100.0; Sampling frequency: 5.0; Horizontal FOV start: 0.0; Horizontal FOV length: 360.0; Vertical FOV upper: 15.0; Vertical FOV lower: -15.0; Step Resolution: 3750

  • SICK LD-MRS400102 HD: Number of channels: 4; Range: 300; Sampling frequency: 12.5-50; Horizontal FOV start: -42.5; Horizontal FOV length: 85.0; Vertical FOV upper: 3.2; Vertical FOV lower: 0.0; Step Resolution: 170-680

  • SICK LD-MRS800001S01: Number of channels: 8; Range: 300; Sampling frequency: 12.5-50; Horizontal FOV start: -42.5; Horizontal FOV length: 85.0; Vertical FOV upper: 4.2-6.4; Vertical FOV lower: 0.0; Step Resolution: 170-680

  • Hokuyo URM-40LC-EW: Number of channels: 1; Range: 60.0; Sampling frequency: 40; Horizontal FOV start: -135.0; Horizontal FOV length: 270.0; Vertical FOV upper: 0.0; Vertical FOV lower: 0.0; Step Resolution: 1080

  • Hokuyo YVT-35LX (no interlace mode): Number of channels: 37 (approx.); Range: 35.0; Sampling frequency: 20; Horizontal FOV start: -105.0; Horizontal FOV length: 210.0; Vertical FOV upper: 35.0; Vertical FOV lower: -5.0; Step Resolution: 70 (approx.)
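The Step Resolution values above follow from each device's horizontal angular resolution: divide the horizontal FOV length by the device's angular step at the chosen scan rate. The angular resolutions used below are approximate manufacturer figures, quoted here only to illustrate the arithmetic.

def step_resolution(horizontal_fov_length_deg, angular_resolution_deg):
    # Number of ray casts needed to cover the horizontal FOV at the given angular step.
    return round(horizontal_fov_length_deg / angular_resolution_deg)

print(step_resolution(360.0, 0.096))  # 3750 -- Velodyne VLP 16 at 5 Hz (approx. 0.096 degree resolution)
print(step_resolution(270.0, 0.25))   # 1080 -- Hokuyo URM-40LC-EW (approx. 0.25 degree resolution)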

Moving and Attaching the Lidar Sensor

By default, the Lidar Sensor is created at the origin of the world, but you can move it using the Transform toolbar or the transforms in its Properties panel.

You can connect a Lidar Sensor to an object by linking the Parent Transform input of the Sensor extension to the World Transform output of the object using the Connection Editor.

Data Backends for the Lidar Sensor

We currently only support Lidar data gathering for Vortex Studio simulations built using the in-house Vortex Studio graphics engine, or for those built for use in Unreal Engine with the Vortex Studio Plugin for Unreal. For more details on how you can use Unreal Engine to power your Vortex Studio simulation, you can read the available documentation.

Unreal Engine

No additional steps are required to set up the Lidar Sensor in Unreal Engine once it has been set up in your Vortex Studio mechanism. Simply add your Vortex Studio mechanism to an Unreal project and start the simulation. When the simulation starts, there is a small warm-up period (a couple of simulation steps) during which no data is produced. Afterward, the Lidar will start gathering data from the 3D world that you built around it.

For convenience, below you will find a step-by-step procedure explaining how to add a mechanism containing a Lidar to Unreal and simulate it: Vortex Sensors in Unreal Step-by-Step

Python Example

This example shows how to generate an image from the captured distance field in Python using Pillow. Because the default Python interpreter provided by Vortex doesn't include Pillow, make sure to follow these steps to use your own Python environment.

Note that because the sensor processing is done asynchronously, this won't yield any result at the start of the simulation.

import Vortex
import PIL.Image
import os
import struct

{...}

def saveLidarSensorDistanceField():
    # Here, inputLidarSensor is an input extension field to which the Lidar Sensor is connected
    distanceFieldVectorFloat = inputLidarSensor.getOutput("Distance field").toVectorFloat()
    width = int(inputLidarSensor.getParameter("Horizontal step resolution").value)
    height = int(inputLidarSensor.getParameter("Number of channels").value)
    maxRange = inputLidarSensor.getParameter("Range").value

    # Convert the column-major distance field into row-major float values normalized in the [0, 255] range for Pillow.
    tempBuffer = [0] * width * height
    column = 0
    row = 0
    for i in range(0, width * height):
        normalizedValue = distanceFieldVectorFloat[i] / maxRange * 255
        tempBuffer[(width - column - 1) + (height - row - 1) * width] = normalizedValue
        row += 1
        if row == height:
            row = 0
            column += 1

    depthImageBuffer = bytes(struct.pack('%sf' % len(tempBuffer), *tempBuffer))
    pilImage = PIL.Image.frombytes("F", (width, height), depthImageBuffer)
    pilImage = pilImage.convert("RGB")  # Convert the image into a saveable format

    # The image is ready to be saved
    pilImage.save("distanceField.png")

Using these parameters:

  • Number of channels: 128

  • Range: 100m

  • Horizontal FOV Start: -180°

  • Horizontal FOV Length: 360°

  • Vertical FOV upper bound: 20°

  • Vertical FOV lower bound: -20°

  • Horizontal step resolution: 1024

The following image is produced:

Depth Camera

The Depth Camera is a perspective camera that captures the distance of objects within its visual range and provides the captured data as an output.

Parameters

  • Image Width: Width of the captured depth image.

  • Image Height: Height of the captured depth image.

  • Field of View: Vertical field of view of the depth camera.

  • Framerate: The frequency at which a new depth capture is produced.

  • Maximum Depth: The maximum range of the depth capture. Any object beyond this distance is output as the maximum value.

  • Normalize Depth to [0, 255]: If enabled, the output depth image data is scaled to a range of 0 to 255. Otherwise, the original depth values are provided.
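Since the Field of View parameter is the vertical FOV, the matching horizontal FOV follows from the image aspect ratio under the usual pinhole-camera assumption of square pixels. This relation is standard projective geometry rather than a documented Vortex parameter, so take it as a convenience only:

import math

def horizontal_fov_deg(vertical_fov_deg, image_width, image_height):
    # Standard pinhole relation assuming square pixels.
    half_v = math.radians(vertical_fov_deg) / 2.0
    return math.degrees(2.0 * math.atan(math.tan(half_v) * image_width / image_height))

print(horizontal_fov_deg(45.0, 1280, 720))  # approx. 72.7 degrees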

Inputs

  • Parent Transform: Parent transformation matrix of the sensor. Connect the Parent Transform to the World Transform output of some other mobile object (e.g., a Part) in order to make the Depth Camera follow this parent object.

  • Local Transform: Local transformation matrix of the sensor. Use it to place the sensor relative to its parent object. The sensor follows the typical Vortex convention: X-forward, Y-left, and Z-up.

Outputs

  • Depth Image: Depth image resulting from the latest capture. This is a float array where each float represents a depth value from 0 to the Maximum Depth value, in meters. The order is row-major and the first value is mapped to the lower-left corner of the image.
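Given this row-major layout with the first value at the lower-left corner, the depth of a specific pixel can be looked up as in this small self-contained sketch:

def depth_at(depth_values, image_width, column, row_from_bottom):
    # Row-major order; the first value is the lower-left corner of the image.
    return depth_values[row_from_bottom * image_width + column]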

Vulkan Renderer

In the Vortex Editor using the Vulkan renderer, a preview pane will appear in the bottom right corner of the screen when selecting a Depth Camera.

Unreal Engine

No additional steps are required to set up the Depth Camera in Unreal Engine once it has been set up in your Vortex Studio mechanism. Simply add your Vortex Studio mechanism to an Unreal project and start the simulation (the Unreal project must also have been set up correctly as described in Vortex Studio Plugin for Unreal). When the simulation starts, there is a small warm-up period (a couple of simulation steps) during which no data is produced. Afterward, the Depth Camera will start producing captures.

Python Example

This example shows how to save the captured depth image in Python using Pillow. Because the default Python interpreter provided by Vortex doesn't include Pillow, make sure to follow these steps to use your own Python environment.

The following example is in Python 3. 

Note that because the sensor processing is done asynchronously, this won't yield any result at the start of the simulation.

import Vortex
import PIL.Image
import os
import struct

{...}

def saveDepthCameraTexture():
    # Here, inputDepthCameraExtension is an input extension field to which the depth camera is connected
    depthImageVectorFloat = inputDepthCameraExtension.getOutput("Depth Image").toVectorFloat()
    width = inputDepthCameraExtension.getParameter("Image Width").value
    height = inputDepthCameraExtension.getParameter("Image Height").value
    maxDepth = inputDepthCameraExtension.getParameter("Maximum Depth").value

    # Vortex outputs depths in meters, while PIL expects values in the [0, 255] range.
    temp = []
    for i in range(0, width * height):
        temp.append(depthImageVectorFloat[i] / maxDepth * 255)

    depthImageBuffer = bytes(struct.pack('%sf' % len(temp), *temp))
    pilImage = PIL.Image.frombytes("F", (width, height), depthImageBuffer)
    pilImage = pilImage.convert("RGB")  # Convert the image into a saveable format

    # The image is ready to be saved
    pilImage.save("depthImage.png")

Color Camera

The Color Camera is a perspective camera that captures the world and makes the result available for reading.

Parameters

  • Image Width: Width of the captured image.

  • Image Height: Height of the captured image.

  • Field of View: Vertical field of view of the color camera.

  • Framerate: The frequency at which a new capture is produced.

Inputs

  • Parent Transform: Parent transformation matrix of the sensor. Connect the Parent Transform to the World Transform output of some other mobile object (e.g., a Part) in order to make the Color Camera follow this parent object.

  • Local Transform: Local transformation matrix of the sensor. Use it to place the sensor relative to its parent object. The sensor follows the typical Vortex convention: X-forward, Y-left, and Z-up.

Vulkan Renderer

In the Vortex Editor using the Vulkan renderer, a preview pane will appear in the bottom right corner of the screen when selecting a Color Camera.

Unreal Engine

No additional steps are required to set up the Color Camera in Unreal Engine once it has been set up in your Vortex Studio mechanism. Simply add your Vortex Studio mechanism to an Unreal project and start the simulation. When the simulation starts, there is a small warm-up period (a couple of simulation steps) during which no data is produced. Afterward, the Color Camera will start producing output data.

Python Example

This example shows how to save the captured image in Python using Pillow. Because the default Python interpreter provided by Vortex doesn't include Pillow, make sure to follow these steps to use your own Python environment.

The following example is in Python 3.

Note that because the sensor processing is done asynchronously, this won't yield any result at the start of the simulation.
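A minimal sketch is shown below, modeled on the Depth Camera example above. The output name ("Color Image"), the toVectorFloat() accessor (the only array accessor shown on this page), and the assumed data layout (row-major RGB values in the [0, 255] range) are assumptions rather than documented behavior; check the sensor's outputs in the Properties panel before using this as-is.

import Vortex
import PIL.Image

{...}

def saveColorCameraTexture():
    # Here, inputColorCameraExtension is an input extension field to which the Color Camera is connected.
    # "Color Image" is an assumed output name; verify it in the Properties panel.
    colorVectorFloat = inputColorCameraExtension.getOutput("Color Image").toVectorFloat()
    width = inputColorCameraExtension.getParameter("Image Width").value
    height = inputColorCameraExtension.getParameter("Image Height").value

    # Assumed layout: row-major RGB triplets with values in the [0, 255] range.
    pixelBytes = bytes(bytearray(int(colorVectorFloat[i]) for i in range(width * height * 3)))
    pilImage = PIL.Image.frombytes("RGB", (width, height), pixelBytes)

    # The image is ready to be saved
    pilImage.save("colorImage.png")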

 

 
