The pylon Programmer's Guide is a quick guide on how to program using the Basler pylon C++ API. It can be used together with the pylon sample code for getting started. Additionally, the API reference provides documentation about the Basler pylon C++ interface. The documentation of the interfaces is also available in the header files of pylon.
This section shows the most common Linux build settings for building an application using pylon and the GNU tool chain. Consult the Advanced Topics section for more information.
To collect all the parameters required to build a pylon-based application, we created the pylon-config utility. It works like pkg-config, and you can call pylon-config --help to get a list of supported parameters.
In a typical GNU Make-based project you can add the following lines to your Makefile:
If needed, you can override the default installation path using the PYLON_ROOT environment variable. E.g.:
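A Makefile fragment along these lines is shipped with the pylon samples; the following sketch assumes the default installation path /opt/pylon5, which may differ on your system:

```makefile
# Take PYLON_ROOT from the environment, or fall back to the default path.
PYLON_ROOT ?= /opt/pylon5

# Query pylon-config for the required compiler and linker flags.
CPPFLAGS += $(shell $(PYLON_ROOT)/bin/pylon-config --cflags)
LDFLAGS  += $(shell $(PYLON_ROOT)/bin/pylon-config --libs-rpath)
LDLIBS   += $(shell $(PYLON_ROOT)/bin/pylon-config --libs)
```

The installation path can then be overridden from the shell, e.g. `PYLON_ROOT=/opt/mypylon make`.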
When debugging a pylon application that uses GigE cameras, you may encounter heartbeat timeouts. The application must send special network packets to the camera at defined intervals. If the camera doesn't receive these heartbeats, it considers the connection broken and won't accept any commands from the application. When debugging, you therefore need to set the camera's heartbeat timeout to a higher value. The Advanced Topics section shows how to do this.
The pylon runtime system must be initialized before use. A pylon-based application must call the PylonInitialize() function before using any other functions of the pylon runtime system. Before an application exits, it must call the PylonTerminate() function to free resources allocated by the pylon runtime system.
The Pylon::PylonAutoInitTerm convenience class helps to automate this. The constructor of PylonAutoInitTerm calls PylonInitialize() and the destructor calls PylonTerminate(). This ensures that the pylon runtime system is initialized during the lifetime of a PylonAutoInitTerm object.
Example:
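A minimal sketch of the RAII pattern described above:

```cpp
#include <pylon/PylonIncludes.h>

int main()
{
    // The constructor calls PylonInitialize(); the destructor calls
    // PylonTerminate() when autoInitTerm goes out of scope.
    Pylon::PylonAutoInitTerm autoInitTerm;

    // Use the pylon runtime system here ...

    return 0;
}
```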
In the case of errors, the methods of pylon classes may throw C++ exceptions. The pylon C++ API throws exceptions of type GenericException or of subclasses of GenericException. You should guard pylon calls with exception handlers that catch GenericException. Example:
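Such a guard might look like the following sketch, where the camera.Open() call stands in for any sequence of pylon calls:

```cpp
#include <pylon/PylonIncludes.h>
#include <iostream>

// ...
try
{
    camera.Open();  // any pylon call may throw
}
catch (const Pylon::GenericException& e)
{
    // GetDescription() returns a message describing the cause of the error.
    std::cerr << "An exception occurred: " << e.GetDescription() << std::endl;
}
```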
In pylon, physical camera devices are represented by pylon Devices. The following example shows how to create a pylon Device:
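One way to do this, as used throughout the pylon samples, is to let the transport layer factory create the first device found:

```cpp
#include <pylon/PylonIncludes.h>

using namespace Pylon;

// Get the transport layer factory and create the first device found.
// CreateFirstDevice() throws a GenericException if no camera is present.
IPylonDevice* pDevice = CTlFactory::GetInstance().CreateFirstDevice();
```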
The first found camera device is created, e.g. for a vision system that uses only one camera. The Advanced Topics section shows how to handle multiple camera devices and how to find specific camera devices.
Instant Camera classes make it possible to grab images with only a few lines of code, reducing the programming effort to a minimum. An Instant Camera class uses a pylon Device internally. The pylon Device must be created and attached to the Instant Camera object for operation.
Example:
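As done in the Grab sample, the pylon Device can be created and attached in one step by passing it to the Instant Camera constructor:

```cpp
#include <pylon/PylonIncludes.h>
#include <iostream>

using namespace Pylon;

// ...
// Create an Instant Camera object with the first camera device found.
CInstantCamera camera(CTlFactory::GetInstance().CreateFirstDevice());

// Print the model name of the camera.
std::cout << "Using device " << camera.GetDeviceInfo().GetModelName() << std::endl;
```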
The above code snippet can be found in the code sample Grab.
The Instant Camera classes establish convenient access to a camera device while being highly customizable. The following list shows the main features of an Instant Camera class:
Before starting to program you need to decide what Instant Camera class to use. The following table shows the available Instant Camera classes:
Name of Class | Usable for Device Type | Device Specific
---|---|---
Pylon::CInstantCamera | All cameras | No
Pylon::CBaslerGigEInstantCamera | GigE Vision compliant cameras | Yes
Pylon::CBaslerUsbInstantCamera | USB3 Vision compliant cameras | Yes
The generic CInstantCamera class can operate camera devices of all types. This class should be used when programming for multiple device types, e.g. GigE and IIDC 1394 devices, or when the type of device used is not known at compile time.
The Device Specific Instant Camera classes are specializations of CInstantCamera, extending it by a parameter class. The parameter class provides a member for each camera parameter. One of the Device Specific Instant Camera classes should be used when programming for only one device type, e.g. for GigE devices.
The Instant Camera classes allow registering event handler objects for customizing the behavior of the camera object, for processing grab results, and for handling camera events. The following list shows the available event handler types:
- Configuration event handlers, derived from the Pylon::CConfigurationEventHandler base class.
- Image event handlers, derived from the Pylon::CImageEventHandler base class.
- Camera event handlers, derived from the Pylon::CCameraEventHandler base class.

A custom event handler class can override one or more virtual methods of its base class. For instance, whenever Pylon::CConfigurationEventHandler::OnOpened is called by an Instant Camera object, it is the right time to set up camera parameters. The following code snippet shows an example of a configuration event handler setting the image area of interest (Image AOI) and the pixel format.
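A sketch of such a handler, modeled on the configuration class used in the ParametrizeCamera_Configurations sample; the concrete values are placeholders:

```cpp
#include <pylon/PylonIncludes.h>

class CPixelFormatAndAoiConfiguration : public Pylon::CConfigurationEventHandler
{
public:
    void OnOpened(Pylon::CInstantCamera& camera)
    {
        using namespace GenApi;

        // The camera has just been opened; parameters can be set up now.
        INodeMap& nodemap = camera.GetNodeMap();

        // Set the pixel format and the image AOI (example values).
        CEnumerationPtr pixelFormat(nodemap.GetNode("PixelFormat"));
        pixelFormat->FromString("Mono8");

        CIntegerPtr(nodemap.GetNode("OffsetX"))->SetValue(0);
        CIntegerPtr(nodemap.GetNode("OffsetY"))->SetValue(0);
        CIntegerPtr(nodemap.GetNode("Width"))->SetValue(640);
        CIntegerPtr(nodemap.GetNode("Height"))->SetValue(480);
    }
};
```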
One or more event handler objects can be registered with the Instant Camera object. The following code snippet shows how the above event handler is registered and appended to the configuration event handler list.
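Assuming the configuration handler class is named CPixelFormatAndAoiConfiguration, as in the ParametrizeCamera_Configurations sample, registration might look like this:

```cpp
// Append the configuration event handler to the handler list.
// Cleanup_Delete lets the camera object delete the handler on deregistration.
camera.RegisterConfiguration(new CPixelFormatAndAoiConfiguration,
                             Pylon::RegistrationMode_Append,
                             Pylon::Cleanup_Delete);
```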
For more information about registering and deregistering event handlers, consult the interface documentation of the Instant Camera class used and the code of the following samples: ParametrizeCamera_Configurations, Grab_UsingGrabLoopThread, and Grab_CameraEvents.
Configuration event handler classes are also just called "configurations" because they encapsulate certain camera configurations. The pylon C++ API comes with the following configuration classes:
- Pylon::CAcquireSingleFrameConfiguration - for single frame acquisition mode.
- Pylon::CAcquireContinuousConfiguration - for continuous frame acquisition mode.
- Pylon::CSoftwareTriggerConfiguration - for software trigger mode.
- Pylon::CActionTriggerConfiguration - for triggers using action commands (applies to GigE Vision only).

These classes are provided as header files. This makes it possible to see which camera parameters are changed. The code can be copied and modified to create your own configuration classes. Pylon::CSoftwareTriggerConfiguration, for instance, can be used as the basis for a hardware trigger configuration with few modifications. Pylon::CAcquireContinuousConfiguration is registered by default when an Instant Camera object is created, providing a default setup that works for most cameras.
The following example shows how to apply the software trigger configuration:
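As done in the pylon samples, the software trigger configuration can be applied like this; RegistrationMode_ReplaceAll removes the default configuration first:

```cpp
// Replace the default configuration with the software trigger configuration.
camera.RegisterConfiguration(new Pylon::CSoftwareTriggerConfiguration,
                             Pylon::RegistrationMode_ReplaceAll,
                             Pylon::Cleanup_Delete);
```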
The code sample ParametrizeCamera_Configurations provides more examples showing the use of configurations.
The Instant Camera Array classes help manage multiple cameras in a system. An Instant Camera Array represents an array of Instant Camera objects. It provides almost the same interface as an Instant Camera for grabbing. The main purpose of the CInstantCameraArray is to simplify waiting for images and camera events of multiple cameras in one thread. This is done by providing a single RetrieveResult method for all cameras in the array. The following classes are available:
- Pylon::CInstantCameraArray - used if the type of camera device used is not known at compile time.
- Pylon::CBaslerGigEInstantCameraArray - used together with GigE camera devices.
- Pylon::CBaslerUsbInstantCameraArray - used together with USB camera devices.

The Grab_MultipleCameras code sample illustrates the use of the CInstantCameraArray class.
For camera configuration and for accessing other parameters, the pylon API uses the technologies defined by the GenICam standard. The GenICam specification (http://www.GenICam.org) defines a format for camera parameter description files. These files describe the configuration interface of GenICam compliant cameras. The description files are written in XML (eXtensible Markup Language) and describe camera registers, their interdependencies, and all other information needed to access high-level features such as Gain, Exposure Time, or Image Format by means of low level register read and write operations.
The elements of a camera description file are represented as software objects called Nodes. For example, a node can represent a single camera register, a camera parameter such as Gain, a set of available parameter values, etc. Each node implements the GenApi::INode interface.
The nodes have different types. For example, there are nodes representing integer values and other nodes representing strings. For each type of parameter, there is an interface in GenApi. These interfaces are described in the Parameter Types section. The Access Modes for Parameters section introduces the concept of a parameter access mode. An access mode property is used to determine whether a parameter is available, readable, or writable.
The complete set of nodes is stored in a data structure called node map.
Before reading or writing parameters of a camera, the drivers involved must be initialized and a connection to the physical camera device must be established. This is done by calling the Open() method. The camera can be closed using the Close() method.
pylon provides programming interface classes that are created from parameter description files using the code generators provided by GenICam's GenApi module. These programming interface classes are exported by the Device Specific Instant Camera classes, providing a member for each available parameter. This is the easiest way to access parameters.
Example:
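A sketch using the Device Specific Instant Camera class for GigE devices; the exact set of parameter members depends on the camera model:

```cpp
#include <pylon/gige/BaslerGigEInstantCamera.h>

using namespace Pylon;
using namespace Basler_GigECameraParams;

// ...
// Create and open a GigE Instant Camera object.
CBaslerGigEInstantCamera camera(CTlFactory::GetInstance().CreateFirstDevice());
camera.Open();

// Each camera parameter is available as a class member.
camera.Width.SetValue(640);
camera.Height.SetValue(480);
camera.PixelFormat.SetValue(PixelFormat_Mono8);

camera.Close();
```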
The ParametrizeCamera_NativeParameterAccess code sample shows how to access parameters via generated parameter classes.
At runtime, a node map is instantiated from an XML description. The parameters or nodes must be accessed using a node map object when the type of device used is not known at compile time. This applies, for example, when a machine vision application may use multiple types of camera devices, such as GigE and IIDC 1394 devices.
Example (setting the same parameters as in the above example):
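A sketch of the same settings via the node map; "Width", "Height", and "PixelFormat" are standard feature names defined by the camera's GenICam description file:

```cpp
#include <pylon/PylonIncludes.h>

using namespace Pylon;
using namespace GenApi;

// ...
camera.Open();
INodeMap& nodemap = camera.GetNodeMap();

// Look up the parameter nodes by name and set their values.
CIntegerPtr width(nodemap.GetNode("Width"));
width->SetValue(640);

CIntegerPtr height(nodemap.GetNode("Height"));
height->SetValue(480);

CEnumerationPtr pixelFormat(nodemap.GetNode("PixelFormat"));
pixelFormat->FromString("Mono8");

camera.Close();
```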
The ParametrizeCamera_GenericParameterAccess code sample shows how to use the generic parameter access.
The following sources can be used to get information about camera parameters:
The GenApi::IInteger interface is used to access integer parameters. An integer parameter represents a feature that can be set by an integer number, such as a camera's image width or height in pixels. The current value of an integer parameter is augmented by a minimum and a maximum value, defining a range of allowed values for the parameter, and by an increment that acts as a 'step width' for changes to the parameter's value. The set of all allowed values for an integer parameter can hence be expressed as x := {minimum} + N * {increment}, with N = 0,1,2 ..., x <= {maximum}. The current value, minimum, maximum, and increment can all be accessed as 64 bit values. The following example prints all valid values for the Width parameter:
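A sketch of such a loop, assuming a node map obtained as shown earlier in this guide:

```cpp
#include <iostream>

using namespace GenApi;

// ...
CIntegerPtr width(nodemap.GetNode("Width"));

// Enumerate all values of the form minimum + N * increment up to the maximum.
for (int64_t v = width->GetMin(); v <= width->GetMax(); v += width->GetInc())
{
    std::cout << v << std::endl;
}
```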
There are two equivalent ways to set a value using the GenApi::IInteger interface:

- Calling the SetValue() method, e.g. width->SetValue(128);
- Using the assignment operator, e.g. *width = 128;

There are also two equivalent ways to get a parameter's current value:

- Calling the GetValue() method, e.g. int64_t i = width->GetValue();
- Using the function call operator, e.g. int64_t i = (*width)();
Floating point parameters are represented by GenApi::IFloat objects. A float parameter represents a feature that can be set by a floating-point value, such as a camera's exposure time expressed in seconds. A float parameter is similar to an integer parameter with two exceptions: all values are of type double (double precision floating-point numbers as defined by the IEEE 754 standard), and there is no increment value. Hence, a float parameter can take any value from the interval {minimum} <= x <= {maximum}.
A boolean parameter represents a binary-valued feature, which can be enabled or disabled. It is represented by the GenApi::IBoolean interface. An example of a boolean parameter would be a 'switch' to enable or disable a particular feature, such as a camera's external trigger input. Set and get operations are similar to the ones used by the GenApi::IInteger interface.
The GenApi::IEnumeration interface is used to represent camera parameters that can take any value from a predefined set. Parameters such as Pixel Format or Test Image Type may serve as examples.
Example:
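For instance, the pixel format could be read and changed like this; "Mono8" is a placeholder and must be a format the camera actually supports:

```cpp
#include <iostream>

using namespace GenApi;

// ...
CEnumerationPtr pixelFormat(nodemap.GetNode("PixelFormat"));

// Read the current symbolic value, then select a new one.
std::cout << "Old pixel format: " << pixelFormat->ToString() << std::endl;
pixelFormat->FromString("Mono8");
```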
Command parameters (GenApi::ICommand) are used for parameters that trigger an action or an operation inside the camera, e.g. issuing a software trigger. The action is issued by calling the GenApi::ICommand::Execute() method. The GenApi::ICommand::IsDone() method can be used to determine whether a running operation has finished.
The GenApi::IString interface provides access to string parameters. The GenICam::gcstring class is used to represent strings. The gcstring class is similar to the STL std::string class.
Each parameter has an access mode that describes whether a feature is implemented, available, readable, and writable. For a given camera, a feature may not be implemented at all. For example, a monochrome camera will not include a white balance feature. Depending on the camera's state, a feature may temporarily not be available. For example, a parameter related to external triggering may not be available when the camera is in free run mode. Available features can be read-only, write-only, or readable and writable.
The current state of a parameter can be queried by calling the parameter's GetAccessMode() method, which returns a GenApi::EAccessMode enumeration value described in the following table:
EAccessMode | Implemented | Available | Readable | Writable
---|---|---|---|---
NI | No | No | No | No
NA | Yes | No | No | No
WO | Yes | Yes | No | Yes
RO | Yes | Yes | Yes | No
RW | Yes | Yes | Yes | Yes
A parameter's access mode can change at runtime. For example, the AOI width and AOI height parameters become read-only while the camera is acquiring images. Checking a parameter's access mode before accessing the parameter is therefore recommended. The GenApi namespace contains helper functions such as IsImplemented(), IsAvailable(), IsReadable(), and IsWritable(), each taking either an EAccessMode value, a pointer, or a reference to a parameter:
Example:
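For instance, before setting the Width parameter, a check might look like this:

```cpp
using namespace GenApi;

// ...
// Only write the parameter if it is currently writable,
// e.g. the camera is not acquiring images.
if (IsWritable(nodemap.GetNode("Width")))
{
    CIntegerPtr width(nodemap.GetNode("Width"));
    width->SetValue(640);
}
```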
This section shows how to grab images using the Instant Camera class. Before grabbing images, the camera parameters must be set up, e.g. by registering a configuration event handler whose configuration is applied when Open() is called.

In this document, we distinguish between image acquisition, image data transfer, and image grabbing.
is called. In this document we distinguish between image acquisition, image data transfer, and image grabbing.
We denote the processes inside the camera as image acquisition. When a camera starts image acquisition, the sensor is exposed. When exposure is complete, the image data is read out from the sensor.
The acquired image data is transferred from the camera's memory to the PC using an interface such as IEEE 1394 or Gigabit Ethernet.
The process of writing the image data into the PC's main memory is referred to as "grabbing" an image.
The data of a grabbed image is held by a Grab Result Data object. The Grab Result Data object cannot be accessed directly. It is always held by a grab result smart pointer, e.g. the basic grab result smart pointer CGrabResultPtr. The combination of smart pointer and Grab Result Data object is also referred to as a grab result. The smart pointer controls the reuse and the lifetime of the Grab Result Data object and the associated image buffer. When all smart pointers referencing a Grab Result Data object go out of scope, the grab result's image buffer is reused for grabbing. Due to the smart pointer concept, a Grab Result Data object and the associated image buffer can live longer than the camera object used for grabbing the image data. Each Device Specific Instant Camera class has a device specific Grab Result Data object and a device specific grab result smart pointer. A device specific grab result smart pointer can be converted to or from the basic grab result smart pointer CGrabResultPtr by copying or assignment.
The grab result smart pointer classes provide a cast operator that allows passing a grab result smart pointer directly to functions or methods that take a const Pylon::IImage& parameter, e.g. image saving functions. The IImage is only valid as long as the grab result smart pointer it came from is not destroyed.

New buffers are automatically allocated for each grab session started with StartGrabbing(). The buffer of a grabbed image is held by the Grab Result Data object. While grabbing is in progress, a buffer is reused when the Grab Result Data object is released by the grab result smart pointer. If the Grab Result Data object is released after grabbing has stopped, the buffer is freed.
The number of image data buffers used can be set via the MaxNumBuffer parameter. The default number of buffers used for grabbing is 10. The buffer size required for grabbing is determined automatically. The number of allocated buffers is automatically reduced when grabbing a defined number of images smaller than the value of MaxNumBuffer, e.g. 5.
The Instant Camera grab engine consists of an empty buffer queue, an output queue, and a grab thread. The grab engine uses a Low Level API stream grabber to grab images. The empty buffer queue and the output queue can hold the number of buffers defined by the MaxNumBuffer parameter. MaxNumQueuedBuffer buffers are passed to the Low Level API stream grabber at any time. All queues work in FIFO mode (First-In-First-Out). The grab engine thread ensures that the stream grabber does not run out of buffers as long as buffers are available in the empty buffer queue.
The Instant Camera supports different grab strategies. The default strategy is One By One. When the grab strategy One By One is used images are processed in the order of their acquisition.
More information about grab strategies can be found in the advanced topics section.
The following example shows a simple grab loop:
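A condensed sketch of the grab loop from the Grab sample; the image count and the 5000 ms timeout follow the values used in the samples:

```cpp
#include <pylon/PylonIncludes.h>
#include <iostream>

using namespace Pylon;

static const uint32_t c_countOfImagesToGrab = 100;

// ...
// Create an Instant Camera object with the first camera device found.
CInstantCamera camera(CTlFactory::GetInstance().CreateFirstDevice());

// Start grabbing; the camera is opened automatically if necessary.
camera.StartGrabbing(c_countOfImagesToGrab);

CGrabResultPtr ptrGrabResult;
while (camera.IsGrabbing())
{
    // Wait up to 5000 ms for an image and take it from the output queue.
    camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);

    if (ptrGrabResult->GrabSucceeded())
    {
        std::cout << "SizeX: " << ptrGrabResult->GetWidth() << std::endl;
        std::cout << "SizeY: " << ptrGrabResult->GetHeight() << std::endl;
    }
}
```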
The camera object is created. Grabbing is started by calling StartGrabbing(). Since the camera is not open yet, it is opened automatically by the StartGrabbing() method. The default configuration event handler is called and applies the default configuration. Images are grabbed continuously by the Instant Camera object, and the grab results are placed into the Instant Camera's output queue in the order they are acquired by the camera (grab strategy One By One). The RetrieveResult() method is used to wait for a grab result and to retrieve it from the output queue. Some of the grab result data is printed to the screen after it has been retrieved. StopGrabbing() is called automatically by the RetrieveResult() method when c_countOfImagesToGrab images have been retrieved. The while statement condition is used to check whether grabbing has stopped.
It is possible to grab an unlimited number of images by omitting the maximum number of images to grab in the StartGrabbing() call and calling StopGrabbing() from inside the grab loop to finish grabbing.
The above code snippet can be found in the Grab code sample.
The Instant Camera class can optionally provide a grab loop thread. The thread runs a grab loop that calls RetrieveResult() repeatedly. When using the provided grab loop thread, an image event handler is required to process the grab results.
The following image event handler is used:
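A sketch modeled on the image event handler from the Grab_UsingGrabLoopThread sample:

```cpp
#include <pylon/PylonIncludes.h>
#include <iostream>

class CSampleImageEventHandler : public Pylon::CImageEventHandler
{
public:
    virtual void OnImageGrabbed(Pylon::CInstantCamera& camera,
                                const Pylon::CGrabResultPtr& ptrGrabResult)
    {
        // Called by the grab loop thread for every retrieved grab result.
        std::cout << "CSampleImageEventHandler::OnImageGrabbed called."
                  << std::endl;
    }
};
```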
The following example shows how to grab using the grab loop thread provided by the Instant Camera object:
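The essential calls might look like this, assuming an image event handler class named CSampleImageEventHandler as in the Grab_UsingGrabLoopThread sample:

```cpp
// Register the image event handler that processes the grab results.
camera.RegisterImageEventHandler(new CSampleImageEventHandler,
                                 Pylon::RegistrationMode_Append,
                                 Pylon::Cleanup_Delete);

// Start grabbing an unlimited number of images; the second parameter
// selects the grab loop thread provided by the Instant Camera object.
camera.StartGrabbing(Pylon::GrabStrategy_OneByOne,
                     Pylon::GrabLoop_ProvidedByInstantCamera);
```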
First, the image event handler is registered. It prints a message on the screen for every grabbed image and serves as the image processing step in this example. Grabbing is started using StartGrabbing() for an unlimited number of images, using the grab loop thread provided by the Instant Camera object by setting the second parameter to GrabLoop_ProvidedByInstantCamera. The main thread can now be used to query the user for input, either to trigger an image or to exit the input loop. Grabbing is not explicitly stopped in this example; it could be stopped by calling StopGrabbing(). The above code snippet can be found in the Grab_UsingGrabLoopThread code sample.
For convenience, the GrabOne() method can be used to grab a single image. The following code shows a simplified version of what GrabOne() does:
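A single image can then be fetched like this; the 5000 ms timeout is the value used in the samples:

```cpp
Pylon::CGrabResultPtr ptrGrabResult;

// GrabOne() opens the camera if necessary, grabs one image, and
// returns true if the grab succeeded.
if (camera.GrabOne(5000, ptrGrabResult))
{
    std::cout << "Width: " << ptrGrabResult->GetWidth() << std::endl;
}
```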
GrabOne() is more efficient if the pylon Device is already open; otherwise, the pylon Device is opened and closed automatically for each call.

Grabbing using the software trigger configuration (CSoftwareTriggerConfiguration) is recommended if you want to maximize the frame rate, because the overhead per grabbed image is lower than with Single Frame Acquisition. The grabbing can be started using StartGrabbing(). Images are grabbed using the WaitForFrameTriggerReady(), ExecuteSoftwareTrigger(), and RetrieveResult() methods instead of GrabOne(). The grabbing can be stopped using StopGrabbing() when done.

Basler GigE Vision, USB3 Vision, and IIDC 1394 cameras can send event messages. For example, when a sensor exposure has finished, the camera can send an end-of-exposure event to the PC. Events can be received by registering a camera event handler at an Instant Camera class. See the Advanced Topics section for more information.
Basler cameras can send additional information appended to the image data as so-called data chunks, such as frame counters, time stamps, or CRC checksums. Data chunks are automatically parsed by the Instant Camera class if chunk mode is activated. See the Advanced Topics section for more information.
To get informed about camera device removal, the IsCameraDeviceRemoved() method can be queried or a configuration event handler can be registered. The virtual OnCameraDeviceRemoved() method is called when a camera device has been removed. Note that the OnCameraDeviceRemoved call is made from a separate thread. See the Advanced Topics section for more information.

Basler GigE cameras can be configured to send the image data stream to multiple destinations. Either IP multicasts or IP broadcasts can be used. For more information, consult the Advanced Topics section.
Besides the Instant Camera classes used for grabbing images, pylon offers additional support for handling grabbed images: an image class, an image format converter, and facilities for loading and saving images.
When working with image data, handling buffer size and lifetime often involves a lot of coding. The Pylon::CPylonImage class simplifies this. It also allows attaching the buffer of a grab result, preventing its reuse for grabbing as long as required. Additionally, user buffers or buffers provided by third-party software packages can be connected. Besides that, the pylon Image class helps when working with image planes or AOIs. The Utility_Image code sample shows the use of the pylon Image class.
The Pylon::CImageFormatConverter class creates new images by converting a source image to a different format. Once the format converter is configured, it can convert almost all image formats supported by Basler camera devices. The Utility_ImageFormatConverter code sample shows the use of the Image Format Converter class.
The Pylon::CImagePersistence class supports loading and saving images to disk. The pylon Image classes use this interface internally and provide methods for loading and saving. The Utility_ImageLoadAndSave code sample shows how to load and save images.