The advanced topics section can be consulted if more information about special use cases is required.
This section gives a short introduction to the most important concepts of the pylon C++ API.
The term 'transport layer' is used as an abstraction for a physical interface such as IEEE 1394, GigE or Camera Link. For each of these interfaces, there are drivers providing access to camera devices. pylon currently includes four different transport layers:
Transport Layer objects are device factories and are used to:
An application program does not access transport layer implementations directly. The Transport Layer Factory is used to create Transport Layer objects, each of which represents a transport layer. Additionally, the Transport Layer Factory can be used as a device factory to create and destroy pylon Devices for all transport layers.
In pylon, physical camera devices are represented by pylon Devices. pylon Devices are only directly used when programming against the Low Level API.
An Instant Camera provides convenient access to a camera device while being highly customizable. It allows you to grab images with only a few lines of code, providing instant access to images grabbed from a camera device. Internally, a pylon Device is used; it needs to be created and attached to the Instant Camera object for operation. Additional Device Specific Instant Camera classes provide more convenient access to the parameters of the camera. Furthermore, the Instant Camera Array classes ease programming for grabbing images from multiple camera devices.
For camera configuration and for accessing other parameters, the pylon API uses the technologies defined by the GenICam standard hosted by the European Machine Vision Association (EMVA). The GenICam specification (http://www.GenICam.org) defines a format for camera description files. These files describe the configuration interface of GenICam compliant cameras. The description files are written in XML (eXtensible Markup Language) and describe camera registers, their interdependencies, and all other information needed to access high-level features such as Gain, Exposure Time, or Image Format by means of low-level register read and write operations.
The elements of a camera description file are represented as software objects called Nodes. For example, a node can represent a single camera register, a camera parameter such as Gain, a set of available parameter values, etc. Each node implements the GenApi::INode interface.
The nodes are linked together by different relationships as explained in the GenApi standard document available at www.GenICam.org. The complete set of nodes is stored in a data structure called node map. At runtime, a node map is instantiated from an XML description.
Using the code generators provided by GenICam's GenApi module, a programming interface is created from a camera description file. This interface is represented as a parameter class that has a member for each camera parameter. The members are references (or handles) to the GenApi Nodes representing the parameters. Such a parameter class can represent the parameters of a camera device for a certain transport layer, e.g. GigE.
In pylon, node maps are not only used to represent camera device parameters. Parameters of other pylon objects such as Transport Layer objects or the Image Format Converter are also exposed via GenApi node maps.
Examples:
The Pylon::CInstantCamera class has the Pylon::CInstantCamera::GetNodeMap() method, which returns the node map containing all GenApi nodes representing the whole set of camera parameters. The Pylon::CImageFormatConverter::GetNodeMap() method is used to access the Image Format Converter's parameters.
Besides the Instant Camera classes used for grabbing images, pylon offers additional Image Handling Support for handling grabbed images. This includes an image class, an image format converter, and functions for loading and saving images.
A Low Level API camera object wraps a pylon Device and provides more convenient access to the parameters of the camera, the stream grabber, the event grabber, and the transport layer using GenApi parameter classes. The low level camera classes are replaced by the Device Specific Instant Camera classes.
The pylon architecture allows a camera object to deliver one or more streams of image data. To grab images from a stream, a Stream Grabber object is required. Stream Grabber objects cannot be created directly by an application. They are managed by Camera objects, which create and pass out Stream Grabbers. All Stream Grabbers implement the Pylon::IStreamGrabber interface. This means that for all transport layers, images are grabbed from streams in exactly the same way. More details about grabbing images are described below in the Grabbing Images section for the Low Level API.
Basler GigE Vision, USB3 Vision, and IIDC 1394 cameras can send event messages. Event Grabber objects are used to receive event messages. How to retrieve and process event messages is described below in the Handling Camera Events section for the Low Level API.
If the so-called Chunk Mode is activated, Basler Cameras can send additional information appended to the image data. When in Chunk Mode, the camera sends an extended data stream including the image data combined with added information such as a frame number or a time stamp. The extended data stream is self-descriptive. pylon Chunk Parser objects are used for parsing the extended data stream and for providing access to the added information. Chunk Parser objects are described in the Chunk Parser: Accessing Chunk Features section for the Low Level API.
As described in Building Applications with pylon, you can use the pylon-config utility to get all the parameters required to build applications based on pylon.
Executables using the pylon API must be linked using the -Wl,-E compiler option. This option is automatically provided by pylon-config --libs. It ensures that run-time type information for pylon-defined types is processed properly. This is essential for proper exception handling and dynamic casting of objects.
At runtime, the dynamic linker must know where it can find the pylon libraries. In pylon versions prior to 5.0 this was handled by extending the environment variable LD_LIBRARY_PATH. This works, but it is rather cumbersome and error-prone.
From pylon 5 upwards, we use the rpath feature of the GNU linker to specify the runtime location of pylon. You can find more details about rpath and runpath at https://en.wikipedia.org/wiki/Rpath.
There are two cases to distinguish when you want to build an application with rpath support:
- PYLON_ROOT points to your pylon installation (e.g. /opt/pylon5).
- The --libs-rpath-link option is passed to pylon-config.
When debugging a pylon application using GigE cameras, you may encounter heartbeat timeouts. The application must send special network packets to the camera at defined intervals. If the camera doesn't receive these heartbeats, it considers the connection broken and won't accept any commands from the application.
When you run your application, pylon normally generates these heartbeats. When you set a breakpoint in your application and the breakpoint is hit, the debugger suspends all threads, including the one sending the heartbeats. As a result, no heartbeats are sent to the camera while you debug your application and single-step through your code.
To work around this, you have to extend the heartbeat timeout during development. You can either set an environment variable named PYLON_GIGE_HEARTBEAT to the desired timeout in milliseconds, which instructs pylon to set the heartbeat timeout when opening the camera, or you can set the heartbeat timeout value named "HeartbeatTimeout" on the HeartbeatTimeout node of the camera transport layer in your code:
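A minimal sketch of the second approach, assuming an open Pylon::CInstantCamera object named camera that is attached to a GigE camera device; the transport layer node map is accessed via GetTLNodeMap():

```cpp
// Extend the heartbeat timeout for debugging (value in milliseconds).
GenApi::INodeMap& tlNodeMap = camera.GetTLNodeMap();
GenApi::CIntegerPtr heartbeatTimeout = tlNodeMap.GetNode("HeartbeatTimeout");
if (GenApi::IsWritable(heartbeatTimeout))
{
    heartbeatTimeout->SetValue(5 * 60 * 1000); // 5 minutes
}
```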
pylon offers two ways to enumerate and create pylon Devices. The first approach uses the Transport Layer Factory to enumerate cameras across multiple transport layers. The second approach lets a Transport Layer object enumerate and create pylon Devices for a specific transport layer. Before describing the different enumeration schemes, the terms Device Class and Device Info object are introduced.
Each transport layer can create a specific type of pylon Device. For example, the PylonGigE transport layer will create pylon Devices representing GigE Vision cameras. Each type of device is associated with a unique identifier string called Device Class. The device class identifier can be found in the DeviceClass.h header file.
The device enumeration procedure returns a list of Device Info objects. The base class for Device Info objects is Pylon::CDeviceInfo. A Device Info object uniquely describes a camera device. Device Info objects are used by a Transport Layer and the Transport Layer Factory to create camera objects representing the device described by the Device Info objects.
A Pylon::CDeviceInfo object stores a set of string properties. The data type of the values is Pylon::String_t. The following properties are available for all Device Info objects:
Name | Description |
---|---|
FriendlyName | A human readable name for the device (e.g. the camera's model name). Friendly names are not unique. |
FullName | A unique identifier for the device. No two devices will have the same full name. |
VendorName | The name of the vendor. |
DeviceClass | Each transport layer can create a specific type (or class) of camera devices (e.g. IIDC 1394 or GigE Vision devices). The device types are identified by the Device Class property. |
SerialNumber | The device's serial number. The availability of the device serial number is not guaranteed during the enumeration process, so the Serial Number Property may be undefined. |
UserDefinedName | For some device classes, it is possible to assign a user defined name to a camera device. The value of this property is not necessarily unique. |
DeviceFactory | The unique full name of the Transport Layer object that can create the device. |
In addition, specific transport layers will require additional properties. These properties can be accessed in a generic way by using the Pylon::IProperties interface. A more convenient way is to downcast a Device Info object to the concrete class. This is illustrated in the following example, which prints out the IP address of a GigE Device Info object:
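A minimal sketch of the downcast, assuming deviceInfo refers to a device created by the GigE transport layer:

```cpp
#include <pylon/gige/BaslerGigEDeviceInfo.h>
#include <iostream>

// Downcast the generic device info object to the GigE-specific class.
const Pylon::CBaslerGigEDeviceInfo& gigeInfo =
    static_cast<const Pylon::CBaslerGigEDeviceInfo&>(deviceInfo);
std::cout << "IP address: " << gigeInfo.GetIpAddress() << std::endl;
```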
The Pylon::CTlFactory::EnumerateDevices() method is used to retrieve a list of all available devices, regardless of which transport layer is used to access the device. The list contains Device Info objects that must be used for creating Camera objects.
The returned lists are of the Pylon::DeviceInfoList_t type and are used similarly to the Standard Template Library (STL) std::vector class.
The following example prints out the unique names of all connected devices:
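A minimal sketch of such a program (error handling omitted for brevity):

```cpp
#include <pylon/PylonIncludes.h>
#include <iostream>

int main()
{
    Pylon::PylonAutoInitTerm autoInitTerm; // initializes and terminates the pylon runtime

    // Enumerate all devices reachable via any transport layer.
    Pylon::DeviceInfoList_t devices;
    Pylon::CTlFactory::GetInstance().EnumerateDevices(devices);

    for (size_t i = 0; i < devices.size(); ++i)
    {
        std::cout << devices[i].GetFullName() << std::endl;
    }
    return 0;
}
```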
Given a Device Info object, the Transport Layer Factory can be used to create a Camera object. The following example illustrates how to create a Camera object for the first element in the device list:
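A sketch, assuming the devices list filled by EnumerateDevices() as shown above is not empty:

```cpp
// Create a pylon Device for the first enumerated device.
Pylon::IPylonDevice* pDevice = Pylon::CTlFactory::GetInstance().CreateDevice(devices[0]);
```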
Do not call free or delete on a Pylon::IPylonDevice pointer created by the Transport Layer Factory. Instead, use the Pylon::CTlFactory::DestroyDevice() method to delete an IPylonDevice pointer.
A list of all available transport layers can be retrieved by calling the Pylon::CTlFactory::EnumerateTls() method. This method fills a list with Transport Layer Info objects (Pylon::CTlInfo). The data structures are very similar to Device Info objects. Transport Layer Info objects are used as arguments for the Pylon::CTlFactory::CreateTl() method that creates Transport Layer objects. The method returns a pointer of the Pylon::ITransportLayer type.
Do not call free or delete on an ITransportLayer pointer created by the Transport Layer Factory. Instead, use the Pylon::CTlFactory::ReleaseTl() method to free Transport Layer objects.
Transport Layer objects can be used to enumerate all devices accessible by a specific transport layer. Transport Layer objects are created by the Transport Layer Factory. This is illustrated in the following example, which creates a Transport Layer object for the PylonGigE transport layer:
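A minimal sketch; the device class constant is taken from DeviceClass.h:

```cpp
// Create a Transport Layer object for the PylonGigE transport layer.
Pylon::CTlFactory& tlFactory = Pylon::CTlFactory::GetInstance();
Pylon::ITransportLayer* pTl = tlFactory.CreateTl(Pylon::BaslerGigEDeviceClass);
```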
As described above, Transport Layer objects can also be created by passing in a Transport Layer Info object.
The Transport Layer Object is now used for enumerating all of the devices it can access:
Pylon::ITransportLayer::EnumerateDevices adds the discovered devices to the passed-in Device Info List.
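A sketch, assuming pTl is the Transport Layer object created above:

```cpp
// Enumerate all devices accessible by this transport layer.
Pylon::DeviceInfoList_t gigeDevices;
pTl->EnumerateDevices(gigeDevices);
```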
The Transport Layer object is now used for creating a Camera object. In the following example, a Camera Object for the first enumerated camera device is created:
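A sketch, assuming the gigeDevices list from the previous step is not empty:

```cpp
// Create a pylon Device for the first device found by the Transport Layer object.
Pylon::IPylonDevice* pDevice = pTl->CreateDevice(gigeDevices[0]);
```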
Do not call free or delete on a Pylon::IPylonDevice pointer created by the Transport Layer Factory. Instead, use the Pylon::CTlFactory::DestroyDevice() method to delete an IPylonDevice pointer.
For enumerating a range of devices that have certain properties, the EnumerateDevices() method can be used with a filter. To define the properties, a filter list with Device Info objects can be passed. A camera is enumerated if it has the properties of at least one Device Info object in the filter list. The following example enumerates all cameras with the model names given in the filter list:
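A sketch of filtered enumeration; the model names are placeholders:

```cpp
// Build a filter list; a device is enumerated if it matches at least one entry.
Pylon::DeviceInfoList_t filter;
filter.push_back(Pylon::CDeviceInfo().SetModelName("acA1920-25gm"));
filter.push_back(Pylon::CDeviceInfo().SetModelName("acA1300-30gc"));

Pylon::DeviceInfoList_t matches;
Pylon::CTlFactory::GetInstance().EnumerateDevices(matches, filter);
```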
For creating a specific device, an info object must be set up with the properties of the desired device. In the following example, the serial number and the device class are used for identifying the camera. Specifying the device class limits the search to the correct transport layer. This saves computation time when using the Transport Layer Factory.
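A sketch; the serial number is a placeholder:

```cpp
// Describe the desired device by serial number and device class.
Pylon::CDeviceInfo info;
info.SetSerialNumber("21694497");
info.SetDeviceClass(Pylon::BaslerGigEDeviceClass);
Pylon::IPylonDevice* pDevice = Pylon::CTlFactory::GetInstance().CreateDevice(info);
```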
The above example can also be written in one line:
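The same sketch in chained form:

```cpp
Pylon::IPylonDevice* pDevice = Pylon::CTlFactory::GetInstance().CreateDevice(
    Pylon::CDeviceInfo().SetSerialNumber("21694497").SetDeviceClass(Pylon::BaslerGigEDeviceClass));
```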
The CreateDevice method will fail when multiple devices match the provided properties. If any one of multiple matching devices is to be created, the CreateFirstDevice method can be used.
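A sketch using CreateFirstDevice:

```cpp
// Create the first device of the GigE device class, whichever it happens to be.
Pylon::IPylonDevice* pDevice = Pylon::CTlFactory::GetInstance().CreateFirstDevice(
    Pylon::CDeviceInfo().SetDeviceClass(Pylon::BaslerGigEDeviceClass));
```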
The following sample illustrates how to create a device object for a GigE camera with a specific IP address:
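A sketch, assuming the GigE-specific device info class with its SetIpAddress() setter; the address is a placeholder:

```cpp
#include <pylon/gige/BaslerGigEDeviceInfo.h>

Pylon::CBaslerGigEDeviceInfo gigeInfo;
gigeInfo.SetIpAddress("192.168.0.100");
Pylon::IPylonDevice* pDevice = Pylon::CTlFactory::GetInstance().CreateDevice(gigeInfo);
```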
The following grab strategies involve triggering the camera device. Depending on the configuration of the camera device, the following trigger modes are supported.
Additional information regarding this topic can be found in the code sample Grab_Strategies and in the parameter documentation of the Instant Camera.
When the One by One grab strategy is used, images are processed in the order of their acquisition.
The Latest Image Only grab strategy differs from the One By One grab strategy by the size of the Output Queue, which holds only one buffer. If a new buffer has been grabbed and there is already a buffer waiting in the Output Queue, the waiting buffer is automatically returned to the Empty Buffer Queue (4.1). The newly filled buffer is then placed into the output queue. This ensures that the application is always provided with the latest grabbed image. Images that are automatically returned to the Empty Buffer Queue are called skipped images.
The Latest Images strategy extends the above strategies. It allows the user to adjust the size of the Output Queue by setting CInstantCamera::OutputQueueSize. If a new buffer has been grabbed and the output queue is full, the first buffer waiting in the output queue is automatically returned to the Empty Buffer Queue (4.1). The newly filled buffer is then placed into the output queue. This ensures that the application is always provided with the latest grabbed images. Images that are automatically returned to the Empty Buffer Queue are called skipped images. When the output queue size is set to 1, this strategy is equivalent to the Latest Image Only grab strategy. When the output queue size is set to CInstantCamera::MaxNumBuffer, this strategy is equivalent to the One By One grab strategy.
The Upcoming Image grab strategy can be used to make sure that an image is returned that was grabbed after RetrieveResult() has been called.
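A minimal sketch of selecting a grab strategy, assuming an Instant Camera object named camera; the strategy is passed to StartGrabbing():

```cpp
// For the Latest Images strategy, set the output queue size beforehand.
camera.OutputQueueSize.SetValue(2);
camera.StartGrabbing(Pylon::GrabStrategy_LatestImages);
```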
To be informed about camera device removal, the IsCameraDeviceRemoved() method can be queried or a configuration event handler can be registered. The virtual OnCameraDeviceRemoved() method is called if a camera device is removed. The device removal is only detected while the Instant Camera and therefore the attached pylon Device are open. The attached pylon Device needs to be destroyed after a device removal. This can be done using the DestroyDevice() method.
The following is an example of a configuration event handler that handles camera device removal:
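A minimal sketch of such a handler:

```cpp
#include <pylon/PylonIncludes.h>
#include <iostream>

class CSampleConfigurationEventHandler : public Pylon::CConfigurationEventHandler
{
public:
    void OnCameraDeviceRemoved(Pylon::CInstantCamera& /*camera*/)
    {
        // Called from a separate thread when the camera device is removed.
        std::cout << "Camera device removed." << std::endl;
    }
};
```

A handler of this type can be registered with camera.RegisterConfiguration(new CSampleConfigurationEventHandler, Pylon::RegistrationMode_Append, Pylon::Cleanup_Delete).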
The following example shows how a device removal is detected while the camera is accessed in a loop. The IsCameraDeviceRemoved() method can be used to check whether the removal of the camera device has caused an exception while accessing the camera device, e.g. for grabbing.
The above code snippets can be found in the code of the DeviceRemovalHandling sample.
The OnCameraDeviceRemoved call is made from a separate thread.
Basler cameras can send additional information appended to the image data, such as frame counters, time stamps, and CRC checksums. Data chunks are automatically parsed by the Instant Camera class if activated. The following example shows how to do this using a Device Specific Instant Camera class.
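A minimal sketch, assuming a CBaslerGigEInstantCamera object named camera; the parameter and enumeration names follow the GigE camera interface and may differ for other camera types:

```cpp
camera.Open();
// Enable chunk mode and select the timestamp chunk.
camera.ChunkModeActive.SetValue(true);
camera.ChunkSelector.SetValue(Basler_GigECamera::ChunkSelector_Timestamp);
camera.ChunkEnable.SetValue(true);
```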
The chunk data can be accessed via parameter members of the device specific grab result data class or using the provided chunk data node map (not shown).
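A sketch of reading a chunk value from the device-specific grab result, under the same assumptions as above:

```cpp
Pylon::CBaslerGigEInstantCamera::GrabResultPtr_t ptrGrabResult;
camera.RetrieveResult(5000, ptrGrabResult, Pylon::TimeoutHandling_ThrowException);
if (GenApi::IsReadable(ptrGrabResult->ChunkTimestamp))
{
    std::cout << "Timestamp: " << ptrGrabResult->ChunkTimestamp.GetValue() << std::endl;
}
```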
The above code snippets can be found in the code of the Grab_ChunkImage sample.
Basler GigE Vision, USB3 Vision, and IIDC 1394 cameras can send event messages. For example, when a sensor exposure has finished, the camera can send an Exposure End event to the PC. The event can be received by the PC before the image data for the finished exposure has been completely transferred. This is useful, for example, to avoid unnecessary delays: an imaged object can already be moved on as soon as the exposure has ended, without waiting for the related image data transfer to complete.
The event messages are automatically retrieved and processed by the InstantCamera classes. The information carried by event messages is exposed as nodes in the camera node map and can be accessed like "normal" camera parameters. These nodes are updated when a camera event is received. You can register camera event handler objects that are triggered when event data has been received.
The following camera event handler is used in the camera event example below; it prints the event data on the screen.
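A minimal sketch of such a handler; the cast to an integer node is illustrative:

```cpp
#include <pylon/PylonIncludes.h>
#include <iostream>

class CSampleCameraEventHandler : public Pylon::CCameraEventHandler
{
public:
    virtual void OnCameraEvent(Pylon::CInstantCamera& /*camera*/, intptr_t userProvidedId,
                               GenApi::INode* pNode)
    {
        std::cout << "Event (ID " << userProvidedId << "): " << pNode->GetName();
        GenApi::CIntegerPtr value(pNode);
        if (value.IsValid())
        {
            std::cout << " = " << value->GetValue();
        }
        std::cout << std::endl;
    }
};
```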
Handling camera events is disabled by default and needs to be activated first:
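A sketch, assuming an Instant Camera object named camera that has not been opened yet:

```cpp
// Enable the processing of camera event messages before opening the camera.
camera.GrabCameraEvents.SetValue(true);
```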
To register a camera event handler, the name of the event data node that is updated on a camera event and a user-provided ID need to be passed. The user-provided ID can be used to distinguish different events handled by the same event handler.
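A sketch of the registration; the node name follows the GigE (SFNC 1.x) naming and the ID 100 is arbitrary:

```cpp
camera.RegisterCameraEventHandler(new CSampleCameraEventHandler,
    "ExposureEndEventFrameID", 100, Pylon::RegistrationMode_Append, Pylon::Cleanup_Delete);
```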
The event of interest must be enabled in the camera. Events are then handled in the RetrieveResult() call while waiting for images.
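A sketch of enabling the Exposure End event, assuming a CBaslerGigEInstantCamera; SFNC 2.0 cameras use the EventNotification value 'On' instead of 'GenICamEvent':

```cpp
camera.EventSelector.SetValue(Basler_GigECamera::EventSelector_ExposureEnd);
camera.EventNotification.SetValue(Basler_GigECamera::EventNotification_GenICamEvent);
```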
The above code snippets can be found in the code of the Grab_CameraEvents sample.
The GenICam API provides the functionality for installing callback functions that will be called when a parameter's value or state (e.g. the access mode or value range) has changed. It is possible to install either a C function or a C++ class member function as a callback.
Each callback is installed for a specific parameter. If the parameter itself has been touched or if another parameter that can influence the state of the parameter has been changed, the callback will be fired.
The following example demonstrates how to install callbacks for the Width parameter:
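A minimal sketch of installing a C function callback via one of the GenApi Register() overloads; it assumes an open Instant Camera object named camera:

```cpp
#include <pylon/PylonIncludes.h>
#include <iostream>

void OnWidthChanged(GenApi::INode* pNode)
{
    std::cout << pNode->GetName() << " was touched." << std::endl;
}

// ... in the application code:
GenApi::INode* pWidth = camera.GetNodeMap().GetNode("Width");
GenApi::Register(pWidth, &OnWidthChanged);
```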
Callbacks for camera event data nodes can also be handled via the CCameraEventHandler::OnCameraEvent() method. Using a Camera Event Handler can be more convenient. See the Grab_CameraEvents sample for more information about how to register a Camera Event Handler.
A buffer factory can be attached to an Instant Camera object in order to use user-provided buffers. The use of a buffer factory is optional and intended for advanced use cases only. The buffer factory class must be derived from Pylon::IBufferFactory. An instance of a buffer factory object can be attached to an instance of the Instant Camera class by calling SetBufferFactory(). Buffers are allocated when StartGrabbing is called. A buffer factory must not be deleted while it is attached to the camera object, and it must not be deleted until the last buffer is freed. To free all buffers, the grab needs to be stopped and all grab results must be released or destroyed. The Grab_UsingBufferFactory code sample illustrates the use of a buffer factory.
Basler GigE cameras can be configured to send the image data stream to multiple destinations. Either IP multicasts or IP broadcasts can be used.
When multiple applications on different PCs expect to receive data streams from the same camera, one application is responsible for configuring the camera and for starting and stopping the data acquisition. This application is called the controlling application. Other applications that also expect to receive the data stream are called monitoring applications. These applications must connect to the camera in read-only mode and can read all camera parameters but cannot change them.
Device enumeration and device creation are identical for the controlling and the monitoring application. Each application must create a Camera object for the camera device from which it will receive data. The multicast device creation is realized in the same way as for unicast setups (see the earlier explanation).
Example of the configuration of an Instant Camera to act as monitor:
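A minimal sketch, assuming a CBaslerGigEInstantCamera object; the MonitorModeActive parameter must be set before the camera is opened:

```cpp
Pylon::CBaslerGigEInstantCamera camera(Pylon::CTlFactory::GetInstance().CreateFirstDevice());
camera.MonitorModeActive.SetValue(true); // open the device in read-only (monitor) mode
camera.Open();
```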
When using the Low Level API, the parameters passed to the Camera Object's Open() method determine whether an application acts as controlling or as monitoring application. The following code snippet illustrates how a monitoring application must call the Open() method:
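A sketch, assuming camera is a Low Level camera object (e.g. Pylon::CBaslerGigECamera) wrapping an IPylonDevice:

```cpp
// A monitoring application opens the device with stream access only.
camera.Open(Pylon::Stream);
```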
When using the Low Level API, the controlling application can either call the Open() method without passing any arguments (the default parameters for the Open() method make sure that the device will be opened in control and stream mode), or it can specify the access mode for the Open() method explicitly:
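A sketch of the simplest controlling-application variant, relying on the default access mode of Open():

```cpp
// Default access mode: control and stream access (camera events can be added
// via the corresponding access mode flag if needed).
camera.Open();
```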
It is important that the controlling application does not set the Exclusive flag for the access mode. Using the Exclusive flag would prevent monitoring applications from accessing the camera at all. When the controlling application also wants to receive camera events, the Events flag must be added to the access mode parameter.
The controlling application and the monitoring application must create Stream Grabber objects in the same way as is done in unicast setups. Configuring the Stream Grabber for multicasts or broadcasts is explained in the next sections.
The TransmissionType parameter of the GigE Stream Grabber class can be used to configure whether the camera sends the data stream to a single destination or to multiple destinations.
When the camera sends the image data using limited broadcasts, where the camera sends the data to the address 255.255.255.255, the data is sent to all devices in the local network. 'Limited' means that the data is not sent to destinations behind a router, e.g. to computers on the internet. To enable limited broadcasts, the controlling application must set the TransmissionType parameter to TransmissionType_LimitedBroadcast. The camera sends the data to a specific port. See the Selecting a Destination Port section for setting up the destination port.
When the camera sends the image data using subnet directed broadcasts, the camera sends the data to all devices that are in the same subnet as the camera. To enable subnet directed broadcasts, set the TransmissionType parameter to TransmissionType_SubnetDirectedBroadcast. See the Selecting a Destination Port section for setting up the destination port that receives the data from the camera.
The disadvantage of using broadcasts is that the camera sends the data to all recipients in a network, regardless of whether or not the devices need the data. The network traffic causes a certain CPU load and consumes network bandwidth even for the devices not needing the streaming data.
When the camera sends the image data using multicasts, the data is only sent to those devices that expect the data stream. A device claims its interest in receiving the data by joining a so-called multicast group. A multicast group is defined by an IP address taken from the multicast address range (224.0.0.0 to 239.255.255.255). A member of a specific multicast group only receives data destined for this group. Data for other groups is not received. Usually, network adapters and network switches are able to filter network packets efficiently at the hardware level, preventing CPU load due to multicast network traffic on those devices in the network that are not part of the multicast group.
When multicasting is enabled for pylon, pylon automatically takes care of joining and leaving the multicast groups defined by the destination IP address. Keep in mind that some addresses from the multicast address range are reserved for general purposes. The address range from 239.255.0.0 to 239.255.255.255 is assigned by RFC 2365 as a locally administered address space. Use addresses in this range if you are not sure.
To enable multicast streaming, the controlling application must set the TransmissionType parameter to TransmissionType_Multicast and set the DestinationAddr parameter to a valid multicast IP address. In addition to the address, a port must be specified. See the Selecting a Destination Port section for setting up the destination port that receives the data from the camera.
Example using the Device Specific Instant Camera for GigE:
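A sketch for the controlling application, assuming a CBaslerGigEInstantCamera named camera and that the enumeration values live in the generated Basler_GigEStreamParams namespace:

```cpp
camera.GetStreamGrabberParams().TransmissionType.SetValue(
    Basler_GigEStreamParams::TransmissionType_Multicast);
camera.GetStreamGrabberParams().DestinationAddr.SetValue("239.255.0.1"); // locally administered range
camera.GetStreamGrabberParams().DestinationPort.SetValue(49152);
```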
Example (Low Level):
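A Low Level sketch that configures the stream grabber through its node map; pStreamGrabber is assumed to be an IStreamGrabber* obtained from the camera object, and the enumeration value names are assumed to match the parameter documentation:

```cpp
GenApi::INodeMap* pStreamNodeMap = pStreamGrabber->GetNodeMap();
GenApi::CEnumerationPtr(pStreamNodeMap->GetNode("TransmissionType"))->FromString("Multicast");
GenApi::CStringPtr(pStreamNodeMap->GetNode("DestinationAddr"))->SetValue("239.255.0.1");
GenApi::CIntegerPtr(pStreamNodeMap->GetNode("DestinationPort"))->SetValue(49152);
pStreamGrabber->Open();
```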
On protocol level, multicasting involves a so-called IGMP message (IGMP = Internet Group Management Protocol). To benefit from multicasting, managed network switches should be used. These managed network switches support the IGMP protocol and only forward multicast packets if there is a device connected that has joined the corresponding multicast group. If the switch does not support the IGMP protocol, multicast is equivalent to broadcasting.
When multiple cameras are to multicast in the same network, each camera should stream to a different multicast group. Streaming to different multicast groups reduces the CPU load and saves network bandwidth if the network switches used support the IGMP protocol.
Two cases must be differentiated:
- The controlling application has already opened its Stream Grabber, and thereby configured the camera, before the monitoring application opens its Stream Grabber.
- The monitoring application opens its Stream Grabber before the controlling application opens its Stream Grabber.
For the first case, setting up a Stream Grabber for a monitoring application is quite easy. Since the controlling application has already configured the camera (i.e. the destination address and the destination port are set by the controlling application), these settings can be easily read from the camera. To let the monitoring application's Stream Grabber read the settings from the camera, the monitoring application must set the Stream Grabber's TransmissionType parameter to TransmissionType_UseCameraConfig and then call the Stream Grabber's Open() method.
Example using the Device Specific Instant Camera for GigE:
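A sketch for the monitoring application, under the same assumptions as the multicast example above:

```cpp
camera.GetStreamGrabberParams().TransmissionType.SetValue(
    Basler_GigEStreamParams::TransmissionType_UseCameraConfig);
```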
Example (Low Level):
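A Low Level sketch, under the same assumptions as the Low Level multicast example above:

```cpp
GenApi::CEnumerationPtr(pStreamGrabber->GetNodeMap()->GetNode("TransmissionType"))
    ->FromString("UseCameraConfig");
pStreamGrabber->Open();
```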
For the second case, where the monitoring application opens the Stream Grabber object before the controlling application opens its Stream Grabber, the TransmissionType_UseCameraConfig value cannot be used. Instead, the controlling application and all monitoring applications must use the same settings for the following IP destination related parameters:
- TransmissionType
- DestinationAddr
- DestinationPort
Note that when using broadcasts, the DestinationAddr parameter is read-only. pylon will configure the camera for using the correct broadcast address.
When the controlling application and the monitoring application set the destination related parameters explicitly, it does not matter which application opens the Stream Grabber first.
The destination for the camera's data is specified by the destination IP address and the destination IP port. For multicasts, the monitoring and the controlling application must configure the Stream Grabbers for the same multicast IP address. Correspondingly, for broadcasts, the monitoring and the controlling application must use the same broadcast IP address that is automatically set by pylon.
In both cases, the controlling and the monitoring application must specify the same destination port. All applications must use a port that is not already in use on any of the PCs receiving the data stream. The destination port is set by using the Stream Grabber's DestinationPort parameter.
When a monitoring application sets the TransmissionType parameter to TransmissionType_UseCameraConfig, it automatically uses the port that the controlling application has written to the corresponding camera register. In that case, the controlling application must use a port that is not in use on the PC where the controlling application is running and that is not in use on any PC where monitoring applications are running.
When the DestinationPort parameter is set to 0, pylon automatically selects an unused port. This is very convenient for applications using only unicast streaming. In the case of multicast or broadcast, a parameter value of 0 can only be used by the controlling application and only if the monitoring application uses the TransmissionType_UseCameraConfig value for the TransmissionType parameter. Since there is no guarantee that the port automatically chosen by the controlling application is not used on PCs where monitoring applications are running, we do not recommend using this automatic port selection mechanism for broadcast or multicast.
For broadcast or multicast setups, grabbing images is realized in the same way as for unicast setups. Controlling and monitoring applications must allocate memory for grabbing, register the buffers at the Stream Grabber, enqueue the buffers, and retrieve them back from the Stream Grabber. The only difference between the monitoring application and the controlling application is that only the controlling application starts and stops the image acquisition in the camera.
The pylon SDK contains a simple sample program called Grab_MultiCast. This sample illustrates how to set up a controlling application and a monitoring application for multicast.
The action command feature lets you trigger actions in multiple GigE devices (e.g. cameras) at roughly the same time or at a defined point in time (scheduled action command) by using a single broadcast protocol message (without extra cabling). Action commands are used in cameras in the same way as for example the digital input lines.
After setting up the camera parameters required for action commands, the methods Pylon::IGigETransportLayer::IssueActionCommand or Pylon::IGigETransportLayer::IssueScheduledActionCommand can be used to trigger action commands. This is shown in the sample Grab_UsingActionCommand. The Pylon::CActionTriggerConfiguration is used to set up the required camera parameters in the sample. The CActionTriggerConfiguration is provided as a header file. This makes it possible to see which parameters of the camera are changed. The code can be copied and modified for creating your own configuration classes.
Consult the camera User's Manual for more detailed information on action commands.
This section describes how to write the current values to file for those camera features that are readable and writable. It is also demonstrated how to write the saved feature values back to the device. Saving and restoring the camera features is performed by using the Pylon::CFeaturePersistence class.
Use the static Pylon::CFeaturePersistence::Save() method to save the current camera feature values to a file.
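A sketch, assuming an open camera object named camera and an arbitrary file name:

```cpp
// Save all readable and writable camera features to a file.
Pylon::CFeaturePersistence::Save("NodeMap.pfs", &camera.GetNodeMap());
```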
Use the static method Pylon::CFeaturePersistence::Load() to restore the camera values from a file.
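A sketch, under the same assumptions; the last argument enables validation of the values before they are written to the camera:

```cpp
Pylon::CFeaturePersistence::Load("NodeMap.pfs", &camera.GetNodeMap(), true);
```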
The code snippets in this section are taken from the ParametrizeCamera_LoadAndSave sample.
This section describes how to transfer gain shading data to the camera using the GenICam FileIO functionality.
Camera devices supporting the gain shading feature store the shading data as files in the camera's internal file system. These files are accessed using the GenICam Filestream classes provided in the GenApi/Filestream.h header file.
GenICam defines two char-based stream classes for easy-to-use read and write operations.
The ODevFileStream class is used for uploading data to the camera's file system. The IDevFileStream class is used for downloading data from the camera's file system.
Internally, the classes use the GenApi::FileProtocolAdapter class. The GenApi::FileProtocolAdapter class defines file based operations like open, close, read, and write.
One common parameter for these operations is the file name of the file to be used on the device file system. The file name must correspond to an existing file in the device file system. To retrieve a list of valid file names supported by the connected camera, read the entries of the "FileSelector" enumeration feature.
The camera device stores gain shading data in files named "UserGainShading1", "UserGainShading2", etc.
To upload gain shading data to the camera, use the ODevFileStream class.
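A sketch, assuming an open camera object named camera and a buffer pShadingData of dataSize bytes holding the prepared shading data:

```cpp
#include <GenApi/Filestream.h>

GenApi::ODevFileStream uploadStream(&camera.GetNodeMap(), "UserGainShading1");
uploadStream.write(pShadingData, dataSize);
uploadStream.close();
```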
This code snippet is taken from the ParametrizeCamera_Shading sample program.
Downloading shading data from the camera to a buffer is as simple as uploading shading data.
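A sketch, assuming a buffer pBuffer of bufferSize bytes:

```cpp
GenApi::IDevFileStream downloadStream(&camera.GetNodeMap(), "UserGainShading1");
downloadStream.read(pBuffer, bufferSize);
downloadStream.close();
```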
In applications, a separate thread is often dedicated to grabbing images. Typically, this grab thread must be synchronized with other threads of the application. For example, an application may want to signal the grab thread to terminate.
Synchronization can be realized by using Wait Objects. The concept of Wait Objects introduced in the Retrieving Grabbed Images section allows not only waiting until a grabbed buffer is available, but also getting informed about other events.
Wait Objects are an abstraction of operating system specific objects that can be either signaled or non-signaled. Wait Objects provide a wait operation that blocks until the Wait Object is signaled.
While the pylon interfaces return objects of the Pylon::WaitObject type, pylon provides the Pylon::WaitObjectEx class that is to be instantiated by user applications. Use the static factory method WaitObjectEx::Create() to create these wait objects.
The WaitObjectEx::Signal() method is used to signal a wait object. The WaitObjectEx::Reset() method can be used to put the Wait Object into the non-signaled state.
The Pylon::WaitObjects class is a container for Wait Objects and provides two methods of waiting for Wait Objects stored in the container:
- WaitForAll() waits until all of the Wait Objects are signaled.
- WaitForAny() waits until any one of the Wait Objects is signaled.
The following code snippets illustrate how a grab thread uses the WaitForAny() method to simultaneously wait for buffers and a termination request.
After preparing for grabbing, the application's main thread starts the grab thread and sleeps for 5 seconds.
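A sketch of this step; std::thread is used here for brevity instead of the native threading API used by the original sample, and GrabThread is a hypothetical thread function shown piecewise in the following snippets:

```cpp
#include <thread>
#include <chrono>

Pylon::WaitObjectEx terminationEvent = Pylon::WaitObjectEx::Create();
std::thread grabThread(GrabThread, std::ref(terminationEvent));
std::this_thread::sleep_for(std::chrono::seconds(5));
```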
The grab thread sets up a Wait Object container holding the StreamGrabber's Wait Object and a Pylon::WaitObjectEx. The latter is used by the main thread to request the termination of grabbing:
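A sketch, assuming streamGrabber is the prepared Stream Grabber object:

```cpp
Pylon::WaitObjects waitObjects;
waitObjects.Add(streamGrabber.GetWaitObject()); // index 0: a grabbed buffer is available
waitObjects.Add(terminationEvent);              // index 1: termination requested
```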
Then the grab thread enters an infinite loop that starts waiting for any of the Wait Objects:
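A sketch of the wait loop:

```cpp
unsigned int index = 0;
while (true)
{
    if (!waitObjects.WaitForAny(1000, &index))
    {
        continue; // timeout: neither a buffer nor a termination request within 1 s
    }
    // ... evaluate 'index' as shown below ...
}
```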
When the WaitForAny() method returns with true, the value of index is used to determine whether a buffer has been grabbed or a request to terminate grabbing is pending:
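A sketch of the dispatch inside the loop shown above:

```cpp
if (index == 0)
{
    // The stream grabber's Wait Object was signaled: retrieve the grab result.
    Pylon::GrabResult result;
    if (streamGrabber.RetrieveResult(result))
    {
        // ... process the buffer and requeue it ...
    }
}
else
{
    // The termination event was signaled: leave the grab loop.
    break;
}
```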
The main thread signals the grab thread to terminate by calling the WaitObjectEx's Signal() method:
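A sketch:

```cpp
terminationEvent.Signal();
```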
Now the main thread can join with the grab thread.
It was demonstrated in the previous section how a Pylon::WaitObjectEx can be used to signal a thread to terminate.
As an alternative to using dedicated Wait Objects to get informed about external events, the WaitObject::WaitEx() method can be used for waiting. This wait operation can be interrupted. For the Windows version of pylon, WaitEx() can be interrupted by a queued APC or an I/O completion routine. For the Linux and OS X versions of pylon, WaitEx() can be interrupted by signals.
Example:
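A sketch of an interruptible wait; the exact result type of WaitEx() is left to the API reference:

```cpp
Pylon::WaitObject& wo = streamGrabber.GetWaitObject();
// Timeout in milliseconds; the second argument makes the wait interruptible (alertable).
const auto waitResult = wo.WaitEx(1000, true);
// ... evaluate 'waitResult': signaled, timeout, or interrupted ...
```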
This code snippet has been taken from the WaitEx sample that comes with the pylon for Linux SDK.
Corresponding to the WaitObject::WaitEx() method, the Pylon::WaitObjects class provides the interruptible WaitForAnyEx() and WaitForAllEx() methods.
The following settings are recommended for applications that require image processing at a constant frame rate and with low jitter:
- Thread priorities can be raised to real-time priorities using the SetRTThreadPriority method.
- The priority of the grab loop thread that is optionally provided by the Instant Camera object can be adjusted using the GrabLoopThreadPriorityOverride and GrabLoopThreadPriority parameters.
- The priority of the internal grab engine thread can be adjusted using the InternalGrabEngineThreadPriorityOverride and InternalGrabEngineThreadPriority parameters.
The Instant Camera classes use the Low Level API for operation. That means that the previous API, now called the Low Level API, is still part of the pylon C++ API and will be in the future. The Low Level API can be used for existing applications and for rare, highly advanced use cases that cannot be covered using the Instant Camera classes. More information about how to program using the Low Level API can be found here.
Features, like 'Gain', are named according to the GenICam Standard Feature Naming Convention (SFNC). The SFNC defines a common set of features, their behavior, and the related parameter names. This ensures the interoperability of cameras from different camera vendors. Cameras compliant with the USB3 Vision standard are based on the SFNC version 2.0. Basler GigE and Firewire cameras are based on previous SFNC versions. Accordingly, the behavior of these cameras and some parameter names will be different.
If your code has to work with multiple camera device types that are compatible with different SFNC versions, you can use the GetSfncVersion() method to handle differences in parameter name and behavior. GetSfncVersion() is also supplied as a function for use with legacy code using the Low Level API.
Example for Generic Parameter Access:
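A sketch of generic parameter access, assuming an open Instant Camera object named camera; the values are illustrative:

```cpp
GenApi::INodeMap& nodemap = camera.GetNodeMap();
if (camera.GetSfncVersion() >= Pylon::Sfnc_2_0_0)
{
    GenApi::CFloatPtr gain(nodemap.GetNode("Gain"));
    gain->SetValue(12.0);
}
else
{
    GenApi::CIntegerPtr gainRaw(nodemap.GetNode("GainRaw"));
    gainRaw->SetValue(200);
}
```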
Conditional compilation can be used to handle differences in parameter name and behavior when using Native Parameter Access:
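A sketch of native parameter access; the USE_USB and USE_GIGE macros follow the convention of the pylon samples and are assumptions here:

```cpp
#if defined( USE_USB )
    camera.Gain.SetValue(12.0);     // SFNC 2.0: Gain is a Float parameter
#elif defined( USE_GIGE )
    camera.GainRaw.SetValue(200);   // previous SFNC: GainRaw is an Integer parameter
#endif
```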
The following tables show how to map previous parameter names to their equivalents defined in SFNC 2.0. Some previous parameters have no direct equivalents. There are previous parameters, however, that can still be accessed using the so-called alias. The alias is another representation of the original parameter. Usually the alias provides an Integer representation of a Float parameter.
The following code snippet shows how to get the alias:
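One possible sketch, assuming an SFNC 2.0 camera and that the alias is reachable via INode::GetAlias():

```cpp
GenApi::CFloatPtr gain(camera.GetNodeMap().GetNode("Gain"));
GenApi::CIntegerPtr gainRaw(gain->GetNode()->GetAlias()); // Integer representation of Gain
```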
The following table shows how to map changes for parameters:
Previous Parameter Name | SFNC 2.0 Equivalent | Parameter Type | Comments |
---|---|---|---|
AcquisitionFrameCount | AcquisitionBurstFrameCount | Integer | |
AcquisitionFrameRateAbs | AcquisitionFrameRate | Float | |
AcquisitionStartEventFrameID | EventFrameBurstStartFrameID | Integer | |
AcquisitionStartEventTimestamp | EventFrameBurstStartTimestamp | Integer | |
AcquisitionStartOvertriggerEventFrameID | EventFrameBurstStartOvertriggerFrameID | Integer | |
AcquisitionStartOvertriggerEventTimestamp | EventFrameBurstStartOvertriggerTimestamp | Integer | |
AutoExposureTimeAbsLowerLimit | AutoExposureTimeLowerLimit | Float | |
AutoExposureTimeAbsUpperLimit | AutoExposureTimeUpperLimit | Float | |
AutoFunctionAOIUsageIntensity | AutoFunctionAOIUseBrightness | Boolean | |
AutoFunctionAOIUsageWhiteBalance | AutoFunctionAOIUseWhiteBalance | Boolean | |
AutoGainRawLowerLimit | Alias of AutoGainLowerLimit | Integer | |
AutoGainRawUpperLimit | Alias of AutoGainUpperLimit | Integer | |
AutoTargetValue | Alias of AutoTargetBrightness | Integer | |
BalanceRatioAbs | BalanceRatio | Float | |
BalanceRatioRaw | Alias of BalanceRatio | Integer | |
BlackLevelAbs | BlackLevel | Float | |
BlackLevelRaw | Alias of BlackLevel | Integer | |
ChunkExposureTimeRaw | | Integer | ChunkExposureTimeRaw has been replaced with ChunkExposureTime. ChunkExposureTime is of type float. |
ChunkFrameCounter | | Integer | ChunkFrameCounter has been replaced with ChunkCounterSelector and ChunkCounterValue. |
ChunkGainAll | | Integer | ChunkGainAll has been replaced with ChunkGain. ChunkGain is of type float. |
ColorAdjustmentEnable | | Boolean | ColorAdjustmentEnable has been removed. The color adjustment is always enabled. |
ColorAdjustmentHueRaw | Alias of ColorAdjustmentHue | Integer | |
ColorAdjustmentReset | | Command | ColorAdjustmentReset has been removed. |
ColorAdjustmentSaturationRaw | Alias of ColorAdjustmentSaturation | Integer | |
ColorTransformationValueRaw | Alias of ColorTransformationValue | Integer | |
DefaultSetSelector | | Enumeration | See additional entries in UserSetSelector. |
ExposureEndEventFrameID | EventExposureEndFrameID | Integer | |
ExposureEndEventTimestamp | EventExposureEndTimestamp | Integer | |
ExposureTimeAbs | ExposureTime | Float | |
ExposureTimeRaw | Alias of ExposureTime | Integer | |
FrameStartEventFrameID | EventFrameStartFrameID | Integer | |
FrameStartEventTimestamp | EventFrameStartTimestamp | Integer | |
FrameStartOvertriggerEventFrameID | EventFrameStartOvertriggerFrameID | Integer | |
FrameStartOvertriggerEventTimestamp | EventFrameStartOvertriggerTimestamp | Integer | |
GainAbs | Gain | Float | |
GainRaw | Alias of Gain | Integer | |
GammaEnable | | Boolean | GammaEnable has been removed. Gamma is always enabled. |
GammaSelector | | Enumeration | The sRGB setting is automatically applied when LineSourcePreset is set to any other value than Off. |
GlobalResetReleaseModeEnable | | Boolean | GlobalResetReleaseModeEnable has been replaced with the enumeration ShutterMode. |
LightSourceSelector | LightSourcePreset | Enumeration | |
LineDebouncerTimeAbs | LineDebouncerTime | Float | |
MinOutPulseWidthAbs | LineMinimumOutputPulseWidth | Float | |
MinOutPulseWidthRaw | Alias of LineMinimumOutputPulseWidth | Integer | |
ParameterSelector | RemoveParameterLimitSelector | Enumeration | |
ProcessedRawEnable | | Boolean | ProcessedRawEnable has been removed because it is not needed anymore. The camera uses nondestructive Bayer demosaicing now. |
ReadoutTimeAbs | SensorReadoutTime | Float | |
ResultingFrameRateAbs | ResultingFrameRate | Float | |
SequenceAddressBitSelector | | Enumeration | |
SequenceAdvanceMode | | Enumeration | |
SequenceAsyncAdvance | | Command | Configure an asynchronous signal as trigger source of path 1. |
SequenceAsyncRestart | | Command | Configure an asynchronous signal as trigger source of path 0. |
SequenceBitSource | | Enumeration | |
SequenceControlConfig | | Category | |
SequenceControlSelector | | Enumeration | |
SequenceControlSource | | Enumeration | |
SequenceCurrentSet | SequencerSetActive | Integer | |
SequenceEnable | | Boolean | Replaced by SequencerConfigurationMode and SequencerMode. |
SequenceSetExecutions | | Integer | |
SequenceSetIndex | SequencerSetSelector | Integer | |
SequenceSetLoad | SequencerSetLoad | Command | |
SequenceSetStore | SequencerSetSave | Command | |
SequenceSetTotalNumber | | Integer | Use the range of the SequencerSetSelector. |
TestImageSelector | TestPattern | Enumeration | TestPattern instead of TestImageSelector is used for dart and pulse camera models. |
TimerDelayAbs | TimerDelay | Float | |
TimerDelayRaw | Alias of TimerDelay | Integer | |
TimerDelayTimebaseAbs | | Float | The time base is always 1 µs. |
TimerDurationAbs | TimerDuration | Float | |
TimerDurationRaw | Alias of TimerDuration | Integer | |
TimerDurationTimebaseAbs | | Float | The time base is always 1 µs. |
TriggerDelayAbs | TriggerDelay | Float | |
UserSetDefaultSelector | UserSetDefault | Enumeration |
The following table shows how to map changes for enumeration values:
Previous Enumeration Name | Previous Enumeration Value Name | Value Name SFNC 2.0 | Comments |
---|---|---|---|
AcquisitionStatusSelector | AcquisitionTriggerWait | FrameBurstTriggerWait | |
AutoFunctionProfile | ExposureMinimum | MinimizeExposureTime | |
AutoFunctionProfile | GainMinimum | MinimizeGain | |
ChunkSelector | GainAll | Gain | The gain value is reported via the ChunkGain node as float. |
ChunkSelector | Height | | Height is part of the image information regardless of the chunk mode setting. |
ChunkSelector | OffsetX | | OffsetX is part of the image information regardless of the chunk mode setting. |
ChunkSelector | OffsetY | | OffsetY is part of the image information regardless of the chunk mode setting. |
ChunkSelector | PixelFormat | | PixelFormat is part of the image information regardless of the chunk mode setting. |
ChunkSelector | Stride | | Stride is part of the image information regardless of the chunk mode setting. |
ChunkSelector | Width | | Width is part of the image information regardless of the chunk mode setting. |
EventNotification | GenICamEvent | On | |
EventSelector | AcquisitionStartOvertrigger | FrameBurstStartOvertrigger | |
EventSelector | AcquisitionStart | FrameBurstStart | |
LightSourceSelector | Daylight | Daylight5000K | |
LightSourceSelector | Tungsten | Tungsten2800K | |
LineSelector | Out1 | | The operation mode of an I/O-Pin is chosen using the LineMode Selector. |
LineSelector | Out2 | | The operation mode of an I/O-Pin is chosen using the LineMode Selector. |
LineSelector | Out3 | | The operation mode of an I/O-Pin is chosen using the LineMode Selector. |
LineSelector | Out4 | | The operation mode of an I/O-Pin is chosen using the LineMode Selector. |
LineSource | AcquisitionTriggerWait | FrameBurstTriggerWait | |
LineSource | UserOutput | | Use UserOutput1, UserOutput2, or UserOutput3 etc. instead. |
PixelFormat | BayerBG12Packed | | The pixel format BayerBG12p is provided by USB camera devices. The memory layout of pixel format BayerBG12Packed and pixel format BayerBG12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | BayerGB12Packed | | The pixel format BayerGB12p is provided by USB camera devices. The memory layout of pixel format BayerGB12Packed and pixel format BayerGB12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | BayerGR12Packed | | The pixel format BayerGR12p is provided by USB camera devices. The memory layout of pixel format BayerGR12Packed and pixel format BayerGR12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | BayerRG12Packed | | The pixel format BayerRG12p is provided by USB camera devices. The memory layout of pixel format BayerRG12Packed and pixel format BayerRG12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | BGR10Packed | BGR10 | |
PixelFormat | BGR12Packed | BGR12 | |
PixelFormat | BGR8Packed | BGR8 | |
PixelFormat | BGRA8Packed | BGRa8 | |
PixelFormat | Mono10Packed | | The pixel format Mono10p is provided by USB camera devices. The memory layout of pixel format Mono10Packed and pixel format Mono10p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | Mono12Packed | | The pixel format Mono12p is provided by USB camera devices. The memory layout of pixel format Mono12Packed and pixel format Mono12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | Mono1Packed | Mono1p | |
PixelFormat | Mono2Packed | Mono2p | |
PixelFormat | Mono4Packed | Mono4p | |
PixelFormat | RGB10Packed | RGB10 | |
PixelFormat | RGB12Packed | RGB12 | |
PixelFormat | RGB16Packed | RGB16 | |
PixelFormat | RGB8Packed | RGB8 | |
PixelFormat | RGBA8Packed | RGBa8 | |
PixelFormat | YUV411Packed | YCbCr411_8 | |
PixelFormat | YUV422_YUYV_Packed | YCbCr422_8 | |
PixelFormat | YUV444Packed | YCbCr8 | |
TestImageSelector | Testimage1 | GreyDiagonalSawtooth8 | GreyDiagonalSawtooth8 instead of Testimage1 is used for dart and pulse camera models. |
TriggerSelector | AcquisitionStart | FrameBurstStart |
For convenience, pylon offers a migration mode for USB camera devices. If the migration mode is activated, the changes shown in the tables above are automatically mapped, if a mapping exists. The migration mode helps when writing code that works with multiple camera device types that are compatible with different SFNC versions. However, if you are only working with SFNC 2.0 compatible cameras, it is strongly recommended to adapt existing code to be SFNC 2.0 compatible instead of using the migration mode.
The migration mode is implemented using proxy objects. If the migration mode is enabled, the call to Pylon::CInstantCamera::GetNodeMap() (or Pylon::IPylonDevice::GetNodeMap()) returns a proxy object that wraps the original node map. The node map proxy maps parameter changes in calls to GenApi::INodeMap::GetNode(). All other calls are forwarded to the original node map. Enumerations having renamed enumeration values are also wrapped by a proxy, e.g. the enumeration PixelFormat. The enumeration proxy maps value name changes in the calls GenApi::IValue::ToString(), GenApi::IValue::FromString(), and GenApi::IEnumeration::GetEntryByName(). All other calls are forwarded to the original enumeration node.
The image transport of USB camera devices and Firewire or GigE camera devices is different. Firewire and GigE camera devices automatically send image data to the PC when available. If the PC is not ready to receive the image data because no grab buffer is available, the image data sent by the camera device is dropped. For USB camera devices the PC has to actively request the image data. Grabbed images are stored in the frame buffer of the USB camera device until the PC requests the image data. If the frame buffer of the USB camera device is full, newly acquired frames will be dropped. Old images in the frame buffer of the USB camera device will be grabbed first the next time the PC requests image data. After that, newly acquired images are grabbed.
The Upcoming Image grab strategy exploits the fact that images are automatically dropped if no buffer is available (queued) on the PC when using GigE or Firewire cameras. USB camera devices work differently, as described above: old images can still be stored in the frame buffer of the USB camera device. That's why the Upcoming Image strategy cannot be used for USB camera devices. An exception will be thrown if a USB camera device is used together with the Upcoming Image grab strategy.
Image data is transferred between a PC and a USB camera device using a certain sequence of data packets. In the rare case of an error during the image transport, the image data stream between PC and USB camera device is reset automatically, e.g. if the image packet sequence is out of sync. The image data stream reset causes the Block ID delivered by the USB camera device to start again at zero. Pylon indicates this error condition by setting the Block ID of the grab result to its highest possible value (UINT64_MAX) for all subsequent grab results. A Block ID of UINT64_MAX is invalid and cannot be used in any further operations. The image data and other grab result data are not affected by the Block ID being invalid. The grabbing needs to be stopped and restarted to recover from this error condition if the application uses the Block ID. The Block ID starts at zero if the grabbing is restarted.