This section introduces several concepts that are essential for understanding the pylon C API.
The pylon C API builds upon GenApi, a software framework that provides a high-level API for generic access to all compliant digital cameras, hiding the peculiarities of the particular interface technology used. Accordingly, the application developer can focus on the functional aspects of the program to be developed. Due to the abstraction provided by GenApi, programs need not be adjusted to work with different types of camera interfaces. Even applications in which different camera interfaces are used at the same time can be realized.
The dependency of pylon C upon GenApi shows in some places, mostly where names of functions or other entities start with a GenApi prefix. Wherever this is the case, an element of the underlying GenApi layer is directly exposed to the pylon C user.
The pylon C API defines several data entities termed 'objects'. These are used to expose certain aspects of the pylon C functionality to the user. For example, there is a stream grabber object that serves the purpose of receiving image data streamed by a camera (which, in turn, is also represented by an object called a camera object).
Inside a program, every object is represented and uniquely identified by a handle. A function performing an action that involves an object is passed a handle for that object. Handles are type-safe, which is to say that handles representing different kinds of objects are of different types. Accordingly, the C language type system is able to detect errors such as passing a wrong kind of object to a function call. Furthermore, handles are unique in the sense that no two handles representing two different objects will ever be equal when they are compared. This is even true if the comparison is made between two handles of different types after they were forcefully cast to a common type.
In pylon C, physical camera devices are represented by camera objects (sometimes also referred to as device objects). A camera object handle has the type PYLON_DEVICE_HANDLE.
A camera object is used for:
The term 'transport layer' is used as an abstraction for a physical interface such as FireWire, Gigabit Ethernet (GigE), USB, or Camera Link. For each of these interfaces, there are drivers that provide access to camera devices. As an abstraction of these drivers, a transport layer provides the following functionality:
pylon C currently includes four different transport layers:
A transport layer is strictly an internal concept of the pylon C API that application writers need not be concerned with, as there is no user-visible entity related to it. This means there is no 'transport layer object' in pylon C. As every camera has exactly one transport layer, it is, for all practical purposes, considered an integral part of the camera object. However, being aware of the transport layer concept may be useful for properly understanding device enumeration and communication.
Typically, pylon C applications are event-driven. This means that such applications, or the threads running within them, will often wait for some condition to become true, for example, a buffer with image data to become available.
pylon C provides a generalized mechanism for applications to wait for externally generated events, based on the concepts of wait objects and wait object containers. Wait objects provide an abstraction layer for operating system-specific synchronization mechanisms. Events in pylon C include image data that become available at a stream grabber (see Retrieving Grabbed Images), or event data that become available at an event grabber. With wait objects, pylon C provides a mechanism for applications to wait for these events. Moreover, applications can create wait objects of their own that can be explicitly signaled.
Wait objects can be grouped into wait object containers, and wait functions are provided by pylon C for the application to wait until either any one or all wait objects in a container are signaled. This way, events originating from multiple sources can be processed by a single thread.
Wait objects are represented by handles of the PYLON_WAITOBJECT_HANDLE type, while handles of the PYLON_WAITOBJECTS_HANDLE type represent wait object containers.
A camera object, as defined by the pylon C architecture, is capable of delivering one or more streams of image data (see below for an exception). To grab images from a stream, a stream grabber object is required. Stream grabber objects cannot be created directly by an application. They are managed by camera objects, which create and pass out stream grabbers. All stream grabbers expose the very same interface, regardless of the transport mechanism they use for data transfer. This means that for all transport layers, images are grabbed from streams in exactly the same way. The details of grabbing images are described in the Grabbing Images section below.
If a camera is capable of delivering multiple data streams, its device object will provide a stream grabber for each data stream. A device object can report the number of provided stream grabbers. Stream grabber objects are represented by handles of the PYLON_STREAMGRABBER_HANDLE type. The Grabbing Images section describes their use in detail.
In addition to sending image data streams, some cameras are capable of sending event messages to inform the application about certain conditions that arise. For example, a camera may send an event message when the image acquisition process is complete within the camera, but before the image data are actually transferred out of the camera. The application might need this information to know when it is safe to start a handling system that moves the next part into position for a subsequent acquisition, without having to wait for the image data to arrive.
Event grabber objects are used to receive event messages. Retrieving and processing event messages is described below in the Handling Camera Events section.
Event grabber objects are represented by handles of the PYLON_EVENTGRABBER_HANDLE type.
If the so-called chunk mode is activated, Basler cameras can send additional information appended to the image data. When chunk mode is enabled, the camera sends an extended data stream consisting of the image data combined with additional information, such as a frame number or a time stamp. The extended data stream is self-descriptive. pylon C chunk parser objects are used for parsing the extended data stream and for providing access to the additional information. Use of chunk parser objects is explained in the Chunk Parser: Accessing Chunk Features section.
Chunk parser objects are represented by handles of the PYLON_CHUNKPARSER_HANDLE type.
The behavior of some kinds of objects (camera objects in particular) can be controlled by the application through a set of related parameters (sometimes also called features). Parameters are named entities having a value that may or may not be readable or writable by the application. Writing a new value to an object's parameter will generally modify the behavior of that object.
Every parameter has an associated type. There are currently six different types defined:
Every parameter also has an associated access mode that determines the kind of access allowed. There are currently four access modes defined:
Parameters can be both readable and writable at the same time.
Throughout this document, a distinction is made between image acquisition, image data transfer, and image grabbing. It is essential to understand the exact meaning of these terms.
The operations performed internally by the camera to produce a single image are collectively termed image acquisition. This includes, among other things, controlling the exposure of the image sensor and sensor read-out. This process eventually results in the camera being ready to transfer image data out of the camera to the computer. Image data transfer designates the transfer of the acquired data from the camera's memory to the computer through the camera's interface, e.g. FireWire or Gigabit Ethernet. The process of writing the image data to the computer's main memory is referred to as image grabbing.
When debugging a pylon application that uses GigE cameras, you may encounter heartbeat timeouts. The application must send special network packets to the camera at defined intervals. If the camera does not receive these heartbeats, it considers the connection broken and will not accept any commands from the application. Therefore, the camera's heartbeat timeout must be set to a higher value when debugging. The build topics section shows how to do this.
The pylon C runtime system must be initialized before use. A pylon based application must call the PylonInitialize() function before using any other functions of the pylon C runtime system.
Before an application exits, it must call the PylonTerminate() function to free resources allocated by the pylon C runtime system.
All pylon C API functions return a value of the GENAPIC_RESULT type (see Error Codes), defined in GenApiCError.h. The return value is GENAPI_E_OK if the function completed normally without detecting any errors. Otherwise, an error code is returned: either one of the GENAPI_E_XXX error codes defined in GenApiCError.h, if the error is detected in the GenApi layer that forms the basis of pylon C, or one of the PYLON_E_XXX error codes defined in PylonCError.h.
In addition to returning an error code, pylon C functions set up a textual error description that applications can retrieve. It consists of two parts that can be accessed via GenApiGetLastErrorMessage() and GenApiGetLastErrorDetail(). The string returned by GenApiGetLastErrorMessage() contains a concise description of the most recent error, suitable to be displayed to the user as part of an error message. Additional error information is returned by GenApiGetLastErrorDetail(); this information is intended to aid in identifying the conditions that caused the error.
This is what a typical error handler might look like, together with the macro that all programming examples use to check for error conditions and conditionally invoke it:
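A sketch modeled on the pylon C sample code; the helper name printErrorAndExit and the exact message formatting are illustrative:

```c
#include <stdio.h>
#include <stdlib.h>
#include <pylonc/PylonC.h>

/* Retrieve and print the textual description of the last error, then exit. */
static void printErrorAndExit( GENAPIC_RESULT errc )
{
    char* errMsg;
    size_t length;

    /* First query the required buffer size, then fetch the concise message. */
    GenApiGetLastErrorMessage( NULL, &length );
    errMsg = (char*) malloc( length );
    GenApiGetLastErrorMessage( errMsg, &length );
    fprintf( stderr, "Error %#08x: %s\n", (unsigned int) errc, errMsg );
    free( errMsg );

    /* The detailed description helps to identify the cause of the error. */
    GenApiGetLastErrorDetail( NULL, &length );
    errMsg = (char*) malloc( length );
    GenApiGetLastErrorDetail( errMsg, &length );
    fprintf( stderr, "%s\n", errMsg );
    free( errMsg );

    PylonTerminate();  /* Release all pylon resources before exiting. */
    exit( EXIT_FAILURE );
}

/* Evaluate a pylon C return code and bail out on any error. */
#define CHECK( errc ) do { if ( GENAPI_E_OK != (errc) ) printErrorAndExit( errc ); } while ( 0 )
```

The snippets in the remainder of this document use this CHECK() macro.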
In pylon C, camera devices are managed by means of 'camera objects'. A camera object is a software abstraction and is represented by a handle of the PYLON_DEVICE_HANDLE type. Available devices are discovered dynamically, using facilities provided by the transport layer.
Device discovery (also known as enumeration) is a two-step process. In the first step, the PylonEnumerateDevices() function returns, in its numDevices argument, the total number of camera devices detected across all interfaces. Assuming this value is N, every camera can then be accessed using a numeric index from the range [0 .. N-1]. In the second step, PylonGetDeviceInfo() is called for every index value in turn. By looking at the fields of the PylonDeviceInfo_t struct, every individual camera can be identified. A call to PylonGetDeviceInfoHandle() then translates the device index to a PYLON_DEVICE_INFO_HANDLE that can be used to query device properties. Finally, a device object (represented by a PYLON_DEVICE_HANDLE) can be created by calling PylonCreateDeviceByIndex(). A PYLON_DEVICE_HANDLE is required for all operations involving a device.
The code snippet below illustrates device enumeration and creation:
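A minimal sketch of both steps, using the CHECK() macro introduced above:

```c
size_t numDevices;
PYLON_DEVICE_HANDLE hDev;

/* Step 1: count the camera devices found on all interfaces. */
CHECK( PylonEnumerateDevices( &numDevices ) );
if ( 0 == numDevices )
{
    fprintf( stderr, "No camera devices found.\n" );
    PylonTerminate();
    exit( EXIT_FAILURE );
}

/* Step 2: inspect the device information for every index. */
for ( size_t i = 0; i < numDevices; ++i )
{
    PylonDeviceInfo_t di;
    CHECK( PylonGetDeviceInfo( i, &di ) );
    printf( "Device %u: %s %s\n", (unsigned int) i, di.VendorName, di.ModelName );
}

/* Create a device object for the first camera found. */
CHECK( PylonCreateDeviceByIndex( 0, &hDev ) );
```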
If an application is done using a device, the device handle must be destroyed:
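For example, using the hDev handle created above:

```c
CHECK( PylonDestroyDevice( hDev ) );  /* hDev must no longer be used afterwards. */
```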
Before access to camera parameters is possible, the transport layer must be initialized and a connection to the physical camera device must be established. This is achieved by calling the PylonDeviceOpen() function.
To release the connection to a device and to free all related resources, call the PylonDeviceClose() function.
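A sketch of the open/close bracket around device use; the access mode flags shown are those defined by pylon C, and a purely configuring application could pass fewer of them:

```c
/* Open the device for configuration, streaming, and event access. */
CHECK( PylonDeviceOpen( hDev, PYLONC_ACCESS_MODE_CONTROL
                             | PYLONC_ACCESS_MODE_STREAM
                             | PYLONC_ACCESS_MODE_EVENT ) );

/* ... work with the camera ... */

CHECK( PylonDeviceClose( hDev ) );
```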
This section describes how a camera object is used to configure camera device parameters. For a discussion of all relevant concepts, see Parameters. Parameters are identified by their names. Using the pylon Viewer, you can easily browse through all parameters that are available for a particular type of camera. This is described in more detail under Browsing Parameters.
All functions that work on parameters respect accessibility. If the desired kind of access is not (currently) possible, error messages are returned accordingly. It is also possible to check for sufficient accessibility beforehand, using one of the following functions: PylonDeviceFeatureIsImplemented(), PylonDeviceFeatureIsAvailable(), PylonDeviceFeatureIsReadable(), or PylonDeviceFeatureIsWritable().
The next code snippet demonstrates how to read and set an integer parameter:
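A sketch using the Width parameter; the parameter name and the value 640 are illustrative:

```c
int64_t width;

if ( PylonDeviceFeatureIsReadable( hDev, "Width" ) )
{
    CHECK( PylonDeviceGetIntegerFeature( hDev, "Width", &width ) );
    printf( "Current Width: %lld\n", (long long) width );
}
if ( PylonDeviceFeatureIsWritable( hDev, "Width" ) )
{
    CHECK( PylonDeviceSetIntegerFeature( hDev, "Width", 640 ) );
}
```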
For convenience, 32-bit variants of the integer access functions are also provided. These make it easier to handle the common case where all values are known to be 32-bit entities:
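For example:

```c
int32_t width32;

CHECK( PylonDeviceGetIntegerFeatureInt32( hDev, "Width", &width32 ) );
CHECK( PylonDeviceSetIntegerFeatureInt32( hDev, "Width", 640 ) );
```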
Setting float parameters is similar, but there is no increment:
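A sketch; the parameter name ExposureTimeAbs is illustrative and depends on the camera's SFNC version (see the mapping tables at the end of this document):

```c
double exposure;

if ( PylonDeviceFeatureIsWritable( hDev, "ExposureTimeAbs" ) )
{
    CHECK( PylonDeviceSetFloatFeature( hDev, "ExposureTimeAbs", 3500.0 ) );
    CHECK( PylonDeviceGetFloatFeature( hDev, "ExposureTimeAbs", &exposure ) );
}
```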
Setting boolean parameters is even simpler:
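For example (the parameter name GammaEnable is illustrative and not present on all cameras):

```c
if ( PylonDeviceFeatureIsWritable( hDev, "GammaEnable" ) )
{
    CHECK( PylonDeviceSetBooleanFeature( hDev, "GammaEnable", 1 ) );
}
```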
An enumeration parameter can only be set to one of the members of a predefined set:
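A sketch: enumeration values are written via their symbolic names, passed as strings, and the availability of a particular entry can be checked through the EnumEntry_<Enumeration>_<Value> pseudo-feature:

```c
/* Set the pixel format to Mono8, if this camera supports it. */
if ( PylonDeviceFeatureIsAvailable( hDev, "EnumEntry_PixelFormat_Mono8" ) )
{
    CHECK( PylonDeviceFeatureFromString( hDev, "PixelFormat", "Mono8" ) );
}
```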
The next code snippet demonstrates use of a command parameter:
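For example, issuing a software trigger (assuming the camera has been configured for software triggering):

```c
if ( PylonDeviceFeatureIsAvailable( hDev, "TriggerSoftware" ) )
{
    CHECK( PylonDeviceExecuteCommandFeature( hDev, "TriggerSoftware" ) );
}
```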
All kinds of parameters can be accessed as strings, as demonstrated by the following code snippet:
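A sketch of string-based access:

```c
char value[ 64 ];
size_t siz = sizeof value;

/* Read any parameter's value as a string... */
CHECK( PylonDeviceFeatureToString( hDev, "Width", value, &siz ) );
printf( "Width: %s\n", value );

/* ...and write it from a string. */
CHECK( PylonDeviceFeatureFromString( hDev, "Width", "640" ) );
```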
The easiest way to grab an image using the pylon C API is to call the PylonDeviceGrabSingleFrame() function. First, set up the camera using the methods described in Camera Configuration. Then call PylonDeviceGrabSingleFrame() to grab the image. It will adjust all necessary parameters and grab an image into the buffer passed. This is shown in the following code snippet:
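A sketch for a camera delivering 640 x 480 8-bit monochrome images; buffer size and timeout are illustrative:

```c
unsigned char buffer[ 640 * 480 ];
PylonGrabResult_t grabResult;
_Bool bufferReady;

/* Grab one image from stream channel 0 with a 500 ms timeout. */
CHECK( PylonDeviceGrabSingleFrame( hDev, 0, buffer, sizeof buffer,
                                   &grabResult, &bufferReady, 500 ) );
if ( bufferReady )
    printf( "Grabbed a frame, status %d.\n", (int) grabResult.Status );
else
    printf( "Timeout: no frame within 500 ms.\n" );
```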
While using PylonDeviceGrabSingleFrame() is quite easy, there are some limitations. For instance, it performs a considerable amount of setup and shutdown work on each invocation, causing overhead and increased execution time. The following sections describe the use of stream grabber objects. The order of the sections reflects the sequence in which a typical grab application will use a stream grabber object.
Stream grabber objects are managed by camera objects. The number of stream grabbers provided by a camera can be determined using the PylonDeviceGetNumStreamGrabberChannels() function. The PylonDeviceGetStreamGrabber() function returns a PYLON_STREAMGRABBER_HANDLE. Prior to retrieving a stream grabber handle, the camera device must have been opened. Note that the value returned by PylonDeviceGetNumStreamGrabberChannels() may be 0, as some camera devices, e.g. Camera Link cameras, have no stream grabber. These cameras can still be parameterized as described, but grabbing is not supported for them. Before use, a stream grabber must be opened by a call to PylonStreamGrabberOpen(). When image grabbing is finished, the stream grabber must be closed by a call to PylonStreamGrabberClose().
A stream grabber also provides a wait object for the application to be notified whenever a buffer containing new image data becomes available.
Example:
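A sketch of these steps:

```c
size_t numGrabbers;
PYLON_STREAMGRABBER_HANDLE hGrabber;
PYLON_WAITOBJECT_HANDLE hWait;

CHECK( PylonDeviceGetNumStreamGrabberChannels( hDev, &numGrabbers ) );
if ( numGrabbers > 0 )
{
    CHECK( PylonDeviceGetStreamGrabber( hDev, 0, &hGrabber ) );
    CHECK( PylonStreamGrabberOpen( hGrabber ) );

    /* Wait object used later to wait for grabbed buffers. */
    CHECK( PylonStreamGrabberGetWaitObject( hGrabber, &hWait ) );
}
```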
There is no separate function for destroying a stream grabber; its lifetime is managed by the camera object that owns the PYLON_STREAMGRABBER_HANDLE. This also means that, if the camera object owning the stream grabber is deleted by calling PylonDestroyDevice() on it, the related stream grabber handle becomes invalid.

Independent of the physical camera interface used, every stream grabber provides two mandatory parameters:

- MaxNumBuffer - the maximum number of buffers used for grabbing
- MaxBufferSize - the maximum size (in bytes) of a single buffer
A grab application must set these two parameters before grabbing begins. pylon C provides a set of convenience functions for easily accessing them: PylonStreamGrabberSetMaxNumBuffer(), PylonStreamGrabberGetMaxNumBuffer(), PylonStreamGrabberSetMaxBufferSize(), and PylonStreamGrabberGetMaxBufferSize().
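A sketch, reading the required buffer size from the camera's PayloadSize parameter:

```c
#define NUM_BUFFERS 5
int64_t payloadSize;

CHECK( PylonDeviceGetIntegerFeature( hDev, "PayloadSize", &payloadSize ) );
CHECK( PylonStreamGrabberSetMaxNumBuffer( hGrabber, NUM_BUFFERS ) );
CHECK( PylonStreamGrabberSetMaxBufferSize( hGrabber, (size_t) payloadSize ) );
```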
Depending on the transport technology, a stream grabber can provide further parameters such as streaming-related timeouts. All these parameters are initially set to reasonable default values, so that grabbing works without having to adjust them. An application can gain access to these parameters using the method described in Generic Parameter Access.
Depending on the transport layer used for grabbing images, a number of system resources may be required, for example:
A call to PylonStreamGrabberPrepareGrab() allocates all required resources and causes the camera object to change its state. For a typical camera, any parameters affecting resource requirements (AOI, pixel format, binning, etc.) become read-only after the call to PylonStreamGrabberPrepareGrab(). These parameters must be set up beforehand and cannot be changed while the camera object is in this state.
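For example:

```c
CHECK( PylonStreamGrabberPrepareGrab( hGrabber ) );  /* Allocate grabbing resources. */
```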
All pylon C transport layers utilize user-provided buffer memory for grabbing image and chunk data. An application is required to register the data buffers it intends to use with the stream grabber by calling PylonStreamGrabberRegisterBuffer() for each data buffer. This is necessary for performance reasons, allowing the stream grabber to prepare and cache internal data structures used to deal with user-provided memory. The call to PylonStreamGrabberRegisterBuffer() returns a handle for the buffer, which is used during later steps.
Example:
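A sketch allocating and registering NUM_BUFFERS buffers of payloadSize bytes each (both taken from the snippet above):

```c
unsigned char* buffers[ NUM_BUFFERS ];
PYLON_STREAMBUFFER_HANDLE bufHandles[ NUM_BUFFERS ];

for ( int i = 0; i < NUM_BUFFERS; ++i )
{
    buffers[ i ] = (unsigned char*) malloc( (size_t) payloadSize );
    CHECK( PylonStreamGrabberRegisterBuffer( hGrabber, buffers[ i ],
                                             (size_t) payloadSize,
                                             &bufHandles[ i ] ) );
}
```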
The buffer registration mechanism transfers ownership of the buffers to the stream grabber. An application must never deallocate the memory belonging to buffers that are still registered. Freeing the memory is not allowed unless the buffers are deregistered by calling PylonStreamGrabberDeregisterBuffer() first.
Every stream grabber maintains two different buffer queues, an input queue and an output queue. The buffers to be used for grabbing must be fed to the grabber's input queue. After grabbing, buffers containing image data can be retrieved from the grabber's output queue.
The PylonStreamGrabberQueueBuffer() function is used to append a buffer to the end of the grabber's input queue. It takes two parameters, a buffer handle and an optional pointer to application-specific context information. Along with the data buffer, the context pointer is passed back to the user when retrieving the buffer from the grabber's output queue. The stream grabber does not access the memory to which the context pointer points in any way.
Example:
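A sketch, using the buffer index as context information:

```c
for ( int i = 0; i < NUM_BUFFERS; ++i )
{
    /* The context value is returned untouched with the grab result. */
    CHECK( PylonStreamGrabberQueueBuffer( hGrabber, bufHandles[ i ],
                                          (void*) (intptr_t) i ) );
}
```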
After buffers have been queued, the stream grabber is ready to grab image data into them, but acquisition must be started explicitly.
To start image acquisition, use the camera's AcquisitionStart parameter. AcquisitionStart is a command parameter, which means that calling PylonDeviceExecuteCommandFeature() for the AcquisitionStart parameter sends an 'acquisition start' command to the camera.
A camera device typically provides two acquisition modes:
To be precise, the acquisition start command does not necessarily start acquisition in the camera immediately. If either external triggering or software triggering is enabled, the acquisition start command prepares the camera for image acquisition. Actual acquisition starts when the camera senses an external trigger signal or receives a software trigger command.
When the camera's continuous acquisition mode is enabled, the AcquisitionStop parameter must be used to stop image acquisition.
Normally, a camera starts to transfer image data as soon as possible after acquisition. There is no specific command to start the image transfer.
Example:
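A sketch enabling continuous mode and starting acquisition:

```c
CHECK( PylonDeviceFeatureFromString( hDev, "AcquisitionMode", "Continuous" ) );
CHECK( PylonDeviceExecuteCommandFeature( hDev, "AcquisitionStart" ) );
```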
Image data is written to the buffer(s) in the stream grabber's input queue. When a buffer is filled with data, the stream grabber places it on its output queue, from which it can then be retrieved by the user application.
There is a wait object associated with every stream grabber's output queue. This wait object allows the application to wait until either a grabbed image arrives at the output queue or a timeout expires.
When the wait operation returns successfully, the grabbed buffer can be retrieved using the PylonStreamGrabberRetrieveResult() function. It uses a PylonGrabResult_t struct to return information about the grab operation:
This also removes the buffer from the output queue. Ownership of the buffer is returned to the application. A buffer retrieved from the output queue will not be overwritten with new image data until it is placed on the grabber's input queue again.
Remember, a buffer retrieved from the output queue must be deregistered before its memory can be freed.
Use the buffer handle from the PylonGrabResult_t struct to requeue a buffer to the grabber's input queue.
When the camera ceases to send data, any buffers that have not yet been processed remain in the input queue until the PylonStreamGrabberCancelGrab() function is called. PylonStreamGrabberCancelGrab() moves all buffers from the input queue to the output queue, including any buffer currently being filled. Checking the status of the PylonGrabResult_t struct returned by PylonStreamGrabberRetrieveResult() makes it possible to determine whether a buffer has been canceled.
The following example shows a typical grab loop:
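A sketch of such a loop, using the wait object and handles from the snippets above; the loop count is illustrative:

```c
for ( int i = 0; i < 100; ++i )
{
    _Bool isReady;
    PylonGrabResult_t result;

    /* Wait up to 1000 ms for a buffer to arrive at the output queue. */
    CHECK( PylonWaitObjectWait( hWait, 1000, &isReady ) );
    if ( !isReady )
        break;  /* Timeout occurred. */

    /* Remove the buffer from the output queue. */
    CHECK( PylonStreamGrabberRetrieveResult( hGrabber, &result, &isReady ) );
    if ( !isReady )
        break;  /* Should not happen after a successful wait. */

    if ( Grabbed == result.Status )
    {
        /* Process the image data found at result.pBuffer here. */
    }

    /* Requeue the buffer for the next grab. */
    CHECK( PylonStreamGrabberQueueBuffer( hGrabber, result.hBuffer, result.Context ) );
}
```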
If the camera is set for continuous acquisition mode, acquisition should first be stopped:
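For example:

```c
CHECK( PylonDeviceExecuteCommandFeature( hDev, "AcquisitionStop" ) );
```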
After stopping the camera, you must ensure that all buffers waiting in the input queue are moved to the output queue. This is done by calling the PylonStreamGrabberCancelGrab() function, which moves all pending buffers from the input queue to the output queue and marks them as canceled.

An application should retrieve all buffers from the grabber's output queue before closing a stream grabber. Prior to deallocating their memory, deregister the buffers. After all buffers have been deregistered, call the PylonStreamGrabberFinishGrab() function to release all resources allocated for grabbing. PylonStreamGrabberFinishGrab() must not be called while there are still buffers in the grabber's input queue.
The last step is to close the stream grabber by calling PylonStreamGrabberClose().
Example:
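A sketch of the complete shutdown sequence:

```c
_Bool isReady;
PylonGrabResult_t result;

/* Move all pending buffers to the output queue... */
CHECK( PylonStreamGrabberCancelGrab( hGrabber ) );

/* ...and drain it; canceled buffers report Status == Canceled. */
do
{
    CHECK( PylonStreamGrabberRetrieveResult( hGrabber, &result, &isReady ) );
} while ( isReady );

/* Deregister the buffers, then free their memory. */
for ( int i = 0; i < NUM_BUFFERS; ++i )
{
    CHECK( PylonStreamGrabberDeregisterBuffer( hGrabber, bufHandles[ i ] ) );
    free( buffers[ i ] );
}

CHECK( PylonStreamGrabberFinishGrab( hGrabber ) );
CHECK( PylonStreamGrabberClose( hGrabber ) );
```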
A complete sample program for acquiring images with a GigE camera in continuous mode can be found here: OverlappedGrab Sample. The sample program is included in the installation archive in Samples/C/OverlappedGrab.
Using the PylonWaitObjectWait() and PylonWaitObjectWaitEx() functions, an application can wait for a single wait object to become signaled. This has already been demonstrated as part of the grab loop example presented in Retrieving Grabbed Images. However, it is much more common for an application to wait for events from multiple sources. For this purpose, pylon C defines a wait object container, represented by a PYLON_WAITOBJECTS_HANDLE handle. Wait objects can be added to a container by calling PylonWaitObjectsAdd() or PylonWaitObjectsAddMany(). Once the wait objects are added to a container, an application can wait for the wait objects to become signaled:
PylonWaitObjectsWaitForAny() and PylonWaitObjectsWaitForAnyEx() block until any single wait object in a container is signaled, while PylonWaitObjectsWaitForAll() and PylonWaitObjectsWaitForAllEx() block until all objects in the container are signaled. The following code snippets illustrate how a grab thread uses the PylonWaitObjectsWaitForAny() function to simultaneously wait for buffers and a termination request. The snippets are taken from the GrabTwoCameras sample program installed as part of the pylon C SDK.
The program grabs images for 5 seconds and then exits. First, the program creates a wait object container to hold all its wait objects. It then creates a system-dependent timer, which is transformed into a pylon C wait object. The wait object is then added to the container.
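A sketch of this setup on Windows, following the GrabTwoCameras sample; CreateWaitableTimer() and SetWaitableTimer() are Win32 functions, so <windows.h> is required:

```c
PYLON_WAITOBJECTS_HANDLE hWaitObjects;
PYLON_WAITOBJECT_HANDLE hTimerWaitObject;
HANDLE hTimer;
LARGE_INTEGER dueTime;
size_t timerIndex;

CHECK( PylonWaitObjectsCreate( &hWaitObjects ) );

/* Create a Win32 timer that fires once, 5 seconds from now. */
hTimer = CreateWaitableTimer( NULL, TRUE, NULL );
dueTime.QuadPart = -5 * 10000000LL;  /* Relative time in 100 ns units. */
SetWaitableTimer( hTimer, &dueTime, 0, NULL, NULL, FALSE );

/* Wrap the Win32 handle in a pylon wait object and add it to the container. */
CHECK( PylonWaitObjectFromW32( hTimer, 0, &hTimerWaitObject ) );
CHECK( PylonWaitObjectsAdd( hWaitObjects, hTimerWaitObject, &timerIndex ) );
```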
In this code snippet, multiple cameras are used for simultaneous grabbing. Every one of these cameras has a stream grabber, which in turn has a wait object. All these wait objects are added to the container, too. This is achieved by executing the following statements in a loop, once for every camera:
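A sketch of the per-camera statements:

```c
PYLON_WAITOBJECT_HANDLE hGrabberWait;
size_t grabberIndex;

CHECK( PylonStreamGrabberGetWaitObject( hGrabber, &hGrabberWait ) );
CHECK( PylonWaitObjectsAdd( hWaitObjects, hGrabberWait, &grabberIndex ) );
```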
At the beginning of the grab loop, PylonWaitObjectsWaitForAny() is called. The index value returned is used to determine whether a buffer has been grabbed or the timer has expired; in the latter case, the program should stop grabbing and exit:
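A sketch of the loop body:

```c
for (;;)
{
    size_t index;
    _Bool isReady;

    CHECK( PylonWaitObjectsWaitForAny( hWaitObjects, 1000, &index, &isReady ) );
    if ( !isReady )
        break;           /* Unexpected timeout. */
    if ( index == timerIndex )
        break;           /* The 5 s timer expired: stop grabbing and exit. */

    /* Otherwise, index identifies the camera whose stream grabber signaled;
       retrieve, process, and requeue its buffer here. */
}
```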
Finally, during cleanup the timer wait object is destroyed. This frees the timer handle included within it.
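For example:

```c
CHECK( PylonWaitObjectDestroy( hTimerWaitObject ) );
```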
The PylonWaitObjectsWaitForAnyEx() and PylonWaitObjectsWaitForAllEx() functions, as well as PylonWaitObjectWaitEx(), take an additional boolean argument, Alertable, which allows the caller to specify whether the wait operation should be interruptible. An interruptible wait is terminated prematurely whenever a certain asynchronous system event occurs (a user APC on Windows, or a signal on Unix). This rarely needed feature has special uses that are beyond the scope of this document.
Basler GigE Vision, USB3 Vision, and IIDC 1394 cameras used with Basler pylon software can send event messages. For example, when a sensor exposure has finished, the camera can send an end-of-exposure event to the computer. The event can be received by the computer before the image data for the finished exposure has been completely transferred. Retrieval and processing of event messages is described in this section.
Receiving event data sent by a camera is accomplished in much the same way as receiving image data. While the latter involves use of a stream grabber, an event grabber is used for obtaining events.
Event grabbers can be obtained by calling PylonDeviceGetEventGrabber().
The camera object owns event grabbers created this way and manages their lifetime.
Unlike stream grabbers, event grabbers use internal memory buffers for receiving event messages. The number of buffers can be parameterized through the PylonEventGrabberSetNumBuffers() function:
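A sketch; the buffer count is illustrative:

```c
PYLON_EVENTGRABBER_HANDLE hEventGrabber;

CHECK( PylonDeviceGetEventGrabber( hDev, &hEventGrabber ) );
CHECK( PylonEventGrabberSetNumBuffers( hEventGrabber, 20 ) );
```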
A connection to the device and all resources required for receiving events are allocated by calling PylonEventGrabberOpen(). After that, a wait object handle can be obtained for the application to be notified of any occurring events.
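A sketch of these two steps:

```c
PYLON_WAITOBJECT_HANDLE hEventWait;

CHECK( PylonEventGrabberOpen( hEventGrabber ) );
CHECK( PylonEventGrabberGetWaitObject( hEventGrabber, &hEventWait ) );
```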
Sending of event messages must be explicitly enabled on the camera by setting its EventSelector parameter to the type of the desired event. In the following example, the selector is set to the end-of-exposure event. After this, sending events of the desired type is enabled through the EventNotification parameter:
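A sketch; note that the EventNotification value is camera-dependent ('GenICamEvent' for previous SFNC versions, 'On' for SFNC 2.0 cameras, see the mapping tables at the end of this document):

```c
CHECK( PylonDeviceFeatureFromString( hDev, "EventSelector", "ExposureEnd" ) );
CHECK( PylonDeviceFeatureFromString( hDev, "EventNotification", "GenICamEvent" ) );
```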
To be sure that no events are missed, the event grabber should be prepared before event messages are enabled (see the Getting and Preparing Event Grabbers section above).
The following code snippet illustrates how to disable the sending of end-of-exposure events:
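A minimal sketch:

```c
CHECK( PylonDeviceFeatureFromString( hDev, "EventSelector", "ExposureEnd" ) );
CHECK( PylonDeviceFeatureFromString( hDev, "EventNotification", "Off" ) );
```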
Receiving event messages is very similar to grabbing images. The event grabber provides a wait object that is signaled whenever an event message becomes available. When an event message is available, it can be retrieved by calling PylonEventGrabberRetrieveEvent().
In typical applications, waiting for grabbed images and event messages is done in one common loop. This is demonstrated in the following code snippet:
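A sketch, assuming both the stream grabber's and the event grabber's wait objects were added to the hWaitObjects container (in that order, so index 0 denotes the stream grabber):

```c
for (;;)
{
    size_t index;
    _Bool isReady;

    CHECK( PylonWaitObjectsWaitForAny( hWaitObjects, 1000, &index, &isReady ) );
    if ( !isReady )
        break;  /* Timeout. */

    if ( 0 == index )
    {
        /* Image data: retrieve, process, and requeue the buffer. */
        PylonGrabResult_t grabResult;
        CHECK( PylonStreamGrabberRetrieveResult( hGrabber, &grabResult, &isReady ) );
    }
    else
    {
        /* Event data: retrieve the event message. */
        PylonEventResult_t eventResult;
        CHECK( PylonEventGrabberRetrieveEvent( hEventGrabber, &eventResult, &isReady ) );
        /* Pass eventResult to an event adapter, see the next sections. */
    }
}
```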
While the previous section explained how to receive event messages, this section describes how to interpret them.
The specific layout of event messages depends on the event type and the camera type. The pylon C API uses support from GenICam for parsing event messages. This means that the message layout is described in the camera's XML description file.
As described in the Generic Parameter Access section, a GenApi node map is created from the camera's XML description file. This node map contains node objects representing the elements of the XML file. Since the layout of event messages is also described in the camera description file, the information carried by the event messages is exposed as nodes in the node map. These can be accessed just like any other node.
For example, an end-of-exposure event carries the following information:
An event adapter is used to update the event-related nodes of the camera's node map. Updating the nodes is done by passing the event message to an event adapter.
Event adapters are created by camera objects:
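For example:

```c
PYLON_EVENTADAPTER_HANDLE hEventAdapter;

CHECK( PylonDeviceCreateEventAdapter( hDev, &hEventAdapter ) );
```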
To update any event-related nodes, call PylonEventAdapterDeliverMessage() for every event message received:
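A minimal sketch (eventResult as filled in by PylonEventGrabberRetrieveEvent()):

```c
CHECK( PylonEventAdapterDeliverMessage( hEventAdapter, &eventResult ) );
```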
The previous section described how event adapters are used to push the contents of event messages into a camera object's node map. The PylonEventAdapterDeliverMessage() function updates all nodes related to events contained in the message passed in.
As described in the Getting Notified About Parameter Changes section, it is possible to register callback functions that are called when nodes may have been changed. These callbacks can be used to determine if an event message contains a particular kind of event. For example, to get informed about end-of-exposure events, a callback for one of the end-of-exposure event-related nodes must be installed. The following code snippet illustrates how to install a callback function for the ExposureEndFrameId node:
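A sketch; the exact node name depends on the camera's SFNC version (for example, ExposureEndEventFrameID on previous GigE cameras, EventExposureEndFrameID on SFNC 2.0 cameras), so the name used here is illustrative:

```c
/* Called by pylon C whenever the node may have changed. */
static void GENAPIC_CC endOfExposureCallback( NODE_HANDLE hNode )
{
    int64_t frameId;
    if ( GENAPI_E_OK == GenApiIntegerGetValue( hNode, &frameId ) )
        printf( "End-of-exposure event, frame ID %lld\n", (long long) frameId );
}

NODEMAP_HANDLE hNodeMap;
NODE_HANDLE hNode;
NODECALLBACK_HANDLE hCallback;

CHECK( PylonDeviceGetNodeMap( hDev, &hNodeMap ) );
CHECK( GenApiNodeMapGetNode( hNodeMap, "ExposureEndEventFrameID", &hNode ) );
CHECK( GenApiNodeRegisterCallback( hNode, endOfExposureCallback, &hCallback ) );
```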
The registered callback will be called by pylon C from the context of the PylonEventAdapterDeliverMessage() function. PylonEventAdapterDeliverMessage() can issue multiple calls to a callback function when multiple events of the same type are contained in the message.

Before closing and destroying the camera object, the event-related objects must be closed, as illustrated in the following code snippet:
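A sketch of the cleanup sequence:

```c
/* Stop the camera from sending further event messages. */
CHECK( PylonDeviceFeatureFromString( hDev, "EventSelector", "ExposureEnd" ) );
CHECK( PylonDeviceFeatureFromString( hDev, "EventNotification", "Off" ) );

CHECK( GenApiNodeDeregisterCallback( hNode, hCallback ) );
CHECK( PylonDeviceDestroyEventAdapter( hDev, hEventAdapter ) );
CHECK( PylonEventGrabberClose( hEventGrabber ) );
```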
The code snippets in this chapter are taken from the 'Events' sample program (see Events Sample) included in the installation archive in Samples/C/Events.
Basler cameras are capable of sending additional information appended to the image data as chunks of data, such as frame counters, time stamps, and CRC checksums. The information included in the chunk data is presented to an application in the form of parameters that receive their values from the chunk parsing mechanism. This section explains how to enable the chunk features and how to access the chunk data.
Before a feature producing chunk data can be enabled, the camera's chunk mode must be enabled:
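For example:

```c
if ( PylonDeviceFeatureIsWritable( hDev, "ChunkModeActive" ) )
{
    CHECK( PylonDeviceSetBooleanFeature( hDev, "ChunkModeActive", 1 ) );
}
```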
After having been set to chunk mode, the camera transfers data blocks that are partitioned into a sequence of chunks. The first chunk is always the image data. When chunk features are enabled, the image data chunk is followed by chunks containing the information generated by the chunk features.
Once chunk mode is enabled, chunk features can be enabled:
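A sketch enabling the frame counter chunk; the ChunkSelector entry name is illustrative and camera-dependent:

```c
CHECK( PylonDeviceFeatureFromString( hDev, "ChunkSelector", "Framecounter" ) );
CHECK( PylonDeviceSetBooleanFeature( hDev, "ChunkEnable", 1 ) );
```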
Grabbing from an image stream with chunks is very similar to grabbing from an image stream without chunks. Memory buffers must be provided that are large enough to store both the image data and the added chunk data.
The camera's PayloadSize parameter reports the necessary buffer size (in bytes):
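For example:

```c
int64_t payloadSize;

CHECK( PylonDeviceGetIntegerFeature( hDev, "PayloadSize", &payloadSize ) );
```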
Once the camera has been set to produce chunk data, and data buffers have been set up taking into account the additional buffer space required to hold the chunk data, grabbing works exactly the same as in the 'no chunks' case.
The data block containing the image chunk and the other chunks has a self-descriptive layout. Before accessing the data contained in the appended chunks, the data block must be parsed by a chunk parser.
The camera object is responsible for creating a chunk parser:
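For example:

```c
PYLON_CHUNKPARSER_HANDLE hChunkParser;

CHECK( PylonDeviceCreateChunkParser( hDev, &hChunkParser ) );
```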
Once a chunk parser is created, grabbed buffers can be attached to it. When a buffer is attached to a chunk parser, it is parsed and access to its data is provided through camera parameters.
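A sketch: grabResult is assumed to have been returned by PylonStreamGrabberRetrieveResult(), and the chunk parameter name ChunkFramecounter is illustrative:

```c
CHECK( PylonChunkParserAttachBuffer( hChunkParser, grabResult.pBuffer,
                                     (size_t) grabResult.PayloadSize ) );

/* The chunk data is now exposed through camera parameters. */
int64_t frameCounter;
CHECK( PylonDeviceGetIntegerFeature( hDev, "ChunkFramecounter", &frameCounter ) );
```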
Chunk data integrity may be protected by an optional checksum. To check for its presence, use PylonChunkParserHasCRC().
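A sketch of the CRC check:

```c
_Bool hasCRC;

CHECK( PylonChunkParserHasCRC( hChunkParser, &hasCRC ) );
if ( hasCRC )
{
    _Bool crcOk;
    CHECK( PylonChunkParserCheckCRC( hChunkParser, &crcOk ) );
    if ( !crcOk )
        fprintf( stderr, "Image was damaged in transit (CRC mismatch).\n" );
}
```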
Before re-using a buffer for grabbing, the buffer must be detached from the chunk parser.
After detaching a buffer, the next grabbed buffer can be attached and the included chunk data can be read.
After grabbing is finished, the chunk parser must be deleted:
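A sketch of these final steps:

```c
CHECK( PylonChunkParserDetachBuffer( hChunkParser ) );
/* ... attach and process further grabbed buffers ... */
CHECK( PylonDeviceDestroyChunkParser( hDev, hChunkParser ) );
```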
The code snippets in this chapter are taken from the 'Chunks' sample program (see Chunks Sample) included in the installation archive in Samples/C/Chunks.
Callback functions can be installed that are called whenever a camera device is removed. As soon as the PylonDeviceOpen() function has been called, callback functions of the PylonDeviceRemCb_t type can be installed for it.
Installing a callback function:
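A sketch (the callback function itself is shown below):

```c
PYLON_DEVICECALLBACK_HANDLE hDevCallback;

CHECK( PylonDeviceRegisterRemovalCallback( hDev, removalCallbackFunction,
                                           &hDevCallback ) );
```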
All registered callbacks must be deregistered before calling PylonDeviceClose().
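For example:

```c
CHECK( PylonDeviceDeregisterRemovalCallback( hDev, hDevCallback ) );
CHECK( PylonDeviceClose( hDev ) );
```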
This is the actual callback function. It does nothing besides incrementing a counter.
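A minimal sketch:

```c
static int callbackCounter = 0;

/* Invoked by pylon C when the opened device has been removed. */
static void GENAPIC_CC removalCallbackFunction( PYLON_DEVICE_HANDLE hDevice )
{
    (void) hDevice;  /* The handle identifies the removed device. */
    ++callbackCounter;
}
```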
The code snippets in this section are taken from the 'SurpriseRemoval' sample program (see SurpriseRemoval Sample) included in the installation archive in Samples/C/SurpriseRemoval.
For camera configuration and for accessing other parameters, the pylon API uses the technologies defined by the GenICam standard hosted by the European Machine Vision Association (EMVA). The GenICam specification (http://www.GenICam.org) defines a format for camera description files. These files describe the configuration interface of GenICam compliant cameras. The description files are written in XML (eXtensible Markup Language) and describe camera registers, their interdependencies, and all other information needed to access high-level features such as Gain, ExposureTime, or ImageFormat by means of low-level register read and write operations.
The elements of a camera description file are represented as software objects called nodes. For example, a node can represent a single camera register, a camera parameter such as Gain, a set of available parameter values, etc. Nodes are represented as handles of the NODE_HANDLE type.
Nodes are linked together by different relationships as explained in the GenICam standard document available at www.GenICam.org. The complete set of nodes is stored in a data structure called a node map. At runtime, a node map is instantiated from an XML description, which may exist as a disk file on the computer connected to a camera, or may be read from the camera itself. Node map objects are represented by handles of the NODEMAP_HANDLE type.
Every node has a name, which is a text string. Node names are unique within a node map, and any node can be looked up by its name. All parameter access functions presented so far are actually shortcuts that get a node map handle from an object, look up a node that implements a named parameter, and finally perform the desired action on the node, such as assigning a new value. The sample code below demonstrates how to look up a parameter node with a known name. If no such node exists, GenApiNodeMapGetNode() returns an invalid handle. This case must be handled by the program; the sample code below simply treats it as an error, but a real program may want to handle it differently.
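A sketch of the lookup, using the Width parameter as an example:

```c
NODEMAP_HANDLE hNodeMap;
NODE_HANDLE hNode;

CHECK( PylonDeviceGetNodeMap( hDev, &hNodeMap ) );
CHECK( GenApiNodeMapGetNode( hNodeMap, "Width", &hNode ) );
if ( GENAPIC_INVALID_HANDLE == hNode )
{
    fprintf( stderr, "There is no feature named 'Width'.\n" );
    exit( EXIT_FAILURE );
}
```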
Nodes are generally grouped into categories, which themselves are represented as nodes of the Category type. A category node is an abstraction for a certain functional aspect of a camera, and all parameter nodes grouped under it are related to this aspect. For example, the 'AOI Controls' category might contain an 'X Offset', a 'Y Offset', a 'Width', and a 'Height' parameter node. The topological structure of a node map is that of a tree, with parameter nodes as leaves and category nodes as junctions. The sample code below traverses the tree, displaying every node found:
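A sketch of such a traversal, starting at the node map's root category (the node named 'Root'):

```c
/* Recursively print a node's name and, for categories, all children. */
static void printNodeTree( NODE_HANDLE hNode, int depth )
{
    char name[ 128 ];
    size_t siz = sizeof name;
    EGenApiNodeType nodeType;

    CHECK( GenApiNodeGetName( hNode, name, &siz ) );
    printf( "%*s%s\n", depth * 2, "", name );

    CHECK( GenApiNodeGetType( hNode, &nodeType ) );
    if ( CategoryNode == nodeType )
    {
        size_t numFeatures;
        CHECK( GenApiCategoryGetNumFeatures( hNode, &numFeatures ) );
        for ( size_t i = 0; i < numFeatures; ++i )
        {
            NODE_HANDLE hChild;
            CHECK( GenApiCategoryGetFeatureByIndex( hNode, i, &hChild ) );
            printNodeTree( hChild, depth + 1 );
        }
    }
}

/* Usage: */
NODE_HANDLE hRoot;
CHECK( GenApiNodeMapGetNode( hNodeMap, "Root", &hRoot ) );
printNodeTree( hRoot, 0 );
```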
In order to access a parameter's value, a handle for the corresponding parameter node must be obtained first, as demonstrated in the example below for an integer feature:
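A sketch, continuing with the 'Width' node looked up above:

```c
EGenApiNodeType nodeType;
int64_t value, minimum, maximum, increment;

CHECK( GenApiNodeGetType( hNode, &nodeType ) );
if ( IntegerNode != nodeType )
{
    fprintf( stderr, "'Width' is not an integer feature.\n" );
    exit( EXIT_FAILURE );
}

CHECK( GenApiIntegerGetMin( hNode, &minimum ) );
CHECK( GenApiIntegerGetMax( hNode, &maximum ) );
CHECK( GenApiIntegerGetInc( hNode, &increment ) );
CHECK( GenApiIntegerGetValue( hNode, &value ) );
CHECK( GenApiIntegerSetValue( hNode, minimum ) );
```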
So far, only camera node maps have been considered. However, there are more objects that expose parameters through node maps:
- The PylonDeviceGetTLNodeMap() function returns the node map for a device's transport layer.
- The PylonStreamGrabberGetNodeMap() function is used to access a stream grabber's parameters.
- The PylonEventGrabberGetNodeMap() function is used to access an event grabber's parameters.

Parameter access works identically for all types of node maps, and the same set of functions is used as for camera node maps. It should be noted, however, that the objects listed above, transport layers in particular, may not have any parameters at all. In this case, a call to the corresponding function returns GENAPIC_INVALID_HANDLE. Currently, this is true for the IEEE 1394 transport layer.

The pylon Viewer tool provides an easy way of browsing camera parameters, their names, values, and ranges. Besides grabbing images (not available for Camera Link cameras), it is capable of displaying all node maps for a camera device and all parameter nodes contained therein. The pylon Viewer has a Features window that displays a tree view of node maps, categories, and parameter nodes. Selecting a node in this view opens a dialog that displays the node's current value (if applicable) and may also allow changing it, subject to accessibility. There is also a Feature Documentation window, located at the very bottom of the display unless the layout was changed from the standard layout. The Feature Documentation window displays detailed information about the currently selected node.
The pylon C API provides the functionality for installing callback functions that are called whenever a parameter's value or state (e.g. its access mode or value range) changes.
Every callback is installed for a specific parameter. If the parameter itself has been touched or if another parameter that could possibly influence the state of the parameter has been changed, the callback will be invoked.
The example below illustrates how to find a parameter node and register a callback:
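A sketch; the parameter name GainRaw is illustrative:

```c
/* Called whenever the node, or a node influencing it, may have changed. */
static void GENAPIC_CC parameterChangedCallback( NODE_HANDLE hNode )
{
    char name[ 128 ];
    size_t siz = sizeof name;
    if ( GENAPI_E_OK == GenApiNodeGetName( hNode, name, &siz ) )
        printf( "Node '%s' may have changed.\n", name );
}

NODE_HANDLE hGainNode;
NODECALLBACK_HANDLE hCb;

CHECK( GenApiNodeMapGetNode( hNodeMap, "GainRaw", &hGainNode ) );
CHECK( GenApiNodeRegisterCallback( hGainNode, parameterChangedCallback, &hCb ) );
```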
As an optimization, nodes that can only change their values as a direct result of some user action (an application writing a new value) can have their values cached on the computer to speed up read access. Other nodes can change their values asynchronously, e.g. as a result of some operation performed internally by the camera. These nodes obviously cannot be cached. An application should call the GenApiNodeMapPoll() function at regular intervals. This results in the values of non-cacheable nodes being updated in the node map, which in turn may cause callbacks to be executed as explained above.
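A minimal sketch; the timestamp argument is illustrative, as a real program passes a current GenICam tick count:

```c
int64_t timestamp = 0;  /* Illustrative value. */

CHECK( GenApiNodeMapPoll( hNodeMap, timestamp ) );
```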
Basler GigE cameras can be set to send the image data stream to multiple destinations. More information on this subject can be found in the pylon C++ Programmer's Guide.
The action command feature lets you trigger actions in multiple GigE devices (e.g. cameras) at roughly the same time, or at a defined point in time (scheduled action command), by using a single broadcast protocol message (without extra cabling). Within the camera, action commands are used in the same way as, for example, the digital input lines.
After setting up the camera parameters required for action commands, the PylonGigEIssueActionCommand() or PylonGigEIssueScheduledActionCommand() functions can be used to trigger action commands.
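A sketch of issuing a broadcast action command; the argument list shown here (device key, group key, group mask, broadcast address, timeout, result count, and result array) should be treated as an assumption to be verified against the pylon C headers:

```c
uint32_t numResults = 0;
PylonGigEActionCommandResult_t results[ 32 ];

/* The keys and mask must match the values configured in the cameras. */
CHECK( PylonGigEIssueActionCommand( 0x4711     /* device key */,
                                    0x1        /* group key */,
                                    0xFFFFFFFF /* group mask */,
                                    "255.255.255.255" /* broadcast address */,
                                    0          /* timeout: don't wait for results */,
                                    &numResults, results ) );
```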
This is shown in the sample ActionCommands Sample.
Consult the camera User's Manual for more detailed information on action commands.
Features, like 'Gain', are named according to the GenICam Standard Feature Naming Convention (SFNC). The SFNC defines a common set of features, their behavior, and the related parameter names. This ensures the interoperability of cameras from different camera vendors. Cameras compliant with the USB3 Vision standard are based on the SFNC version 2.0. Basler GigE and Firewire cameras are based on previous SFNC versions. Accordingly, the behavior of these cameras and some parameter names will be different.
Code working with multiple camera device types that are compatible with different SFNC versions can read the SFNC version from a camera device to select the correct parameter names. The SFNC version can be read from the camera node map using the integer nodes DeviceSFNCVersionMajor, DeviceSFNCVersionMinor, and DeviceSFNCVersionSubMinor.
Example for selecting the parameter name depending on the SFNC version:
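A sketch selecting between the previous and the SFNC 2.0 exposure time parameter:

```c
int64_t sfncMajor = 0;
const char* exposureTimeName;

/* Cameras that don't implement DeviceSFNCVersionMajor predate SFNC 2.0. */
if ( PylonDeviceFeatureIsReadable( hDev, "DeviceSFNCVersionMajor" ) )
{
    CHECK( PylonDeviceGetIntegerFeature( hDev, "DeviceSFNCVersionMajor", &sfncMajor ) );
}

exposureTimeName = ( sfncMajor >= 2 ) ? "ExposureTime" : "ExposureTimeAbs";
CHECK( PylonDeviceSetFloatFeature( hDev, exposureTimeName, 3500.0 ) );
```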
The following tables show how to map previous parameter names to their equivalents defined in SFNC 2.0. Some previous parameters have no direct equivalents. Other previous parameters, however, can still be accessed using the so-called alias that is provided via the GenApiNodeGetAlias() function. The alias is another representation of the original parameter. Usually, the alias provides an Integer representation of a Float parameter.
The following table shows how to map changes for parameters:
Previous Parameter Name | SFNC 2.0 Equivalent | Parameter Type | Comments |
---|---|---|---|
AcquisitionFrameCount | AcquisitionBurstFrameCount | Integer | |
AcquisitionFrameRateAbs | AcquisitionFrameRate | Float | |
AcquisitionStartEventFrameID | EventFrameBurstStartFrameID | Integer | |
AcquisitionStartEventTimestamp | EventFrameBurstStartTimestamp | Integer | |
AcquisitionStartOvertriggerEventFrameID | EventFrameBurstStartOvertriggerFrameID | Integer | |
AcquisitionStartOvertriggerEventTimestamp | EventFrameBurstStartOvertriggerTimestamp | Integer | |
AutoExposureTimeAbsLowerLimit | AutoExposureTimeLowerLimit | Float | |
AutoExposureTimeAbsUpperLimit | AutoExposureTimeUpperLimit | Float | |
AutoFunctionAOIUsageIntensity | AutoFunctionAOIUseBrightness | Boolean | |
AutoFunctionAOIUsageWhiteBalance | AutoFunctionAOIUseWhiteBalance | Boolean | |
AutoGainRawLowerLimit | Alias of AutoGainLowerLimit | Integer | |
AutoGainRawUpperLimit | Alias of AutoGainUpperLimit | Integer | |
AutoTargetValue | Alias of AutoTargetBrightness | Integer | |
BalanceRatioAbs | BalanceRatio | Float | |
BalanceRatioRaw | Alias of BalanceRatio | Integer | |
BlackLevelAbs | BlackLevel | Float | |
BlackLevelRaw | Alias of BlackLevel | Integer | |
ChunkExposureTimeRaw | | Integer | ChunkExposureTimeRaw has been replaced with ChunkExposureTime. ChunkExposureTime is of type float. |
ChunkFrameCounter | | Integer | ChunkFrameCounter has been replaced with ChunkCounterSelector and ChunkCounterValue. |
ChunkGainAll | | Integer | ChunkGainAll has been replaced with ChunkGain. ChunkGain is of type float. |
ColorAdjustmentEnable | | Boolean | ColorAdjustmentEnable has been removed. The color adjustment is always enabled. |
ColorAdjustmentHueRaw | Alias of ColorAdjustmentHue | Integer | |
ColorAdjustmentReset | | Command | ColorAdjustmentReset has been removed. |
ColorAdjustmentSaturationRaw | Alias of ColorAdjustmentSaturation | Integer | |
ColorTransformationValueRaw | Alias of ColorTransformationValue | Integer | |
DefaultSetSelector | | Enumeration | See additional entries in UserSetSelector. |
ExposureEndEventFrameID | EventExposureEndFrameID | Integer | |
ExposureEndEventTimestamp | EventExposureEndTimestamp | Integer | |
ExposureTimeAbs | ExposureTime | Float | |
ExposureTimeRaw | Alias of ExposureTime | Integer | |
FrameStartEventFrameID | EventFrameStartFrameID | Integer | |
FrameStartEventTimestamp | EventFrameStartTimestamp | Integer | |
FrameStartOvertriggerEventFrameID | EventFrameStartOvertriggerFrameID | Integer | |
FrameStartOvertriggerEventTimestamp | EventFrameStartOvertriggerTimestamp | Integer | |
GainAbs | Gain | Float | |
GainRaw | Alias of Gain | Integer | |
GammaEnable | | Boolean | GammaEnable has been removed. Gamma is always enabled. |
GammaSelector | | Enumeration | The sRGB setting is automatically applied when LineSourcePreset is set to any other value than Off. |
GlobalResetReleaseModeEnable | | Boolean | GlobalResetReleaseModeEnable has been replaced with the enumeration ShutterMode. |
LightSourceSelector | LightSourcePreset | Enumeration | |
LineDebouncerTimeAbs | LineDebouncerTime | Float | |
MinOutPulseWidthAbs | LineMinimumOutputPulseWidth | Float | |
MinOutPulseWidthRaw | Alias of LineMinimumOutputPulseWidth | Integer | |
ParameterSelector | RemoveParameterLimitSelector | Enumeration | |
ProcessedRawEnable | | Boolean | ProcessedRawEnable has been removed because it is not needed anymore. The camera uses nondestructive Bayer demosaicing now. |
ReadoutTimeAbs | SensorReadoutTime | Float | |
ResultingFrameRateAbs | ResultingFrameRate | Float | |
SequenceAddressBitSelector | | Enumeration | |
SequenceAdvanceMode | | Enumeration | |
SequenceAsyncAdvance | | Command | Configure an asynchronous signal as trigger source of path 1. |
SequenceAsyncRestart | | Command | Configure an asynchronous signal as trigger source of path 0. |
SequenceBitSource | | Enumeration | |
SequenceControlConfig | | Category | |
SequenceControlSelector | | Enumeration | |
SequenceControlSource | | Enumeration | |
SequenceCurrentSet | SequencerSetActive | Integer | |
SequenceEnable | | Boolean | Replaced by SequencerConfigurationMode and SequencerMode. |
SequenceSetExecutions | | Integer | |
SequenceSetIndex | SequencerSetSelector | Integer | |
SequenceSetLoad | SequencerSetLoad | Command | |
SequenceSetStore | SequencerSetSave | Command | |
SequenceSetTotalNumber | | Integer | Use the range of the SequencerSetSelector. |
TestImageSelector | TestPattern | Enumeration | TestPattern instead of TestImageSelector is used for dart and pulse camera models. |
TimerDelayAbs | TimerDelay | Float | |
TimerDelayRaw | Alias of TimerDelay | Integer | |
TimerDelayTimebaseAbs | | Float | The time base is always 1 µs. |
TimerDurationAbs | TimerDuration | Float | |
TimerDurationRaw | Alias of TimerDuration | Integer | |
TimerDurationTimebaseAbs | | Float | The time base is always 1 µs. |
TriggerDelayAbs | TriggerDelay | Float | |
UserSetDefaultSelector | UserSetDefault | Enumeration | |
The following table shows how to map changes for enumeration values:
Previous Enumeration Name | Previous Enumeration Value Name | Value Name SFNC 2.0 | Comments |
---|---|---|---|
AcquisitionStatusSelector | AcquisitionTriggerWait | FrameBurstTriggerWait | |
AutoFunctionProfile | ExposureMinimum | MinimizeExposureTime | |
AutoFunctionProfile | GainMinimum | MinimizeGain | |
ChunkSelector | GainAll | Gain | The gain value is reported via the ChunkGain node as float. |
ChunkSelector | Height | | Height is part of the image information regardless of the chunk mode setting. |
ChunkSelector | OffsetX | | OffsetX is part of the image information regardless of the chunk mode setting. |
ChunkSelector | OffsetY | | OffsetY is part of the image information regardless of the chunk mode setting. |
ChunkSelector | PixelFormat | | PixelFormat is part of the image information regardless of the chunk mode setting. |
ChunkSelector | Stride | | Stride is part of the image information regardless of the chunk mode setting. |
ChunkSelector | Width | | Width is part of the image information regardless of the chunk mode setting. |
EventNotification | GenICamEvent | On | |
EventSelector | AcquisitionStartOvertrigger | FrameBurstStartOvertrigger | |
EventSelector | AcquisitionStart | FrameBurstStart | |
LightSourceSelector | Daylight | Daylight5000K | |
LightSourceSelector | Tungsten | Tungsten2800K | |
LineSelector | Out1 | | The operation mode of an I/O pin is chosen using the LineMode selector. |
LineSelector | Out2 | | The operation mode of an I/O pin is chosen using the LineMode selector. |
LineSelector | Out3 | | The operation mode of an I/O pin is chosen using the LineMode selector. |
LineSelector | Out4 | | The operation mode of an I/O pin is chosen using the LineMode selector. |
LineSource | AcquisitionTriggerWait | FrameBurstTriggerWait | |
LineSource | UserOutput | | Use UserOutput1, UserOutput2, or UserOutput3 etc. instead. |
PixelFormat | BayerBG12Packed | | The pixel format BayerBG12p is provided by USB camera devices. The memory layout of pixel format BayerBG12Packed and pixel format BayerBG12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | BayerGB12Packed | | The pixel format BayerGB12p is provided by USB camera devices. The memory layout of pixel format BayerGB12Packed and pixel format BayerGB12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | BayerGR12Packed | | The pixel format BayerGR12p is provided by USB camera devices. The memory layout of pixel format BayerGR12Packed and pixel format BayerGR12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | BayerRG12Packed | | The pixel format BayerRG12p is provided by USB camera devices. The memory layout of pixel format BayerRG12Packed and pixel format BayerRG12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | BGR10Packed | BGR10 | |
PixelFormat | BGR12Packed | BGR12 | |
PixelFormat | BGR8Packed | BGR8 | |
PixelFormat | BGRA8Packed | BGRa8 | |
PixelFormat | Mono10Packed | | The pixel format Mono10p is provided by USB camera devices. The memory layout of pixel format Mono10Packed and pixel format Mono10p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | Mono12Packed | | The pixel format Mono12p is provided by USB camera devices. The memory layout of pixel format Mono12Packed and pixel format Mono12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | Mono1Packed | Mono1p | |
PixelFormat | Mono2Packed | Mono2p | |
PixelFormat | Mono4Packed | Mono4p | |
PixelFormat | RGB10Packed | RGB10 | |
PixelFormat | RGB12Packed | RGB12 | |
PixelFormat | RGB16Packed | RGB16 | |
PixelFormat | RGB8Packed | RGB8 | |
PixelFormat | RGBA8Packed | RGBa8 | |
PixelFormat | YUV411Packed | YCbCr411_8 | |
PixelFormat | YUV422_YUYV_Packed | YCbCr422_8 | |
PixelFormat | YUV444Packed | YCbCr8 | |
TestImageSelector | Testimage1 | GreyDiagonalSawtooth8 | GreyDiagonalSawtooth8 instead of Testimage1 is used for dart and pulse camera models. |
TriggerSelector | AcquisitionStart | FrameBurstStart | |
The image transport of USB camera devices and Firewire or GigE camera devices is different. Firewire and GigE camera devices automatically send image data to the PC when available. If the PC is not ready to receive the image data because no grab buffer is available, the image data sent by the camera device is dropped. For USB camera devices the PC has to actively request the image data. Grabbed images are stored in the frame buffer of the USB camera device until the PC requests the image data. If the frame buffer of the USB camera device is full, newly acquired frames will be dropped. Old images in the frame buffer of the USB camera device will be grabbed first the next time the PC requests image data. After that, newly acquired images are grabbed.
Image data is transferred between a PC and a USB camera device using a certain sequence of data packets. In the rare case of an error during the image transport, the image data stream between PC and USB camera device is reset automatically, e.g. if the image packet sequence is out of sync. The image data stream reset causes the Block ID delivered by the USB camera device to start again at zero. Pylon indicates this error condition by setting the Block ID of the grab result to its highest possible value (UINT64_MAX) for all subsequent grab results. A Block ID of UINT64_MAX is invalid and cannot be used in any further operations. The image data and other grab result data are not affected by the Block ID being invalid. The grabbing needs to be stopped and restarted to recover from this error condition if the application uses the Block ID. The Block ID starts at zero if the grabbing is restarted.
Calling the PylonStreamGrabberCancelGrab() function resets the image stream between PC and USB camera device, too. Therefore, the value of the Block ID is set to UINT64_MAX for all subsequent grab results after calling PylonStreamGrabberCancelGrab().